When I’m teaching my middle school students how to conduct research, we start by crafting “un-googleable” questions: deep or complex questions whose answers can’t be found with a single search. Then we move on to using databases like Google Scholar or EBSCOhost to find peer-reviewed articles on their topics. We discuss how to choose and refine keywords, and how to use operators like NOT and quotation marks to narrow results to exact phrases. Next, we evaluate sources for relevancy, accuracy, purpose, and authority. Finally, we create annotated bibliographies that record where each source came from and summarize how its information relates to the topic of interest. The first time through, this process can take up to three hours of class time.

With the rise of artificial intelligence, my students could go to one of the many new AI research assistants, type in their questions as complete sentences, and receive the equivalent of an annotated bibliography with vetted sources in a matter of seconds. The tool will even explain how it evaluated each source (for credibility, accuracy, and so on). With all these new tools available, what are the enduring ideas and skills students will need to be knowledge constructors and meaning-makers (ISTE, 2020, Standard 1.3.c)?

Academic researchers have only just begun to discuss the ramifications of using AI to conduct research. “Participants suggested that searching and summarizing papers were the kinds of tasks that were ‘time consuming’, involving ‘endless drudgery’ (Physical, Natural and Life Science 21; Arts and Humanities 04). While another felt that those same skills defined them as a researcher ‘some people dedicate their life to learning those skills’ (Psychology 12)” (Chubb et al., 2022). The skills of an academic researcher include identifying key search terms and reading, processing, summarizing, and evaluating the abstracts of many papers. If AI does all of that for you, are you still a researcher? Is asking a question and reading the answer the same as doing your own research?

Perhaps asking a machine to synthesize a compilation of abstracts is too heavy a hand for AI right now. Instead, we might ask AI to do some of the categorizing and source-finding rather than summarizing the content itself. Even then, it can be hard to trust the machine to organize information and recommend resources without bias. “Care is needed to avoid approaching this question with the assumption that all is working well without pausing to criticize assessment, metrics, the application of narrow criteria in indicators for impact, research integrity, reproducibility for narrowing diversities, for encouraging systemic bias against types of research or researchers, or diverting attention toward only that which is valued or trackable rather than what is precious and valuable” (Chubb, 2017, as cited in Chubb et al., 2022). As with all AI and machine learning systems, it is important to consider which values are being highlighted and whom a system will benefit or harm. It is worth remembering, however, that most search engines and academic databases already use processes involving, or related to, AI and machine learning to deliver results to the searcher. The most significant shift from current methods to the ones now emerging is that plain-language requests replace precisely chosen search terms and advanced search features as the way to narrow results. With this in mind, I am confident that using AI to conduct research does not replace the skills and strategies we have been teaching, merely the platform we have been using. Transferring ideas from one mode or medium to another is a 21st-century skill (ISTE, 2020, Standard 1.1.d).

The final leap in using AI for a project is deciding how and when to apply it. Nature, the scientific journal, has adopted two rules regarding Large Language Models (LLMs), AI writing tools such as ChatGPT:

1. No LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.

2. Researchers using LLM tools should document this use in the methods or acknowledgments sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.

(Nature, 2023)

Teachers are responsible for sparking curiosity and fostering it by giving students the tools to find answers themselves. We need to teach them to conduct research using traditional methods, and then give them the means to do it efficiently with AI. Let’s empower our students to be knowledge constructors and meaning-makers.

Tips for Teachers who want to introduce AI into research practices

1. Explicitly teach the steps of conducting research before adding in AI assistants.
2. Share the risks of letting AI do the heavy lifting, including the implicit biases that may be at play.
3. Compare results across a variety of databases and search tools so students see a breadth of sources and can spot gaps in the results.
4. Create a standard method for noting how and when AI was used.


Chubb, J., Cowling, P., & Reed, D. (2022). Speeding up to keep up: Exploring the use of AI in the research process. AI & Society, 37, 1439–1457. https://doi.org/10.1007/s00146-021-01259-0

International Society for Technology in Education. (2020). ISTE standards: Coaches. ISTE. https://www.iste.org/standards/iste-standards-for-coaches

Nature Publishing Group. (2023, January 24). Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. https://www.nature.com/articles/d41586-023-00191-1
