Generate, Verify, Reflect (GVR)
A process to support research and information literacy
Intro
In the following process, we will focus on what a typical classroom research cycle could look like as one way to practice information literacy. The goal is that by engaging in this process, students can begin to develop dispositions that extend beyond academic tasks and into their adult lives.
The Generate, Verify, Reflect process (below) may appear simple at first glance, but the key idea is that all three steps are essential whenever we encounter new information, especially from AI platforms. Often, students stop after the first step, accepting what they see without verifying its accuracy or reflecting on their own assumptions. What I want to make explicit is that teachers must intentionally create time for all three stages if we hope to help students cultivate genuine information literacy skills with AI.

Generate
At this stage, we want students to be critical of the information and of the AI model in the moment; consulting other sources comes later, in the second step of the process. Here, AI is the source, and students are learning to treat it like one.
Use the ROBOT Test to evaluate the AI tools themselves.
We do not assume everything we read is true; we keep in mind that it might represent a biased opinion, and we use our common sense: Does it feel right?
We push back against the AI when it simply affirms our responses, asking it to help us understand rather than to please us. Students can also interrogate the AI: When was it last trained? Does it cite sources? Could the phrasing include a hallucination?
We consider which AI tool we select to find information and ask: Does this AI reason or check its information? Free AI tools are often more prone to mistakes, whereas more advanced models that take longer to reason tend to be more accurate.
We ask the AI to propose alternative perspectives or opposing viewpoints on the information it generates, recognizing that any response reflects a particular point of view. Asking “whose reality does this represent?” helps us see the information as one angle rather than the whole picture.
Verify
Now that we have encountered information that seems reliable, we identify its claims, references, or data points and verify them against alternative sources. Additional things we do during this step include:
We deliberately slow down, pausing to consider the information we are encountering rather than rushing to complete a task efficiently.
We consult more than one source and do not stop with ChatGPT’s answers alone; this includes seeking out human perspectives from a trusted teacher, expert, or community member who can offer lived experience or professional knowledge that no AI model can replicate.
We analyze what we have read by using the CRAAP Test.
We look up the information we encounter in one or more other independent places that use a different publishing process (e.g., a peer-reviewed article, a trusted website, a credible author).
We trace a claim back to its original source rather than trusting a summary of it. An article, an AI, or even a textbook can misrepresent what a source actually says, and reading the original is often the fastest way to find out.
Reflect
As a final step, we want to ensure that we are avoiding cognitive biases and echo chambers. The essential thing is that time is allocated and the conditions for reflective thought are right: students are calm and alert, not rushed at the end of class just before the bell rings. When we reflect on the overall process, we encourage metacognition, which can be an extremely powerful move.
Example questions teachers can share with students to encourage reflection on biases include:
Default-to-Truth: What evidence or source did I find that could have made me doubt or disprove this claim?
AI Omniscience Bias: What might this AI not have known or been unable to do, and how did I check the information with a non-AI source?
Confirmation Bias: What trustworthy sources did I find that disagreed with what I believed or what the AI said?
Availability Heuristic & Algorithmic Amplification: Did this claim show up because it was true, or because it was recent, dramatic, or recommended to me based on my “likes” or interests (i.e., algorithmic recommendations)?
AI Sycophancy Bias: Could it be that the AI was telling me what I wanted to hear at any point in our conversation? When I asked the AI to disagree with me or give the opposite view, what did it say, and did that change my thinking?


