The Metaphors We Use for AI
How we talk about AI, and why it matters in schools
We’re three years into the AI experiment, and I’ve noticed something during that time: the way we talk about AI keeps shifting. The metaphors change from month to month, and they shape the conversation more than we realize. A lot of authors and speakers pick one analogy and argue that it’s the best way to understand what AI is and what it means for education. In this meta-article, I want to do something different. I want to look at the range of metaphors we use, unpack what each one highlights, and consider their limitations.
This article isn’t trying to crown a single “right” metaphor. It’s a case for building a metaphor toolkit we can use deliberately. Metaphors are helpful because they make complex ideas feel manageable. But they also push our thinking in certain directions. They can narrow what we notice, shape what we assume, and set limits on the questions we think to ask.
So today I’m going to look at five categories of AI metaphors. For each one, I’ll explore how it helps our thinking, how it limits our thinking, and when it’s useful.
Category 1: Calming Metaphors
This is perhaps the first form of metaphor that I heard being used in 2022 when ChatGPT exploded onto the scene. This section is dedicated to metaphors meant to address skepticism or reluctance.
Common phrases you’ll hear in this category of thinking are:
“We’ve been here before, it’s like when everyone freaked out over the calculator.”
“ChatGPT is just the next Google Search or Wikipedia, remember when those were new?”
1) How this category helps our thinking.
One metaphor I hear a lot is the idea that we’ve been here before, and the comparison people reach for is the calculator. It’s the metaphor we can probably laugh about the most, because the calculator is considered such a common technology at this point. Check out this newspaper article about protests against calculators in lower elementary school.
The “AI as calculator” metaphor is often used to lower the temperature. When ChatGPT started writing essays on demand, it rattled assessment overnight and raised big questions. What does it mean to “know” something? What evidence counts? What is the role of feedback? The calculator comparison gives educators a familiar storyline: a disruptive tool shows up, schools adapt, and eventually it becomes normal.
And calculators aren’t the only example. Schools have pivoted before in plenty of other ways. We pivoted hard during COVID when learning moved online. We pivoted again when 1:1 programs put a device in every student’s hands and laptops became part of daily classroom life. Those shifts were messy, uneven, and sometimes frustrating, but we adjusted.
2) How this category limits our thinking.
The calculator comparison breaks down in two important ways.
First, calculators are narrow tools with predictable outputs. They don’t generate ideas, rewrite arguments, imitate voice, or steer a student toward a particular conclusion. Calculators don’t do relationships. Even Joaquin Phoenix wouldn’t date a TI-83.
Second, calming metaphors can turn into minimizing metaphors. If we treat AI like “just another calculator moment,” we risk missing what’s actually new here: a system that participates in language, produces plausible explanations, and can shape student thinking while sounding confident. That creates different risks and demands different literacies. Verification, source judgment, and attention to persuasion become central, not optional.
This is why, as Dr. Sabba Quidwai has argued (paraphrasing), AI isn’t “like a calculator.” A calculator executes. Generative AI responds. Sometimes it’s brilliant, sometimes it’s wrong, and it can be persuasive either way. If we stop at “we’ve been here before,” we may calm people into staying put when they actually need to learn new skills and redesign curriculum, assessment, and norms with eyes open.
3) Suggested use of this type of metaphor.
I would use this trope sparingly when a community is anxious and new to AI, as a way to reduce panic and open dialogue. Avoid it when the goal is serious planning, because it can oversimplify what makes generative AI uniquely powerful and uniquely risky.
Calming metaphors are helpful because they reconnect us to a belief: we can handle this. They’re especially useful at the beginning of the AI journey, when people are panicking and need to remember, “we’ve got this.” The takeaway is that we need to move through our first emotional reaction so we can return to a sense of efficacy and begin a real dialogue with students about how AI is changing the world they’re growing up in. But calming metaphors become a problem when they’re the only way we talk about AI. The calculator metaphor, for example, isn’t really a claim that the technologies are alike; it’s a reminder of a familiar moment: disruption, panic, adjustment. Useful, yes, as long as it moves us past knee-jerk reactions and into the work of learning, adapting, and having meaningful classroom dialogue about what AI is for, how it’s being used, and what still needs to remain human. As you’ll see in the sections that follow, other metaphors can help us operationalize AI in practice instead of getting stuck in the emotional first response.
Category 2: AI as a Tool
Common phrases you’ll hear in this category of thinking are:
“It’s a tool.”
“AI is like a knife. It’s dangerous when misused, but it’s powerful when used appropriately.”
“A hammer won’t build the house for you; it’s the carpenter’s skill that matters.”
“AI is like a car, and you need a license to know the rules of the road.”
1) How this category helps our thinking.
Tool language is useful because it makes AI feel manageable. Teachers understand tools. Tools are chosen for a purpose. Tools belong in a workflow. This metaphor helps schools move from panic to practice by asking practical questions: What is this for? Who should use it? When is it helpful? What is the cost? It also moves responsibility back onto humans. A hammer does not build a house by itself, and it does not get credit or blame. The person using it does. That mindset can be grounding when people are tempted to treat AI output as truth or treat the model as the decision-maker.
Tool metaphors also make space for safety. We already teach students that powerful tools come with rules. You do not hand a first grader a saw in your makerspace. You teach basic use, supervision, and responsible handling. AI fits that same pattern. If we treat it like a tool, then we naturally ask about purpose and function, training, and age-appropriate use.
2) How this category limits our thinking.
The tool metaphor can also oversimplify what generative AI actually is. Tools are neutral and depend on the user. Knives don’t talk back or persuade you to pursue flawed ideas; generative AI does, and often in a biased way, depending on its training data.
Tool language can also push people toward a vending machine mindset. Put in a prompt, get out a product. That is where shallow work lives and where Prompt-Copy-Paste becomes the default. It can also hide the fact that AI is not a neutral tool. It carries patterns from its training data, and that can show up as bias and distortions. In short, when we call it “just a tool,” we can accidentally train students to treat output as something you either accept or reject, rather than something you interrogate, negotiate, or iterate upon. More on this in the 5th category.
3) Suggested use of this type of metaphor.
Use this metaphor when your goal is to build basic readiness. It is especially helpful for setting guardrails, introducing responsible use, and clarifying that humans own the goals and the final decisions about AI output. Pair it with clear limits so AI does not become just another way to “do school,” get a grade, and move on.
Category 3: AI as a Threat
Common phrases you’ll hear in this category of thinking are:
“The weaponization of AI could turn society into Foucault’s Panopticon.”
“We’re letting the genie out of the bottle.”
“AI is Pandora’s box.”
“We’re unleashing the Terminator or even Frankenstein’s Monster: our creation that’s going to turn against us!”
“We are in an AI arms race!”
1) How this category helps our thinking.
Threat metaphors are useful because they surface many risks that we should consider, like surveillance, manipulation, data misuse, deepfakes, disinformation, and inequality. They also remind us that systems do not need to be evil to cause harm. Sometimes harm comes from convenience, lazy policy, or unintentional adoption.
In education, threat language helps us protect what matters most: students learning and growing into good humans. It pushes us to ask hard questions about student privacy, consent, and what data is being collected. Threat metaphors raise the alarm on dependency and cognitive offloading, where students lose the habit of wrestling with ideas because an answer is always available. They also call attention to power. Who controls the tools? Who benefits? Who gets left behind? How is it impacting our environment?
2) How this category limits our thinking.
When we frame AI in the language of threats, schools often respond with bans, blanket rules, and fear-based messaging. The result is that students hide their AI use, and it becomes harder to teach them how to use it well. Some students start to equate any AI use with cheating, even when it’s being used in a productive, transparent way.
Furthermore, there’s a difference between a student using AI to get feedback on writing and a school deploying surveillance-style analytics on every click. If we only speak in the language of threats, we miss the chance to build AI literacy, which is the best protection we have.
3) Suggested use of this metaphor.
Use this metaphor when you need to raise standards and slow down adoption. Avoid using it as the main classroom narrative for students, especially younger ones, because fear is not a foundation for good judgment.
Category 4: Productivity
Common phrases you’ll hear in this category of thinking are:
“It’s an amplifier.”
“AI is a spark for my ideas and thinking.”
“It’s magic… and it can save you time as a teacher.”
1) How this category helps our thinking.
Productivity metaphors are popular because they are often true. AI can speed up drafting, brainstorming, differentiation, translation, and formatting. For teachers drowning in work, this category of language resonates. It also helps people see that AI can amplify good practice. A strong teacher with clear goals can use AI to generate options faster and then apply professional judgment.
This category also helps with momentum. Schools need quick wins to get started. Productivity is the quickest win. Used well, it can free teachers to spend more time on relationships, feedback, and the human work of teaching.
2) How this category limits our thinking.
These metaphors can reinforce a culture where efficiency is the goal. In classrooms, that plays out when students use AI to skip the messy middle: the struggle, the uncertainty, the revision. The output looks impressive, but the thinking is superficial.
At the school level, the same mindset can shift us toward quantity over quality. More lessons, emails, and rubrics. More content. If we’re not careful, AI becomes a machine for producing more school slop (low-quality content that nobody wanted to write and nobody wants to read, including the person who initiated it).
This framing also downplays risk. When the primary goal is saving time, people verify less, reflect less, and rely on shortcuts. And we shouldn’t assume the “time saved” automatically turns into more time building relationships with learners. In many schools, it simply becomes the capacity to do more administrative work, because the workload expands to match whatever efficiency we create. After all, time saved during a prep block doesn’t turn into more contact time with students.
3) Suggested use of this metaphor.
Use this metaphor when your goal is teacher workload relief or rapid iteration, especially behind the scenes. Be cautious using it as the dominant student-facing metaphor, because it can habituate students to optimize for speed over depth. Remember, adult use of AI is different than student use of AI, and how you speak about and model AI is important.
Category 5: AI as a Teammate
This is one of the most common metaphor categories I hear in schools right now. It shows up whenever people are trying to move the conversation past “AI is just a tool” and toward something more like shared work, shared thinking, and shared responsibility. All of the examples below have something in common: they use humanized relationship and role language. They also encourage us to think less about output and more about how AI fits into the learning process.
Common phrases you’ll hear in this category of thinking are:
“AI is an intern.”
“It’s a co-pilot.”
“AI is your collaborator.”
“It’s a thought partner.”
“It’s a coach or personal trainer; those people don’t do the work for you, but they help you build muscles. AI can do that for your mind like a tutor.”
1) How this category helps our thinking.
Jeremy Utley from Stanford University talks about this category of AI metaphor a lot. His video and follow-up blog post are linked in the “Where to Learn More” section below. One message of his that sticks with me is simple: do not use AI, work with it, treat it like a teammate. The best part starts around the six-minute mark.
The teammate metaphor encourages us to think in terms of delegation. In any good team, we do not let one member do all the work while everyone else watches. We assign roles, we make expectations explicit, we make sure each person is an active contributor, etc. That matters even more with AI, because we are effectively placing expert-level intelligence next to a developing learner. AI can outpace a student’s thinking simply because it can generate faster, wider, and with more confidence than most kids can.
So the question is not “Can AI do this task?” Of course it can. The better question is “Who is doing what, and what thinking still belongs to the student?”
At the same time, teammate language does not make AI harmless. I actually appreciate the tool metaphors above because they encourage respect: knives, hammers, and fire are useful in the right hands and dangerous in the wrong ones. But we are not dealing with a tool waiting to be picked up, nor are we dealing with a predictable calculator that follows a linear sequence. We are dealing with emergent intelligence that can associate, remix, and improvise, and that, in the not-too-distant future, will likely reason, feel, and have some level of sentience.
That is why the real skill is purposeful collaboration as a form of AI literacy and fluency: an ongoing conversation with pushback, follow-up questions, and feedback. That mindset helps us avoid unhealthy processes like Prompt-Copy-Paste (it sounds like a dystopian Thinking Routine, doesn’t it?).
“I don’t use AI. I work with AI.”
2) How this category limits our thinking.
Collaboration language can blur accountability, just like we see in group work: students sometimes hide behind more active partners. AI can become a similar cover unless expectations are explicit within the learning process.
It can also encourage over-anthropomorphizing. When AI sounds human, it is easy to treat its outputs as intentional or authoritative in what one might call AI omniscience bias. That can reduce healthy skepticism and make it harder for students to practice verification and careful judgment. Furthermore, AI can dominate the direction of a task unless the process is designed to protect student agency and voice. So this brings me back to one of my central beliefs: the process needs to be designed to support intentional use of AI as it collaborates with us on specific thinking moves within our routines.
3) Suggested use of this type of metaphor.
Use this metaphor when your goal is to promote students using AI in specific and intentional ways in the learning experience. It is especially useful for teaching students that AI can contribute to brainstorming, feedback, revision, and practice while students remain responsible for the thinking and decisions.
To use it well, pair it with explicit norms:
Define roles for AI and roles for humans before students begin work.
Create purpose-built bots that help students use AI while following the intended process.
Name the thinking moves that must remain human and where AI is allowed to collaborate.
Require evidence of student reasoning and decision-making, not just a finished product (i.e., document the process).
A Monday-Ready Conclusion
Choose your metaphors like you choose your instructional moves: based on the moment and the outcome you want.
Use calming metaphors when your community is anxious and you need to lower the temperature enough to begin a real conversation, but do not let “we’ve been here before” become an excuse to avoid updating curriculum and assessment.
Use tool metaphors when you need clarity and guardrails, especially when setting expectations and shared responsibility.
Use threat metaphors when you need to slow down adoption and take risks seriously, but do not make fear the main student-facing story, or you will push use underground.
Use productivity metaphors when the goal is workload relief and rapid iteration, but be cautious: efficiency can become the point, and AI can turn into a machine for producing “school slop.”
Use teammate metaphors when you want purposeful collaboration, where the process is clear, roles are defined, and students are expected to be active learners.
If I had to add one more metaphor to this toolkit, it would sit inside the teammate category: AI as a guest in the room. When we invite a human guest into class, we do due diligence. We think about fit and purpose. We plan the visit so it supports the curriculum. We prep the guest, and we prep students. Then, when the guest arrives, we introduce them. We explain why they are there, and we name how we are going to work with them. We set simple norms that keep the focus on learning.
That is the posture I think we need with AI. Even if it is “the same tool” each time, AI is a chameleon. It can take on different personas and play different roles depending on how we frame the interaction. One day, it is a brainstorming partner. Another day, it is a feedback coach. Another day, it is a debate opponent, a summarizer, or a tutor. Because of that, teachers have to be deliberate about the introduction each time we invite AI into the room. What role is it playing right now? What is the learning goal? What does collaboration look like in this moment, and what does it not look like? If we skip that framing, we leave too much open for interpretation, and students end up making their own calls about when and how AI should be used. That is where misunderstandings happen, and that is where “dishonesty” can get murky fast.
To make this practical, we have to get more specific than “AI allowed” or “AI not allowed.” Those labels create gray areas and invite confusion. A better approach is to name the thinking we want to see, then indicate when and how AI, as a guest in the room, is allowed to collaborate on particular thinking moves across an assignment. Break the larger task into the moves that actually produce learning. Be explicit about which moves must remain human and which moves can be supported through collaboration. Keep the focus where it belongs.
These metaphors could be a signal of where we are on the trajectory of understanding AI in education. Everyone is free to use the metaphor that fits their current experience, and early on, calming or tool metaphors may be exactly what a community needs. But if you have been using AI intentionally with students and you feel further along in your thinking, this is an opportunity to push the current collaborator narrative one step further. You can expand it by leaning on an existing practice we already understand well: how we vet guests. In this case, we are moving from an organic guest to a synthetic one. The point is to treat it as a guest that deserves respect and careful planning to protect students and their intellectual growth. Thank you for reading.
Where to Learn More
Jeremy Utley’s Substack & Interview on YouTube. This is the author’s Substack and the interview that I refer to in the Teammate section of this post. I really appreciate his thinking and work around creativity, as well as his rationale behind why metaphors like collaborator positively impact our interactions with AI.
Mindset, by Carol Dweck. As my friend Emily would say, this is an evergreen book that I believe is especially relevant to this post. Our mindset and the ways we think about learning are reflected in the metaphors we use. If you have not already read this powerful book, I highly recommend it.
“The Adolescence of Technology”, by Dario Amodei. This essay came out this week and has taken the internet by storm. Amodei is the CEO of Anthropic, the company behind Claude. This lengthy essay is rather dense, but it is worth your time in both its content and its use of metaphors to understand AI. In the essay, there are several great metaphors for AI. Amodei compares the AI era that we are in currently to an adolescent growing up. He also talks about a future with powerful AI as being a country full of geniuses in a data center, along with many more.
“Students Are Skipping the Hardest Part of Growing Up”, by Clay Shirky. This is an opinion post from the New York Times and is focused on the social impact of AI on young people. The article is full of metaphors for AI, but my favorite that I hadn’t heard before is that AI is a cigarette, implying long-term costs from our AI use to our social relationships.
Conferences. This academic year, I have a few more public speaking engagements lined up where I will be speaking about the above ideas and more (see more on my Conferences page of the site). I hope to see you in person and talk more about AI-enhanced processes!
21 CL, Hong Kong - Breakout sessions
AIFE, Yokohama - MC and breakout sessions
AI in Action, Beijing - Keynote and breakout sessions
Subscribe. If you haven’t already subscribed to this newsletter, hit the button below! Or if you know someone who might benefit from it, please share it and help me grow my audience.
AI Disclosure
All origami images in this post were created with the support of ChatGPT and Google Gemini to illustrate the ideas in each section.
It’s also worth sharing that the audio reading at the very top of this article is really me: I spent 30 minutes recording myself reading the post and then enhanced the audio with Adobe’s Voice Enhancer, an AI model that is quite good at making everyone sound like a talk show host. What I found is that the higher the quality of the recording, the better the results.
I co-created this post with ChatGPT and Gemini. It took me 8-10 hours to write because of the thinking, revision, and back-and-forth collaboration. The LLMs’ roles were to push back on my ideas, help refine my writing, flag redundancy and verbosity, and help me maintain my tone and style. After finalizing the article, I ran a fact-check report to ensure there were no glaring or errant claims. The thinking, final edits, and publishing decisions were mine.
Finally, I want to end by sharing my values. I believe we should normalize transparent disclosure of AI use in education. When adults model openness, students are more likely to discuss their AI use honestly rather than keep it secret. My goal is to engage in dialogue with the students, and that means it’s essential to avoid a judgmental atmosphere. Modeling how to use AI to enhance our thinking is one way to open the lines of communication.
I share my process because I want students to develop independence and wise decision-making with all technology. It’s fair to say that most teachers use AI without disclosing it; if that describes you, then in many ways you might actually be modeling secrecy.