Teaching AI literacy reconfigures the role of the teacher. Rather than just passing on facts, educators now act as mentors who guide students in the creative and ethical application of generative AI. Based on the classroom insights of Teacher Lo, this approach prioritizes intellectual curiosity, prompt writing, spotting AI hallucinations, and ethics to build critical thinking, media literacy, and accountability while using AI as a collaborative tool for deeper, student-led learning.
Generative AI has brought both excitement and worry to teachers and curriculum planners. Now that the initial surprise of AI chatbots has worn off, the goal for educators is to nurture AI literacy among students.
In an interview with Lo YuJen, a teacher at the Taipei Municipal Neihu Senior High School and a veteran educator in Taiwan with 25 years of experience, we see a clear path for navigating the role of AI in education. Her approach focuses on turning the classroom from a place where information is delivered into a space for collaborative inquiry.
From Teacher as Expert to AI Mentorship
The old idea of a teacher being the only authoritative source of knowledge doesn’t work anymore, especially when AI chatbots can answer questions instantly. Teacher Lo believes that an educator today is more like a knowledge companion who helps students navigate a massive amount of digital information.
AI can help students master specific skills, but teachers remain the primary facilitators for turning those skills into teamwork, social interaction, and life skills. This aligns with UNESCO’s view that AI should protect human agency and not replace teachers but shift their role toward higher-order human skills and critical mentoring.1
At the heart of this idea is mutual exchange. In pre-AI classrooms, the teacher gave the input and expected a specific output from the student. With AI, classrooms become a space for interaction. Students can use AI to learn the basics before they even get to class, leaving more time for deep discussions and learning to work with others to achieve results.
For Teacher Lo, students should be encouraged to do projects that they are personally interested in. For many, this involves coding and designing games. When students pursue and solve problems that matter to them, they take charge of their own learning.
This stops them from becoming what can be called “data transfer agents,” or people who just copy text from an AI chatbot to a page without thinking. Furthermore, students keep a sense of pride and ownership over what they learn.
Teaching Students to Spot Hallucinations
A key part of AI literacy is helping students understand that these systems are not search engines but probability engines. To the user they are a black box: the model predicts words based on statistics rather than factual truth2. That is why AI models can hallucinate, meaning they create information that looks right but is completely wrong.
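To make the "probability engine" idea concrete for students, here is a deliberately tiny sketch. It is not how production language models work, but it shows the same statistical principle: a bigram model that continues text purely from word-pair counts, with no notion of truth. The corpus and prompt are illustrative.

```python
from collections import Counter, defaultdict

# Toy corpus: the "model" learns which word tends to follow which —
# pure statistics, no notion of truth.
corpus = (
    "the capital of france is paris . "
    "the capital of italy is rome . "
    "the capital of spain is madrid ."
).split()

# Count bigram frequencies: how often each word follows another.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

# The model continues text by probability alone. Ask about Italy and it
# still only sees the previous word, so it confidently picks whichever
# city its statistics favour — plausible-sounding, possibly wrong.
prompt = "the capital of italy is".split()
print(predict_next(prompt[-1]))
```

Because the model only sees the word "is", its answer is decided by frequency, not by which country the question named: a miniature hallucination.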
As Teacher Lo points out, a good curriculum needs to show students how to identify three main types of AI mistakes: made-up facts, bad logic, and fake citations.
Unsurprisingly, students have the hardest time spotting faulty reasoning. While a wrong date is easy to check, a logical error in a complex argument can be very convincing.
To help, Google Education’s AAA Framework provides the following 3-step verification process:
| Key Questions | Details to Check |
| --- | --- |
| Is it factual? | Does the response contain factual errors or false information? AI can sometimes "hallucinate" while sounding confident. |
| Is it verifiable? | Can you cross-reference the statements with reliable sources? If the AI provides data, names, or events, try to verify them on authoritative websites or in the literature. |
| Is it logical? | Is the reasoning sound, or do the arguments seem baseless? Check for leaps in logic or contradictions. |
| Does it answer the prompt? | Did the output directly address your specific question or instructions? Sometimes AI veers off-topic and provides irrelevant information. |
| Is it task-relevant? | If you asked for a poem, did it write a poem rather than prose? |
| Is it contextually consistent? | Do the sentences follow the flow of the conversation in a logical way? |
| Is it professional? | Is the tone suitable for the intended field (e.g., a formal tone for a business report)? |
| Is the terminology correct? | Does it use the specific professional jargon and standards of that field? |
| Is it age-appropriate? | Does it provide examples or explanations suited to the specific audience (e.g., a high schooler vs. a college student)? |
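For classrooms that code, the checklist above can be sketched as a small review helper. The questions mirror the table; the yes/no answers must still come from a human reviewer, and the function simply surfaces the failed checks.

```python
# A minimal sketch of the verification checklist as a structured review.
# The questions come from the table above; a human supplies the answers.

QUESTIONS = [
    "Is it factual?",
    "Is it verifiable?",
    "Is it logical?",
    "Does it answer the prompt?",
    "Is it task-relevant?",
    "Is it contextually consistent?",
    "Is it professional?",
    "Is the terminology correct?",
    "Is it age-appropriate?",
]

def review(answers):
    """Given {question: True/False} from a reviewer, return the failed
    checks — the cue to 'look closer if something feels off'."""
    return [q for q in QUESTIONS if not answers.get(q, False)]

# Example: a response that is fluent everywhere but makes a logical leap.
flags = review({q: True for q in QUESTIONS} | {"Is it logical?": False})
print(flags)  # -> ['Is it logical?']
```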
The underlying principle is to “look closer if something feels off.” It’s easy to skip this step when the AI’s answer looks professional, so teachers must remind students that human intuition and a healthy dose of skepticism remain their best tools.
Mastering the Art of the Prompt
To make AI useful, students need to learn how to write a good prompt. A prompt is more than just a question; it is a set of specific instructions. A strong prompt usually has four parts3: a persona (the identity, perspective, or function you want the AI to adopt), a clear task, set boundaries, and a specific format for the answer.
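The four parts can be sketched as a simple template. The part names follow the article; the example wording (the judge persona, the tasks) is purely illustrative.

```python
# A minimal sketch of the four-part prompt structure described above.
# Part names follow the article; the example text is illustrative only.

def build_prompt(persona, task, boundaries, response_format):
    """Assemble the four parts into one instruction block."""
    return (
        f"Persona: {persona}\n"
        f"Task: {task}\n"
        f"Boundaries: {boundaries}\n"
        f"Format: {response_format}"
    )

prompt = build_prompt(
    persona="You are a skeptical science-fair judge.",
    task="Challenge the three weakest assumptions in my project plan.",
    boundaries="Use only the plan I provide; do not invent sources.",
    response_format="A numbered list, one sentence per point.",
)
print(prompt)
```

Writing the parts explicitly, rather than typing one loose question, is what turns the exchange into the "challenge my ideas" dialogue Teacher Lo describes.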
Teacher Lo tells her students to focus on intent. If you give the AI context and ask it to “challenge my ideas” or “find the holes in this plan,” AI is a tool that helps you think more deeply. The quality of the answer depends on the depth of the question.
Structured prompting transforms AI from a simple search tool into a sophisticated brainstorming partner. Educators should aim for lesson plans that focus on this kind of back-and-forth dialogue rather than just asking one simple question.
Handling Ethics and Bias in AI Education
AI literacy is about more than technical skills; it also covers ethics and social risks, including AI bias. If the data used to train an AI is one-sided, its answers will be too.
Echoing the Oxford Internet Institute4, which notes that because AI is trained on historical data it can often amplify existing societal prejudices, Teacher Lo highlights five common types of AI bias:
- Sample or Data Bias: Errors arising from the data sets used to train the model.
- Algorithmic Bias: Biases introduced by the calculations or processes used by the AI to make decisions.
- Human Bias: Reflecting the existing prejudices of the people who create or interact with the technology.
- Historical Bias: Prejudices stemming from past societal data that the AI continues to replicate.
- Confirmation Bias: The tendency of a system to favor information that confirms existing beliefs or hypotheses.
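Sample bias in particular lends itself to a hands-on demonstration. The toy below (an illustrative sketch, not a real classifier) trains a trivial "model" that just predicts the most common label it saw, showing how a one-sided training sample produces one-sided answers for every input.

```python
from collections import Counter

# A deliberately tiny illustration of sample (data) bias: the "model"
# predicts the most common label in its training data. If the sample
# is one-sided, every prediction is one-sided too.

training_labels = ["approved"] * 95 + ["rejected"] * 5  # skewed sample

majority_label = Counter(training_labels).most_common(1)[0][0]

def predict(applicant):
    # The input is ignored entirely — the skewed sample decides everything.
    return majority_label

print(predict("any applicant"))  # prints "approved"
```

Students can re-run the sketch with a balanced sample and watch the behaviour change, making the abstract idea of "biased training data" tangible.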
What’s more, the rise of deepfakes makes media literacy a top priority. Students should be taught to look for small clues, such as odd lighting and shadows, unnatural alignments, or missing EXIF file data, to spot fake images. Class activities should be hands-on, giving students a chance to verify the truth of different photos and videos in real-time.
The ethical lesson is straightforward: the user is responsible for what the tool creates. Teacher Lo mentions the case of a lawyer in Los Angeles who was fined for citing fabricated AI-generated legal cases in court. The takeaway for students is that the machine isn’t at fault; the human who didn’t check the work is the one who faces the consequences.
From Finding Answers to Asking Better Questions
If there is one piece of advice on AI and education, it would be this: stop focusing on finding the right answer and start focusing on asking the right question. As mentioned in the World Economic Forum’s 2023 report5, analytical thinking and curiosity are the top skills for the future workforce as answers become automated. In a world where answers are easy to find, the ability to ask “why” and “how” is a student’s greatest strength.
The ultimate goal of an AI-ready curriculum is to graduate students who are critical thinkers, ethical users of technology, and lifelong learners. By following the lead of experienced teachers like Teacher Lo, schools can make sure AI is a tool for human growth.
Frequently Asked Questions
What is AI literacy and why does it matter in schools?
AI literacy is the set of skills to understand, use, evaluate, and ethically apply AI tools. It matters because students will rely on AI for information and must learn to verify outputs, spot bias, and ask deeper questions.
How should teachers’ roles change with AI in the classroom?
Teachers shift from sole knowledge providers to mentors who guide critical thinking, collaboration, and the ethical use of AI.
What are common AI errors students should learn to spot?
Teach students to identify hallucinations based on made-up facts, faulty or deceptive logic, and fake or incorrect citations.
What practical steps can students use to verify AI outputs?
Verify the AI’s cited sources, cross-reference with reputable sources and confirm consistency, validate with the teacher, primary data or experts in the field, and re-prompt the model to compare reasoning.
How can classrooms capture and share student learning with AI tools?
Use visualizers (document cameras) and Auto Tracking Cameras to record moments of problem-solving and breakthroughs, then share those clips for class reflection and collaborative feedback.
References
Guidance for Generative AI in Education and Research. 2023. UNESCO eBooks. https://doi.org/10.54675/ewzm9535. ↑
OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, et al. 2023. “GPT-4 Technical Report.” arXiv.Org. March 15, 2023. https://arxiv.org/abs/2303.08774 ↑
Federiakin, Denis, Dimitri Molerov, Olga Zlatkin-Troitschanskaia, and Andreas Maur. 2024. “Prompt Engineering as a New 21st Century Skill.” Frontiers in Education 9 (November). https://doi.org/10.3389/feduc.2024.1366434. ↑
Oxford Internet Institute project, Trustworthiness Auditing for AI. https://www.oii.ox.ac.uk/research/projects/trustworthiness-auditing-for-ai/. ↑
“The Future of Jobs Report 2023.” 2023. World Economic Forum. https://www.weforum.org/publications/the-future-of-jobs-report-2023/digest. ↑