I remember the exact moment my doubts about teaching in the AI age dissolved. It wasn’t in a high school classroom—those days ended years before ChatGPT burst onto the scene. It was last semester at NYIT, watching an adjunct faculty member struggle with a lesson plan on climate justice. She’d been stuck for days, paralyzed by the perfectionism that plagues so many future educators. Then she did something that would have been impossible during my decades in K-12 classrooms: she asked ChatGPT to generate five completely different pedagogical approaches to teaching carbon footprints.
The AI’s responses were terrible. Patronizing. Riddled with factual errors about the Paris Agreement. But here’s what stopped me cold: she spent the next week dissecting each flawed approach, cross-referencing them against actual climate science and culturally responsive pedagogy frameworks, ultimately crafting a lesson far more sophisticated than anything I’d seen from new faculty in my seven years at NYIT. A ridiculous AI experiment had sparked the kind of critical thinking I’ve spent my whole career trying to cultivate in K-12 students, educational leaders, and university faculty.
The towel-cape moment—when a child’s wild flight of imagination crashes into adult skepticism—haunts every conversation I have about AI in teacher preparation. What if we’re doing it again? What if, in our panic about ChatGPT writing lesson plans and generating assessments, we’re inadvertently clipping wings instead of teaching future educators to build better gliders?
The Dawn of Digital Doubt
The question hangs over faculty meetings and curriculum revision sessions: Are we training students to think, or just to engineer smarter prompts? Sir Ken Robinson famously asked whether schools kill creativity, pointing to research showing children’s creativity scores declining steeply through their schooling years—not necessarily because schools stifle imagination, but because logic and reason develop alongside it.[1] Now we face a parallel question in teacher education. AI hasn’t arrived to kill pedagogical innovation; it’s arrived to test whether we ever truly nurtured it at all.
Walk into most schools of education today and you’ll witness two extremes: professors banning AI tools outright in syllabi, treating them like academic misconduct waiting to happen, or teachers copy-pasting ChatGPT-generated lesson plans wholesale, their pedagogical reasoning asleep at the wheel. Neither camp is building the educators our students desperately need. We’re either fostering digital prohibition or algorithmic parroting. What if teacher preparation programs aren’t killing creativity with AI, but unwittingly training parrots instead of pioneers?

The stakes are real. Nearly half of Gen Z youth report struggling to critically evaluate AI-generated information, according to a 2024 global survey.[2] Yet AI literacy empowers learners to understand AI and make decisions about its use in meaningful and ethical ways, and when integrated thoughtfully into learning, AI provides new opportunities to exercise critical thinking and creative expression.[3] The tool isn’t the problem. Our timidity is. And if we can’t model thoughtful AI integration for future teachers, how will they possibly navigate it with their own students?
Embracing Algorithmic Possibility
Here’s what I’ve learned from decades in education—years teaching high school students in BC, leading educational initiatives through SCSBC, and now seven years working with faculty at NYIT—celebrating magnificent failures and watching educators surprise me weekly: generative AI works best as a provocateur, not a butler. It’s the sparring partner who pushes back on lazy thinking, the co-pilot who demands you stay alert at the controls. And preparing teachers to help students wield it requires us to rethink what “pedagogical mastery” even means.
Let me offer you the strategies that transformed my pedagogical coaching from AI-anxious to AI-amplified—not because I have all the answers, but because I learned to ask better questions alongside my colleagues, who will be navigating this landscape long after I’ve retired.
1. Start Small: Prompt Engineering Sprints
Forget unit-long assignments. Begin with ten-minute “prompt sprints” on unsolved, deliciously messy educational problems. I launched mine with: “Design an assessment system that measures deep learning in mathematics without relying on standardized tests or traditional grading.” My students iterated queries, refined them, laughed at absurd AI suggestions (blockchain-verified pencil grip analysis, anyone?), and gradually honed prompts that yielded actual, feasible alternatives they could pilot in their studies.
The magic wasn’t the final answer—it was watching them realize that vague pedagogical questions yield garbage, and precision unlocks possibility. They learned to think like educational researchers, not ChatGPT users. One student took her AI-sparked ideas into her student teaching placement at a middle school and completely re-imagined her math assessment approach. Would that have happened without the AI prompt? I doubt it. The algorithm gave her permission to question orthodoxy.


2. Teach Ethical Scaffolding: The Co-Pilot Metaphor
I require every AI-assisted assignment in my courses to include an “audit log.” Students must dissect one AI output for bias, inaccuracy, or cultural insensitivity. Last semester, we used ChatGPT to generate a lesson on New York’s immigration history. The AI’s initial response was sanitized, politically safe, and completely erased the Chinese Exclusion Act and the experiences of undocumented communities.
Students cross-referenced it with scholars like Mae Ngai and José Antonio Vargas, marking every problematic omission in red. That exercise taught them more about whose narratives get centred in the curriculum—and whose get erased—than any lecture I could deliver. When they enter their careers, they’ll know that AI doesn’t just make mistakes; it reproduces historical silences.
During my years in K-12 leadership, we didn’t have these tools. We had textbook adoption committees arguing over whether evolution belonged in science class. Now the bias is algorithmic, invisible, and exponentially more dangerous if left unexamined. Frame AI as a co-pilot, never autopilot. A co-pilot can crash the plane if you’re not paying attention.

3. Foster Novel Problem-Solving: Hybrid Projects
This is where AI shifts from threat to amplifier. Try using AI to generate differentiated reading passages on a topic you’re interested in at five different K-12 Lexile levels. Then consider this: if you had to pilot-test the passages with actual students and document the results, which passages would fail, and which would succeed? How quickly do you think you’d recognize that the AI’s assumptions about “readability” often miss cultural context, background knowledge, and student interest entirely?
Suddenly, we’re not just consumers of AI outputs—we’re critics, educators holding silicon accountable to reality. Does the AI’s “7th grade reading level” passage actually work with 7th graders in Surrey? In rural Montana? With multilingual learners? That’s the question that breeds pedagogical innovators.
4. Build Failure Fluency: The Blooper Reel
Every few weeks, in conversation with faculty, we share our worst AI experiments, a blooper reel if you prefer. We discuss how AI hallucinations often make the work more, not less, difficult; how students often mistake fluent, well-strung-together text for good thinking; and the oft-noted problem of fabricated “research citations” that look convincing until someone actually tries to find the sources.
Empowered creativity breeds revision: it’s fostered through a willingness to try, a recognition that an idea dies only when we accept our last failure as final, and a teacher’s commitment to reframing failure as possibility.[1] Our “blooper reel” normalizes iteration, turning errors into metaphors for resilient innovation. We stop fearing the “wrong” prompt and start treating AI like a conversation, not an oracle.
If future educators can’t handle failure productively with AI while they’re at my university, they’ll never model that resilience in their careers.
5. Cross-Curricular Weaving
AI doesn’t live in tech courses. I collaborate with colleagues across NYIT to weave AI experimentation into all aspects of teaching and learning.
In an education foundation course, students prompt AI about philosophical questions: “Explain John Dewey’s progressive education to a school board skeptical of project-based learning.” The AI’s responses become texts for analysis, not answers. What did ChatGPT emphasize? What did it omit? How does algorithmic “summary” flatten pedagogical complexity?
Weave AI across the educational programs, and students begin to see it as a thinking tool, not a shortcut silo.
6. Professional Growth Nudge: Co-Prompting with Vulnerability
Often, I prompt AI in real time during workshops or keynote presentations, refining queries aloud, admitting confusion, and modelling iterative thinking to the best of my ability. “Watch me mess this up three times—now you try.” I fail, laugh, refine my approach, and we all learn how to generate better AI results. (I’ll share my findings on writing better AI prompts and on sampling parameters like “temperature” and “presence_penalty” in another article.) I demonstrate how I cross-check AI-generated statistics against actual research. I admit when the algorithm produces something I hadn’t considered.
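For readers curious what those sampling knobs actually look like, here is a minimal sketch of a request payload for an OpenAI-style chat completions endpoint, where “temperature” and “presence_penalty” are standard parameters. The model name, roles, and prompt text are purely illustrative, not a prescription:

```python
import json

# A minimal request payload for an OpenAI-style chat completions endpoint.
# temperature (0 to 2): higher values make the output more varied and exploratory.
# presence_penalty (-2 to 2): positive values nudge the model toward new topics
# rather than repeating ones it has already raised.
payload = {
    "model": "gpt-4o-mini",  # illustrative model name
    "messages": [
        {"role": "system", "content": "You are a skeptical teaching coach."},
        {"role": "user", "content": "Critique this lesson plan on carbon footprints."},
    ],
    "temperature": 1.2,       # invite divergent, provocative suggestions
    "presence_penalty": 0.6,  # discourage circling the same ideas
}

print(json.dumps(payload, indent=2))
```

Lowering the temperature toward 0 keeps the model close to a safe summary; raising it, as here, is how you ask it to play the provocateur rather than the butler.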
Modelling vulnerability dismantles the myth of professor-as-omniscient-expert and re-positions me as lead learner. It also keeps me honest; sometimes I get lazy… and they call me out. Good. That’s exactly the critical consciousness we all need.
Tempering Reason, Unleashing Voices
Here’s the paradox AI forces us to confront in education: it’s both the logic that tempers wild pedagogical ideas and the megaphone that amplifies traditionally marginalized voices. A student in our school, whose writing anxiety had nearly derailed her philosophy statement, began using AI to generate a first draft. Then she rewrote it in her own voice—the algorithm gave her permission to start messy and iterate without judgment.
The OECD and European Commission’s 2025 AI Literacy Framework emphasizes teaching students to use AI tools, co-create with them, and reflect on the responsible and ethical use of AI.[4] But frameworks mean nothing without educators willing to experiment in the trenches, to celebrate prompt flops alongside polished outputs, to admit we’re figuring this out together—and that our students might actually be better at this than we are.
The Invitation
I don’t have this solved.
My thirty-plus years in K-12 education happened in a pre-AI world. I taught students to research using card catalogues and microfilm. I led schools through technology integration that meant building networks, getting computers into classrooms, and getting the school connected to the internet, not navigating algorithmic intelligence. I don’t get to claim I have this figured out because I’ve been in education forever. If anything, my experience makes me humble about how radically the landscape has shifted.
So here’s my invitation, from one educator to another—whether you’re teaching middle school in Manitoba, leading a school district in California, or preparing teachers in New York: What if we stopped asking whether AI helps or harms learning, and started asking what kind of thinkers we want to unleash into a world where algorithms are ubiquitous? What if our role isn’t to control AI, but to raise humans—and prepare teachers who can raise humans—so critically awake, so ethically grounded, so irrepressibly curious that they’ll bend every tool toward justice, beauty, and problems worth solving?
I’ve been in education long enough to remember when overhead projectors were revolutionary. I’m humbled by how much I still have to learn. I’m still refining my own prompts. Join me in this experiment?
Key Takeaways
- AI Should Be a Provocateur, Not a Shortcut: The most powerful role of AI in education is as a sparring partner that challenges students’ thinking, rather than as a tool for producing polished answers. When used thoughtfully, AI can ignite curiosity and critical analysis instead of replacing intellectual effort.
- AI Literacy Is Essential for Ethical and Critical Engagement: Students need structured opportunities to evaluate AI outputs for bias, accuracy, and cultural sensitivity. Teaching them to audit and critique AI responses fosters deeper understanding of knowledge systems and ethical reasoning.
- Creativity and Problem-Solving Thrive Through Iteration and Failure: Activities like “prompt sprints” and sharing AI bloopers normalize experimentation and failure, helping students see AI as part of an iterative process rather than a source of perfect solutions. This builds resilience and innovative thinking.
- Cross-Disciplinary Integration Unlocks New Learning Possibilities: AI should not be siloed in tech classes. When woven into subjects like literature, math, and science, it becomes a tool for inquiry and creativity across domains, encouraging students to connect ideas and challenge assumptions.
