
With all of the buzz around AI in the classroom, it’s tempting to fall into one of two camps: embrace the use of AI wholeheartedly, or caution students against using it altogether. At this stage of the game, I’d like to propose that we expand these options for a more inclusive learning experience.
First, I’d like to introduce a set of “levels” of learning with regard to AI. I developed this progression using the same principles I recommend in the classroom for the development of technology in general. These levels are not purely sequential, nor are they necessarily complete, but they *are* more inclusive than an all-or-nothing protocol.
Levels of AI in Education
Let’s take a look at the levels that I feature, then I’ll dive into explanations and examples.
- Pretending
- Predicting
- Using
- Understanding
- Creating
Overview
At the time of this writing, the vast majority of the population is largely unaware of the broader AI landscape (whether they realize it or not). Many feel that AI is a technology that burst onto the scene around two years ago and is moving at a lightning pace to take over every industry. Some may even harbor fears of the singularity and world domination. These lenses color the technology in ways that will ultimately shape how those individuals decide to use (or not use) its capabilities.
One thing to note is that the current wave of AI isn’t the “world domination” type. The tools we’ve been using, like Copilot, ChatGPT, Midjourney, and DALL-E, are what we call “generative AI.” These systems take in billions to trillions of datapoints and use them to create new pieces that follow similar rules, producing statistically likely outputs based on prompts. They don’t think, plan, or make decisions independently. They don’t have goals or desires. They don’t “understand” in the human sense; they predict what is likely to come next.
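To make “predict what is likely to come next” concrete, here’s a minimal sketch of a bigram text predictor (the tiny corpus and all word choices are invented for illustration). It counts which words follow which, then extends a prompt by sampling likely next words. Real models are unimaginably larger, but the core idea is the same.

```python
import random
from collections import defaultdict

# A tiny corpus standing in for the billions of datapoints a real model trains on.
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the mat").split()

# Count which words follow which word: a miniature statistical model of the text.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length):
    """Extend a prompt by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # dead end: this word was never followed by anything
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 5))  # output varies run to run, just like a chatbot's
```

Because the next word is *sampled* rather than looked up, running this twice can produce two different continuations of the same prompt, which is exactly the behavior students see in a real chatbot.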
While the progress of AI appears to have happened astonishingly fast, in reality, it grew more like bamboo: the roots spread quietly underground for years before the shoots sprang into public view. The first practical use of artificial intelligence came in the 1950s with a robot mouse named Theseus that could solve mazes through trial and error. Generative AI appeared in the form of a chatbot in 1966 and began showing up as predictive text in the 1980s. By the early 2010s, AI was already everywhere, but it was referred to as “advanced analytics”; companies like Netflix used it to recommend better movies based on your tastes, and Amazon used it to recommend better products based on previous purchases.
The wave that we’re currently seeing is really more like viral marketing. These products are so fundamentally useful that they would be embedded within other products whether or not we understood what was happening in the back end, and the only reason we see them as a swarm of AI clones rather than a series of individual innovations is that their parent companies have all hopped on the trendy-term bandwagon.
Level 1. Pretending
While this method is stellar for young students (those below the standard 13-year-old COPPA floor), it’s also extremely useful for anyone just beginning their artificial intelligence journey. A lot can be learned about the process of generative AI by doing it yourself. It’s artificial artificial intelligence!
Imagine a classroom assignment where students are given two or more inputs and asked to create an output that follows the same rules. In English class, this could look like a set of poems that students need to digest and emulate. In art, it could be photographs that students choose elements from to incorporate into a painting. In history, it could be using the study of the two previous world wars to imagine what a third might look like. Whatever the task, if you call it “Understanding AI” and frame it as a lightly cloaked game, it should be a big hit!
Level 2. Predicting
Prediction is a fantastic thought exercise for any subject, but when it comes to AI, it can really help students start to see patterns. When you make predictions, you are forced to digest the data before forming hypotheses about outcomes. Before students begin using AI to solve problems regularly, it can be great for them to practice predicting how AI might respond to certain types of prompts. The useful thing here is that students don’t need to use the AI themselves: this can be done as a classroom activity where the teacher does the prompting and the students make their predictions in groups, trivia style.
In physical education, this could look like predicting the answers that Copilot will give when asked what the most popular Olympic sport is. It might also involve predicting what resources the AI used to craft that response. In Spanish class, it could be predicting the exact words or phrases that Copilot will suggest as translations of common sayings, or predicting the best places to visit for those who speak Spanish as a second language.
Don’t forget to talk about how AI may have come to the conclusions that it did. Does it give you the same answer if you ask the question again in the exact same way? What will happen if you ask the same question again in a different way? Will you get a different answer? Is there a way to phrase your question so that you get exactly the answer that you want to get? What does that tell you about generative AI in general?
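One way to answer “will you get a different answer?” in class: generative models sample from a probability distribution over possible continuations, so the same prompt can legitimately produce different responses. This hypothetical sketch (the sports and their probabilities are invented for illustration) tallies what happens when the “same question” is asked a thousand times.

```python
import random
from collections import Counter

# Made-up probabilities a model might assign to answers for one fixed prompt.
choices = {"soccer": 0.5, "swimming": 0.3, "gymnastics": 0.2}

def sample_answer():
    # Models sample from a distribution, which is why identical prompts
    # can yield different answers from run to run.
    words, weights = zip(*choices.items())
    return random.choices(words, weights=weights, k=1)[0]

# "Ask" the same question 1000 times and count each answer.
tally = Counter(sample_answer() for _ in range(1000))
print(tally.most_common())
```

The tally lets students see that likely answers recur often while unlikely ones still appear occasionally, which is a useful frame for discussing why careful phrasing changes the distribution rather than guaranteeing one exact answer.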
Level 3. Using
This level is probably the most straightforward. Try providing students of an appropriate age and experience level with a set of tasks to be carried out using Copilot, ChatGPT, Midjourney, etc. Make sure they think through the prompting process as well as the post-answer process. Before you assign tasks like this (or incorporate them into other assignments), be sure to discuss your specific list of allowable parameters for AI use in your classroom.
Allowable Parameters
This list is up to you, but I like to divide my preferred methods of AI assistance into two categories: idea generating and unblocking.
Idea generating includes things designed to help get past the blank page.
- “Copilot, I have an assignment where I need to write about a famous poet. Can you suggest 10 or more poets that are female, write short-form, and use humor in their work?”
- “Copilot, I need to add two more paragraphs to my essay. Can you help me think of a way to do that that will enhance what I already have, but will not explode into several more pages?”
Unblocking includes defining the unknown, or ensuring the correctness of content that the student has already created.
- “Copilot, I remember hearing that Amelia Earhart crashed in the Bermuda Triangle and her plane was never found. Is this true? Where can I find sources that provide me with facts around that historical event?”
- “Copilot, I made this game that is supposed to set a timer once, but instead it’s resetting whenever my character overlaps the pizza. Here’s my program…can you ask me questions meant to help lead me to the changes that I need to make?”
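For readers curious what the bug in that last prompt might actually look like, here’s a hypothetical sketch (the `Game` class and its methods are invented for illustration): a timer that restarts on every overlap event, alongside the guarded version that starts only once.

```python
class Game:
    """Toy stand-in for a student's game loop state."""

    def __init__(self):
        self.tick = 0          # frames elapsed since the game began
        self.timer_start = None

    def update(self):
        self.tick += 1

    def on_overlap_buggy(self):
        # Bug: every overlap with the pizza restarts the timer.
        self.timer_start = self.tick

    def on_overlap_fixed(self):
        # Fix: start the timer only the first time an overlap happens.
        if self.timer_start is None:
            self.timer_start = self.tick
```

An AI prompted as described above might lead the student toward this fix with questions like “what happens to `timer_start` the second time the overlap fires?” rather than handing over the answer.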
Disallowed Parameters
I also make sure to be very specific about what I do not want to see. I do not want students to use AI to create anything that sacrifices “the point” of the assignment. If the assignment is to write a short story, then it may be okay to have AI generate pictures to enhance the presentation (properly sourced and cited) but it is not okay to have AI do any of the writing.
When in doubt, ask AI to provide you answers in Olde English or pirate speak so that you know that you’re not plagiarizing anyone else’s work and you still have to mentally process the material that you’ve been provided. Also, make sure to do your fact checking! Anything that can be disproven should be crossed out. Circle anything that you have at least two reputable, validated sources for.
I also tell students that, in order to avoid future claims of cheating, they should cite everything and show their work. It’s helpful to include a printout of the AI’s assistance with the project.
Finally, I like to encourage the use of voice AI so that students practice active listening and note-taking.
Remember, the goal is to make students more curious, not less!
Level 4. Understanding
You could think of this as a follow-up to predicting, but it also goes much deeper. Strong AI users need to understand how AI works, as well as its strengths and faults. In what ways does this help society? In what ways can it hurt? What privacy and security concerns should we have? The answers to these questions are best digested when students research them, rather than receiving the information directly.
Level 5. Creating
For the most part, this level is for extreme cases…and probably CS students, but it can also be helpful for the larger population as a thought exercise. This could include anything from writing a spec for “The Perfect AI” all the way up to having students code their own bot or even their own GPT.
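For a sense of what “code their own bot” can look like at the entry level, here’s a minimal sketch of a rule-based chatbot in the spirit of ELIZA, the 1966 chatbot mentioned earlier (the patterns and replies are invented for illustration):

```python
import re

# Pattern/template rules: match a phrase in the input, fill a reply template.
RULES = [
    (r"\bi feel (.+)", "Why do you feel {}?"),
    (r"\bmy (\w+)", "Tell me more about your {}."),
    (r"\b(hello|hi)\b", "Hello! What would you like to talk about?"),
]

def reply(message):
    """Return the reply for the first rule that matches the message."""
    text = message.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Interesting. Go on."  # fallback when nothing matches

print(reply("Hi there"))
# Hello! What would you like to talk about?
print(reply("I feel nervous about my essay"))
# Why do you feel nervous about my essay?
```

Writing and extending rules like these makes a nice bridge between the “Pretending” level and true generative models: students quickly discover both how convincing and how brittle hand-written rules are.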
Wrapping Up
With all of this said, it’s not like there are zero downsides to the use of AI. With every pro, there is a con. I’m choosing to end this article with a list of positives and negatives that you can take into consideration before you decide how you want to use AI in the classroom.
Pros
- On-demand access to targeted information
- Assists individuals with disabilities
- Systematic improvements through pattern recognition
- Increased debugging capacity and error prediction
- Great for individualized instruction and adaptive learning
- Allows for more productive educator time (when students can help themselves with the small things, teachers can focus on more constructive tasks)
- Voice-based and conversational options can lead to more engaging educational interactions
- Allows students to complete tasks that they would not have been capable of in the past
- If needed, it can offer emotional and/or psychological support during difficult tasks
Cons
- AI sometimes “hallucinates” (confidently fabricates information) or is otherwise incorrect
- Bias in source data can lead to bias in outputs
- Lack of current regulations could lead to privacy leaks
- Increased dependence on technology can lead to a decrease in human interaction
- Technology that’s capable of doing every piece of an assignment makes it tempting to opt out of doing the work yourself
- Generating pieces based on existing works can lead to copyright infringement or appropriation of another’s unique style
- The water and electricity requirements for generative AI are astonishing compared to other tech resources. The energy behind every 800K text responses is comparable to building a full-size car; that number drops to 300K for image responses and 20K for video responses.
What are your thoughts on using AI in the classroom? Is there anything you would like to add to the conversation?



