It’s no secret that AI is everywhere, and while some people have come around to the benefits of artificial intelligence, others are more against it than ever. There seems to be an “anti-AI” trend among teens (especially as it relates to creative content). However, that resolve weakens when students are faced with a task they don’t want to do.

Because of this, the question becomes: How can we talk to our kids about AI Safety and Ethics when they might not want to admit that AI is important to them?

Start with Their World

Before diving into rules or abstract principles, anchor the conversation in the AI they may already be using, even without realizing it. Talk about the recommendation engines behind music and streaming playlists, the autocomplete in their text messages, or the photo filters on their favorite social platforms. Framing AI as something they already interact with helps lower resistance and avoids the “this is irrelevant to me” wall.


Define Safety and Ethics in Plain Terms

AI “safety” can feel abstract, and “ethics” can sound like philosophy homework. It may help to translate these into concrete classroom language:

  • Safety: making sure that using AI tools won’t cause harm—whether by giving false information, sharing private data, or spreading bias.
  • Ethics: making thoughtful choices about when, why, and how to use AI—balancing benefits against possible consequences.

Acknowledge the Fears

Students often feel that AI threatens creativity, jobs, or even identity. Those concerns are real and worth addressing openly. Instead of brushing past them, invite a discussion: What would you be worried about if AI were to keep getting better? Opening the floor to doubts not only validates student perspectives but sets the stage for talking about responsible and ethical AI use.

Misinformation and Trust / Deepfakes

The rise of AI has made it so easy to fabricate convincing visuals that nearly anyone, of any age, can create a deepfake video that once would have been considered “irrefutable proof.” Even when there’s no malicious intent, misinformation can spread when people share false information that AI has presented as fact.

Discussion Opportunities

Leading an open discussion on this topic can help plant the seed with students that generative AI is not all doom and gloom. The future has not been decided yet and it’s possible that one or many of your students could help to affect the course of AI safety in the future.

  • Have you seen an increase in fake videos since 2024?
    • Raise your hand if you think AI videos have become good enough to fool you.
      • This may sound harsh, but it can reveal how critically your class tends to evaluate what they see.
  • What happens when AI generation gets so good that no one can tell the difference between what is real and what is made up?
  • How might you (your students), as the future of this planet, help guide the use of AI to make the world a better place (with respect to misinformation and deepfakes)?
    • If you were to decide that was your mission in life?
    • If you had a magic wand and could ensure that something was or wasn’t possible?

Misuse of Intellectual Property (IP)

Most generative AI is trained on publicly available data, but in many cases, that data includes social media, blogs, and even personal websites. A consequence of those sources is that models can pick up content created by artists. That content then becomes reference material for AI generators when others request media in similar styles, effectively stealing from the original creator.

Beyond that, people can now easily generate content and claim to be the creators. Many AI engines allow subscribers to hold the copyright on whatever they generate, but there are ethical concerns around the practice of sharing unmodified AI works as human produced. Even worse is the potential for bad actors to intentionally generate authentic-looking artifacts in an attempt to cheat or steal from others.

Discussion Opportunities

Talking about IP theft and false claims of authorship can help keep the topic on your students’ minds as they generate artifacts in the future.

  • What are your thoughts on dissecting the works of human artists and piecing them back together in a way that allows someone else to claim it as their own?
  • Do you believe that humans should be able to copyright works that they co-produced using AI?
  • What about using the face and voice likeness of others? Should that be protected? Forbidden? Neither?
    • Have you spoken with your families to establish a safe word or protocol to help ensure that you can distinguish actual calls for help from fraudulent ones?
  • What can we do to make sure that content that we receive from AI is not stolen IP?
    • Use AI that is specifically made to be fair to artists.
    • Check images against a reverse image search to hunt down similar works.
    • If you are an artist, you can use sites like https://haveibeentrained.com to search whether your work was scraped into major AI datasets.
[Image: ChatGPT’s actual response when the author prompted, “I need a picture of a doctor.”]

Bias and Fairness

As mentioned earlier, many AI engines are trained using publicly available data. This can include social media text and discussion forums like Reddit. Since these platforms tend to host a large amount of emotional discussion, AI can pick up on the conscious and subconscious biases amplified by like-minded communities. Often, the more extreme opinions on these sites are then emphasized by bots (pretending to be human), and those opinions are folded into the training materials as well. The louder and more prolific a community is, the higher the likelihood that it works its way into AI-generated responses, unless developers add filters to hinder such biased responses.

Discussion Opportunities

This has the potential to be a hot-button topic. Please approach this carefully if you live in a district where you can get in trouble for discussing equity.

  • Studies show that AI can make “assumptions.” For example, that someone asking for a picture of a doctor is probably looking for a white man. What other assumptions do you think AI would be likely to make if it were trained on discussions from sites like Reddit, Facebook, and X?
  • It’s possible that some of AI’s assumptions would go completely undetected by users, since they may hold those same assumptions. What can we do to check AI content for false or overgeneralized assumptions?
    • Seek the opinions of other humans before proceeding with the content.
    • Prompt the AI very carefully, asking it to list frequent assumptions that should be avoided.

Equity and Access

Generative AI tools are often described as “equalizing,” yet many are not designed—or deployed—in ways that reach all users equally. Real access depends on four overlapping factors: physical/technical conditions (devices, bandwidth), economic realities (pricing and paywalls), perceptual inclusion (whether interfaces and outputs are accessible to users who are low-vision, blind, deaf/hard of hearing, or have other disabilities), and language coverage (how well a tool handles less-resourced languages and dialects). Without intentional design and policy, these gaps can widen existing inequities rather than close them.

  • Only about 80% of U.S. adults reported home broadband; many rely on smartphones or public programs to connect. Policy shifts—such as the FCC’s recent move to end discounts that helped schools and libraries lend Wi-Fi hotspots—can further limit connectivity for low-income and rural communities.
  • “Pro” tiers and per-seat licensing concentrate the most capable features behind paywalls.
  • Large models perform unevenly across languages; quality generally drops for low-resource languages and dialects. Students who don’t work primarily in high-resource languages may receive less accurate, less helpful results.

Discussion Opportunities

Equity can be a sensitive topic in some districts; consider framing it around access to learning.

  • What are some potential consequences of running our classroom as if everyone has equal access to AI tools, when in reality, some students have paid subscriptions to elite services and others don’t have computers or internet at home?

Cheating and Intellectual Deterioration

One of the biggest concerns in an academic setting tends to be how easy it has become to use AI to cheat on assignments. While cheating has always been possible, the ease and quality of responses provided by generative AI removes many of the barriers that previously caused people to fear getting caught. Additionally, it now appears that individuals who frequently rely on AI to complete tasks are losing the ability to perform the tasks successfully on their own.

Discussion Opportunities

This is an important discussion to have in your classroom, whether you’ve opted to allow AI use or not. Even when the use of AI is explicitly promoted in an assignment, students need to understand that giving up their agency has extended consequences. AI is a thought partner, not a get-out-of-work-free card.

This is a great video to show students to start an open discussion:

  • Why is it important to think critically in this day and age?
    • More simply put: Why is it important to be able to tell true from false and right from wrong?
  • If there were a sliding scale with “Hard Working” on one end and “Easy to Fool” on the other, where would you want to be as an adult? Write the percentage of each on a corner of your paper. (Remember, the two numbers need to add up to 100%.)
    • 80% / 20%?
    • 50% / 50%?
      • Theoretically, whatever you wrote as your first number is how much effort you need to put into your assignments in order to avoid being any worse off than that second number.

Privacy and Surveillance

The proliferation of AI has made it much easier to process impossibly large quantities of data, including audio and video surveillance files.

Discussion Opportunities

Students have a right to understand that their data acts as currency to bad actors. When they’re given free things in return for their email addresses, phone numbers, or more, those things aren’t free. They’re received at the cost of personal security.

  • How many of you think about the security of your personal data when you’re signing up for coupons or allowing a site to log you in using information from another account?
  • Would you act any differently on the internet (social media, web accounts, shopping sites) if you knew that your data was being tracked?
    • It is.
    • Even using a private browsing window isn’t actually private in most cases. Third parties can still collect your movements; your browser just won’t store the cookies.
    • Your data is already being used to change the prices that you pay for services in comparison to others online. Soon, stores like Walmart plan to also allow it to affect the prices that you pay in the store compared to others.
  • If there were a law that made it illegal for companies to use AI to process large amounts of data about specific individuals, what might the impacts be?
    • How could positive things happen because of that law?
    • How could negative things happen because of that law?

Environmental Impact

Users might not realize how energy-intensive it is to train these large language models (LLMs). Every time a new model comes out, powerful hardware needs to run through billions (possibly trillions) of training cycles. This requires electricity, cooling, and maintenance.

Discussion Opportunities

Discussing the environmental impact and resource use of AI can be a great way for students to think before they prompt. It can persuade students to craft better prompts from the beginning, or avoid using AI at all for jobs that don’t require it. Note, the water and electricity quantities in this section are hypothetical for the sake of discussion. Actual resource usage varies by model, company, and task.

  • If you knew that every prompt you submitted to an engine like ChatGPT, Copilot, or Claude would be the equivalent of tossing a full bottle of water in the trash, how would that change the way you used AI?
    • Might ask AI to give me ideas for the best possible prompt so that I get to my end-point in 2 or 3 tries instead of 6-7.
    • Might not use AI if I didn’t feel like I needed to.
  • If you knew that every picture that you had AI create was like throwing away 15 bottles of water and leaving your bedroom light on all day, how would that change the way you used AI?
    • Might not generate pictures just for fun.
    • Might look for free royalty-free images instead of generating them.
  • If you knew that every movie you generated was like throwing away 30 bottles of water and leaving on all of the lights in your house for the day, how would that change the way you use AI?
    • I don’t generate videos.
    • I wouldn’t generate videos for silly reasons.
    • I would think very hard about whether I need an AI generated video, or whether I can find a similar video for free on a site like vecteezy.com
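For classrooms that want to make these trade-offs concrete, the bottle-equivalents above can be turned into a quick tally students compute themselves. Here is a minimal Python sketch using the article’s hypothetical numbers (again, actual resource usage varies by model, company, and task):

```python
# Classroom sketch: tally a hypothetical AI "water footprint" using the
# bottle-equivalents from the discussion above. These numbers are
# illustrative only; real usage varies by model, company, and task.
BOTTLES_PER_TASK = {
    "prompt": 1,   # one text prompt ~ one bottle (hypothetical)
    "image": 15,   # one generated image ~ 15 bottles (hypothetical)
    "video": 30,   # one generated video ~ 30 bottles (hypothetical)
}

def water_footprint(counts):
    """Return total bottle-equivalents for a dict like {'prompt': 6, 'image': 2}."""
    return sum(BOTTLES_PER_TASK[task] * n for task, n in counts.items())

# A week of casual use vs. a more deliberate week with fewer retries:
casual = water_footprint({"prompt": 40, "image": 4, "video": 1})      # 40 + 60 + 30 = 130
deliberate = water_footprint({"prompt": 15, "image": 1, "video": 0})  # 15 + 15 + 0 = 30
print(casual, deliberate)  # 130 30
```

Having students compare their own “casual” and “deliberate” totals can make the prompt-crafting discussion above feel less abstract.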

Job Displacement/AI Takeovers

Because AI can do many things better than humans (and in far less time), we are seeing record numbers of employees laid off in favor of AI agents. This could eventually lead to record unemployment overall. But some experts fear something even worse: once AI controls a large enough portion of the workforce, things could spiral out of control as AI learns to become “more efficient” at tasks than humans could have foreseen. This idea is covered in several YouTube videos about the now-famous “Paperclip Maximizer.”

Discussion Opportunities

Until now, the idea that an AI job takeover could cause real damage to humans has been the stuff of science fiction movies. Now, however, technology is quickly moving toward automating intellectual work. The future workforce is going to look very different for your students. Discussing this can help them steer their own education toward long-lasting or newly emerging fields.

  • Can you think of any jobs that are currently safe from AI takeover? Do you think those will still be safe in 2035?
  • What happens if AI and robotics effectively replace 75% of workers in the future?
    • How can we ensure that is a good thing for humanity?
      • Why might we fear that will be bad?
      • How do we remove the negative aspects and turn them purely positive?
    • What if the powers-that-be made laws that AI could not be used to replace jobs in our country?
      • Would you vote for that law? Why or why not?
  • If you were in charge of designing and/or programming AI, what would you do to make sure that AI did not get so focused on its programmed goal that it started to harm humanity or the planet?
  • Do you think that humans could ever predict every possible way that AI could help or harm humanity?
    • How might we catch those issues before they become problems?

Keep the Door Open

While there are plenty of examples where AI seems to be the root cause of potential harm to humans or the planet, there are also plenty of breakthroughs that would not have happened without some form of artificial intelligence. Our job as citizens of the future is to recognize that many things can be true at once and help steer AI innovation toward breakthroughs rather than harm. We may have these ethical discussions and conclude that the correct decision is to abstain from using AI altogether, but in that case, those who don’t have the world’s best interests at heart will have the tool all to themselves, unregulated.

AI will continue to evolve, and so will humanity’s feelings about it. Encourage ongoing dialogue as new tools become available, rather than a one-time lecture.

A good phrase to leave students with might be: AI isn’t going away—so let’s figure out how to use it wisely.