
By now, it's clear that AI is more than a trend. Over the last two years, it has seeped into every crack and crevice, from entertainment to hobbies to education. And these tools will keep spreading as long as they keep getting better and more reliable.
But as AI becomes more prevalent, ethical concerns become harder to ignore. In this story, I'll walk you through some of the most pressing issues and offer solutions you can consider to help take the edge off.
Lies
One of the most pressing issues with generative AI is that it can confidently generate false information. These "hallucinations" may sound convincing, but they can be entirely fabricated. In an educational context, this can be especially harmful: students may unknowingly copy and share incorrect information, and educators might use flawed content without realizing it.
How to help:
Teachers can model critical thinking by encouraging students to fact-check AI-generated responses, just as they would with any other online source. Assignments can include cross-referencing with trusted databases or citing multiple sources to verify claims. I like to suggest that students go through and circle all of the facts in the research that gets returned to them. From there, they should cross out every fact they can’t verify, and place a checkmark next to every fact that checks out with at least two other sources.
Large companies, like Amazon, are also working to reduce hallucinations through techniques like retrieval-augmented generation (RAG), which grounds answers in verified documents. But it's still a good idea to teach students how to question and verify what they read, whether it comes from AI or not.
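For the curious, here is a minimal sketch of the retrieval idea behind RAG. It uses simple word overlap as a stand-in for the embedding-based search a real system would use; the sample documents, the question, and the scoring method are all illustrative assumptions, not any particular company's implementation.

```python
import string

def tokens(text: str) -> set[str]:
    """Lowercase a string, strip punctuation, and return its unique words."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def retrieve_best(question: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the question.

    A real RAG system would use semantic embeddings instead of raw
    word overlap, but the principle is the same: find the most
    relevant trusted source before answering.
    """
    return max(documents, key=lambda doc: len(tokens(question) & tokens(doc)))

# A tiny hypothetical "verified" collection a classroom tool might search.
verified_docs = [
    "The water cycle includes evaporation, condensation, and precipitation.",
    "Photosynthesis converts sunlight, water, and carbon dioxide into glucose.",
    "The French Revolution began in 1789.",
]

question = "What does photosynthesis convert sunlight into?"
best = retrieve_best(question, verified_docs)

# In a full RAG pipeline, the retrieved passage is prepended to the prompt
# so the model's answer is anchored to a source a student can check.
prompt = f"Answer using only this source:\n{best}\n\nQuestion: {question}"
print(best)
```

The payoff for the classroom is the last step: because the answer is tied to a named source, a student can follow the same fact-checking routine described above, with the source handed to them.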
Bias
AI models learn from the data they are trained on—and unfortunately, that data often reflects historical and cultural biases. This means AI outputs can unintentionally reinforce harmful stereotypes or leave out underrepresented perspectives entirely.
How to help:
Educators can use these moments as teaching opportunities to discuss how bias enters systems, how it affects outcomes, and how we can design more equitable technology. Lessons around algorithmic bias, fairness, and inclusive design are becoming just as important as learning to code. Many companies are now investing in broader and more inclusive training datasets and publishing transparency reports, but awareness at the classroom level is still critical.
To illustrate this behavior, try requesting “a picture of a doctor.” What did you get?
What if you ask for a story about a male teacher, then a story about a female teacher? How are they different?
Hopefully, as the training content changes, so will the responses, but not every generative AI system will be run by a company that values fair and equal representation.
IP Theft
Generative AI tools learn from enormous datasets that include art, writing, code, and music scraped from the web—often without the consent of original creators. This has raised serious questions about intellectual property and creative ownership.
How to help:
Teachers can guide students in ethical creation practices by emphasizing the value of original work and the importance of crediting sources. Classroom conversations can also include fair use, Creative Commons licensing, and the role of AI in remix culture.
Meanwhile, some AI companies have begun offering opt-out systems for artists and are working on watermarking techniques to protect original content, though enforcement is still inconsistent. Additionally, some image generation tools (Adobe Firefly, for example) say they train only on licensed and contributor-supplied content.
Resource Use
Training and running large AI models requires significant electricity and water—much more than most people realize. Data centers often rely on carbon-intensive energy and use millions of gallons of water annually for cooling, contributing to environmental stress.
How to help:
This issue ties directly into digital citizenship and sustainability education. Teachers can incorporate lessons on the environmental impact of digital technologies, encouraging students to consider when and why they use AI tools. It’s also a good idea to point out that students may want to reconsider generating pictures and videos “just for fun,” as those formats gobble up the most energy and fresh water.
On the corporate side, some companies have pledged to reduce carbon emissions and increase transparency about their resource use. Microsoft, for example, has committed to being both carbon negative and water positive by 2030, but tracking and accountability are ongoing concerns.
Wrapping Up
As AI becomes more embedded in our lives and classrooms, we have the opportunity, and the responsibility, to teach students not just how to use it, but how to use it thoughtfully.
When we guide students to understand both the power and the pitfalls of this technology, we prepare them to take partial responsibility for the world that AI will help to shape.