Student Edition
Level 2

A self-paced course on thinking critically, creating responsibly, and using AI with real understanding. Enter your name to begin; your progress saves automatically on this device.

Unit 5 of 7

What to Watch Out For

AI brings real benefits, but also real risks. This unit covers the dangers that matter most: misinformation, deepfakes, over-reliance, and privacy. Not to scare you, but to prepare you.

📖 4 Lessons
⏱️ ~50–65 min
📝 Graded Quiz
🎯 80% to pass
CCR Focus
Critical: Identify specific AI risks and apply critical scrutiny to content, sources, and your own habits
Creative: Use awareness of risk to make more deliberate, intentional choices about when and how to use AI
Responsible: Protect yourself and others from the real harms AI can enable

Misinformation & Deepfakes

When AI makes false things look real.

Start Here

Humans have always spread misinformation, but it used to require effort, expense, or skill to fabricate convincing false content. AI has dramatically lowered all three. Understanding how AI enables misinformation is now a core survival skill for anyone who consumes digital content.

AI-Generated Misinformation at Scale

Generative AI has made it vastly easier to produce large quantities of false but plausible-sounding content. False news articles, fake academic papers, fabricated social media posts, invented quotes from real people: all of these can now be generated in seconds, at scale, and at a quality level that makes them difficult to distinguish from genuine content on first reading.

The danger is not just that individual pieces of false content are harder to detect; it is that the volume of potential misinformation has increased enormously. When false content could only be produced slowly and expensively, the overall landscape of digital information still contained mostly genuine content. Now, content can be generated faster than it can be fact-checked, which creates an environment where bad actors can flood information channels with false narratives and make it genuinely difficult to find and amplify the truth.

Deepfakes: When Seeing Is No Longer Believing

Deepfakes are synthetic media (video, audio, or images) in which AI has replaced or altered a person's appearance or voice with convincing realism. What once required well-funded professionals with specialized technical skills can now be produced by anyone with a consumer device and basic software.

The implications are significant. Political deepfakes can put false words in the mouths of real leaders. Financial deepfakes can impersonate executives to authorize fraudulent transactions. Personal deepfakes can be used to harass, defame, or humiliate private individuals. In each case, the harm is real, and the challenge is that audiences may not immediately know what they are seeing is fake.

The most important defense against deepfakes is not technical detection software; it is the habit of pausing before sharing any emotionally charged content, cross-referencing with independent sources, and treating surprising or inflammatory content with heightened skepticism. If a video makes you angry, scared, or triumphant in a way that seems deliberately engineered, that emotional response is worth examining.

Key Idea

The single most powerful protection against misinformation and deepfakes is the pause before you share. If content is designed to provoke a strong reaction and demands you act immediately, that urgency is itself a red flag.

How to Evaluate Suspicious Content

Several practices help evaluate suspicious content. Reverse image search can reveal whether an image has appeared elsewhere, in different contexts or at earlier dates. Checking whether multiple independent, credible news outlets are reporting the same story provides a quick test for major claims. Examining metadata on images and videos can reveal inconsistencies. And perhaps most importantly, asking "who benefits from me believing this?" is a useful frame: misinformation almost always serves an interest that benefits from your emotional reaction.

Read & Analyze

📹 The Viral Video

During a tense local election, a video appears on social media showing a candidate apparently making a shocking statement at a private event. The video is filmed from a distance, somewhat blurry, and the audio is slightly distorted. Within hours it has been shared thousands of times. Many people in the comments are outraged โ€” some claim it confirms what they always suspected about the candidate, others insist it's fake.

Rania sees the video in her feed. She has no strong feelings about the election. She notices several things: the video appeared from an account with no prior history. No mainstream news outlet has reported on the event shown. The lighting in the video looks inconsistent in a few places. The audio quality seems different from the background noise. A reverse image search of the account's profile photo reveals it appears on multiple other social media profiles.

She searches for corroborating reporting and finds none. She decides not to share it and flags it as potentially misleading.

1
What specific red flags did Rania notice? Which of these would you have caught? Which might you have missed?
2
Why is the fact that the video provokes outrage actually a reason to be more skeptical, not less?
3
What would it take for you to feel confident that this video was real? What verification process would you need to follow?
4
What is the harm in sharing a piece of content you're not sure about, as long as you add "this might be fake" in your caption?

🔎 CCR Connection

Critical

Emotional responses to content are not evidence of its accuracy. The stronger your reaction, the more deliberately you should verify before sharing.

Creative

Media literacy is a creative skill: constructing a clear picture of what is actually true requires active effort and multiple sources.

Responsible

Sharing unverified content, even with caveats, spreads it. Your digital citizenship includes taking responsibility for what you amplify.

Over-Reliance and the Atrophying of Thinking

What do you lose when AI thinks for you?

Think First

AI tools can make you faster, more productive, and better at certain tasks. But they can also make you weaker if you stop developing the skills you outsource to them. This is not a theoretical risk. It is already happening, and it is worth taking seriously.

The Atrophy Problem

Cognitive skills (writing, reasoning, problem-solving, memory) improve with use and weaken with disuse. This is not a moral failing; it is simply how the brain works. When a skill is consistently outsourced to a tool, the neural pathways associated with that skill get less exercise.

The question for AI use is not whether this happens (it does) but whether it matters. For any given skill, the relevant question is: do I actually need to be able to do this myself, or is having a reliable tool that does it sufficient? For some tasks, the tool is fine. If GPS navigation means you no longer build a mental map of your city, that may simply be an acceptable trade-off for never getting lost.

But for skills that are central to learning, reasoning, and independent judgment (critical thinking, writing, synthesis, problem-solving), the trade-off is much more significant. These are not just task-completion skills. They are the fundamental capacities through which you understand and engage with the world. Outsourcing them comprehensively to AI is not just a convenience; it is a development trajectory with real consequences.

The Difference Between Enhancement and Replacement

Not all AI use is created equal from a cognitive development standpoint. Using AI to get feedback on writing you have already done engages your judgment in evaluating that feedback. Using AI to write for you bypasses the development of the skill entirely. Using AI to check math you have already solved develops the ability to recognize right answers. Using AI to solve math you never attempted develops nothing except the ability to operate an AI tool.

Enhancement uses AI as a partner that extends what you can do while you remain cognitively engaged. Replacement uses AI as a substitute that does what you would otherwise do, but doesn't develop you in the process.

The distinction is not always clean, and it shifts with context: a professional writer using AI to speed up tasks they have already mastered is different from a student using AI to avoid developing the craft at all. But asking honestly which side of the line you're on is a valuable habit.

Key Idea

Using AI to skip development is borrowing against your own future. The skills you outsource now may be the skills you lack when it matters most.

Building a Healthy AI Relationship

A healthy relationship with AI tools is one in which you are deliberate about what you do yourself and what you delegate. It includes knowing your own development goals (the skills you are trying to build) and being unwilling to outsource practice in those specific areas. It includes using AI for tasks that genuinely benefit from it rather than out of habit or laziness. And it includes periodic practice without AI, both to maintain skills and to understand what you actually know and can do independently.

Read & Analyze

🧠 The Calculator Generation

A group of students in an advanced class have been using AI tools extensively for most of their major assignments for the past semester. Their teacher gives them an in-class essay: no AI, no notes, just themselves and a piece of paper.

Several students find they are stuck in ways that surprise them. One student, who has produced some of the most polished essays all semester, stares at a blank page for fifteen minutes. "I don't know how to start without the AI to get me going," she admits afterward. Another student produces a genuinely strong essay, rough in some ways but authentically his own.

The teacher reflects: she has been assessing the AI's writing all semester, not the students'. The polished essays told her almost nothing about what the students could do on their own. And the students who relied most heavily on AI now know least about their own capabilities.

1
What has the student who stares at a blank page lost? Why does it matter beyond this specific assignment?
2
Is it the teacher's fault, the students' fault, or something more complicated? Make a nuanced argument.
3
What is the difference between the two students in the scenario, the one who is stuck and the one who writes a strong in-class essay? What choices might explain the difference?
4
What skills in your own life are you at risk of atrophying if you rely on AI too heavily? Be specific.

🔎 CCR Connection

Critical

Over-reliance on AI is worth scrutinizing critically, especially when the skill being outsourced is one you are still supposed to be developing.

Creative

Creative and intellectual work grows from the struggle. AI should support the work, not remove the struggle that makes you better.

Responsible

Your future self deserves the skills you develop now. Using AI responsibly means thinking about long-term capability, not just short-term convenience.

Privacy, Data & Your Digital Life

What AI knows about you, and what that means.

Think First

Every time you interact with an AI tool, you are providing data. Every prompt you type, every preference you reveal, every behavior you exhibit is information that may be stored, analyzed, and used. Understanding what happens with that data is part of being a literate AI user.

What Data AI Systems Collect

When you interact with a consumer AI tool, you typically generate several types of data: your input (the prompts you type), your behavior (which suggestions you accept or reject, how long you engage), and sometimes metadata (your device, location, and usage patterns). Many AI services use this data to improve their models, meaning your conversations may become training data for future versions of the AI.

The privacy implications depend on what you share. Most consumer AI tools have terms of service that allow the company to use conversation data for model improvement, marketing, and product development. This does not mean your conversations are being read by a human employee, but it does mean that sensitive personal information you share with an AI tool is not necessarily private.

The practical implication: avoid sharing personal identifying information with AI tools unnecessarily. Your legal name, address, medical details, financial information, passwords, or the personal details of others: none of this should be entered into a consumer AI tool that has not given you specific, verifiable assurances about data handling.
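That advice can be turned into a concrete habit. The Python sketch below is purely illustrative (the patterns, the `scrub` helper, and the sample prompt are all invented for this example, and real personal-data detection is far harder): it strips a few obvious kinds of identifying details from text before it leaves your device.

```python
import re

# Illustrative patterns only -- a real safeguard would need far more than this.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace obvious personal details in a prompt with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

print(scrub("Email me at mei@example.com or call 555-123-4567."))
# -> Email me at [email removed] or call [phone removed].
```

The point is the pause, not the patterns: get in the habit of reviewing and removing identifying details before a prompt is sent anywhere.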

Personalization and the Filter Bubble

AI-driven personalization systems (the algorithms that decide what content to show you on social media, streaming platforms, and search engines) are simultaneously one of the most useful and most concerning applications of AI. Useful because personalization can surface content genuinely relevant to you. Concerning because it can also create a filter bubble: an information environment in which you primarily encounter content that confirms what you already believe.

Filter bubbles are not a conscious conspiracy; they are the outcome of an algorithm optimizing for engagement. Content that confirms your existing beliefs tends to generate more engagement than content that challenges them. An algorithm optimizing for engagement will therefore tend to serve you more confirming content over time. The result can be a personalized information environment that feels complete but is actually missing important perspectives, counterarguments, and facts that conflict with your existing worldview.
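That feedback loop can be sketched in a few lines of code. The toy Python simulation below is invented for illustration (the click rates and the scoring rule are assumptions, not how any real platform works): a feed repeatedly shows whichever content type has earned more engagement, and because confirming content gets clicked more often, it gradually crowds out challenging content.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

# Assumed click rates: confirming content engages more than challenging content.
CLICK_RATE = {"confirming": 0.6, "challenging": 0.2}

def simulate_feed(rounds=1000):
    """Toy engagement-optimizing feed: each click raises a content type's
    score, and higher-scoring types get shown more often (a feedback loop)."""
    score = {"confirming": 1.0, "challenging": 1.0}  # start with no preference
    shown = {"confirming": 0, "challenging": 0}
    for _ in range(rounds):
        # Pick a content type with probability proportional to its score.
        total = score["confirming"] + score["challenging"]
        kind = ("confirming"
                if random.random() < score["confirming"] / total
                else "challenging")
        shown[kind] += 1
        # A click reinforces the type that was shown.
        if random.random() < CLICK_RATE[kind]:
            score[kind] += 1
    return shown

result = simulate_feed()
print(result)  # confirming content ends up shown more often
```

Notice that nobody programmed a bias into this feed; the narrowing emerges purely from optimizing for engagement.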

Key Idea

A personalized information environment feels more comfortable and relevant than a genuinely diverse one. That comfort is worth being suspicious of. The questions "what am I not seeing?" and "whose perspective is missing?" are habits worth building.

Being a Thoughtful Digital Citizen

Digital citizenship in the AI era means being thoughtful about data: what you share, with whom, under what conditions, and with what understanding of how it may be used. It means reading privacy policies critically, or at minimum understanding that your agreement to terms of service is a real agreement with real consequences. It means choosing AI tools partly based on their data handling practices, not just their capabilities. And it means being aware of how personalization shapes your information environment, and deliberately seeking out perspectives and sources outside your algorithmic comfort zone.

Read & Analyze

๐Ÿ” The Health Chatbot

Mei is dealing with a personal health concern she is embarrassed to discuss with her parents or doctor. She discovers an AI chatbot marketed as a private, confidential health advisor. She begins sharing detailed personal health information with it over several weeks, including symptoms, family history, and her mental health struggles.

Later, Mei notices that the ads she sees everywhere (on social media, on websites, even in her email) have started reflecting the topics she discussed in those conversations. She reads the chatbot's terms of service for the first time and realizes that the company shares aggregate (and potentially individual) user data with advertising partners for "personalized health marketing."

Mei is unsettled. She never intended for her private health concerns to influence what ads she sees. She wonders what else may have been done with her information.

1
What specific mistakes did Mei make, and what could she have done differently at each step?
2
Why is the phrase "private and confidential" in marketing language not the same as a legal guarantee of privacy? What should Mei have looked for instead?
3
What are the potential consequences, beyond just annoying ads, of health-related data being shared with advertising partners?
4
Draft a personal "data hygiene" policy for yourself: specific rules for what personal information you will and won't share with AI tools.

🔎 CCR Connection

Critical

Read critically before you share. Terms of service are real agreements; understanding them is a critical literacy skill.

Creative

Deliberate choices about your digital life (what you share, what you keep private, whose algorithms you feed) are creative acts of self-determination.

Responsible

Your data is part of who you are. Protecting it thoughtfully is an act of self-respect and responsibility toward others whose data may be affected by the same systems.

Unit Quiz & Final Reflection

Show what you know, then show what you think.

Answer all 8 questions, then submit. You need 80% (7/8) to pass. If you don't pass, you'll be directed back to review before retaking.
Question 1 of 8
Why has AI dramatically increased the risk of misinformation compared to previous eras?
Question 2 of 8
A video appears showing a public figure making a surprising statement. It has been shared thousands of times and provokes strong emotional reactions. What should be your first response?
Question 3 of 8
What is the "atrophy problem" in the context of AI and cognitive skills?
Question 4 of 8
What is the key difference between "enhancement" and "replacement" in AI use?
Question 5 of 8
Why should you avoid sharing personal health or financial information with consumer AI tools?
Question 6 of 8
What is a "filter bubble" in the context of AI personalization?
Question 7 of 8
What is the most important single habit for protecting yourself against misinformation?
Question 8 of 8
A student consistently uses AI to draft all of her essays in a class. In an in-class, no-AI assessment, she stares at a blank page and cannot start. What does this most likely indicate?
🎓

Unit 5 Complete!

You've finished What to Watch Out For. Below are your certificate and digital badge.

CCR
Student Edition · Level 2 · Kerszi.com · #AILiteracyAndEthics
This certifies that
Student Name
has successfully completed Unit 5 of the AI Literacy & Ethics course, demonstrating understanding of AI-enabled misinformation, deepfakes, over-reliance on AI, and digital privacy risks and the critical thinking skills required to navigate an AI-shaped world.
Critical · Creative · Responsible
Unit 5: What to Watch Out For
AI Literacy & Ethics · Level 2 · Secondary · #AILiteracyAndEthics
Date Completed
Quiz Score
Kathi Kersznowski, Course Author

Your downloadable digital badge โ€” shareable on LinkedIn, email signatures, or portfolios.

Ready to continue?