AI and Disinformation
For the last five years, I have spent time teaching students how to think critically about the information they see online. This has involved teaching students about misinformation and disinformation, and practicing skills like lateral reading. This year, however, I decided to supplement these lessons with a focus on AI generated disinformation, as it is quickly making the online information landscape a shakier place.
To check out the slide deck for the lessons below, produced in collaboration with my colleague Tara McLauchlan, please click here. I've also written about the rest of the unit, including teaching about what AI is and how it works, how AI impacts our learning and mental health, and how AI can hallucinate or generate biased content.
What is AI generated misinformation and disinformation?
To start teaching about this topic, students need to have a working understanding of misinformation and disinformation. Misinformation is when a person shares information that is false or misleading without realizing it is not true. This can occur when somebody shares an AI generated video, perhaps one that is even clearly labeled as AI generated, believing it represents a true event. Disinformation involves sharing something that is false or misleading on purpose, perhaps to gain attention or money. For example, in June 2025 somebody used AI generated images to promote a fake store while claiming that their warehouse in Flin Flon had recently been destroyed by wildfire. Disinformation can also be used to push forward a particular point of view, such as when a website with AI generated text made false claims about Pierre Poilievre's net worth or when this video of an AI generated person made racist claims online.
It's important to discuss with students how AI generated disinformation and misinformation can be harmful for us and for society. People getting scammed out of money is obviously negative, as is AI generated content that promotes discriminatory perspectives. While disinformation predates artificial intelligence, the ease with which people can now create false and believable content is unsettling and the impact this is having is alarming.
Verifying Online Information in the Age of AI
Currently there are a lot of good resources that can help students learn to spot AI generated images. Some are quite fun for students, such as this Spot the AI quiz, which positions real images and AI images alongside each other. For identifying AI generated video, I showed my class this video that was produced by professional video editors. However, these resources come with a pretty big caution: AI image and video generators are improving rapidly, and it's unclear how much longer some of these tips will realistically be useful. One useful resource that is following these trends is the YouTube page of Jeremy Carrasco, particularly his shorts, and his work with Riddance. Do note that his work is geared toward educating adults, and while some shorts are good teaching tools, you'll need to preview them before using them in class.
While my students did spend some time looking at content to try to determine whether it was AI generated, the lessons that focused on lateral reading and other information verification skills are more valuable in the long run. A first step can be to consider the source of the information. Checking the profile page can be useful, particularly to see if the account is associated with a reliable source, like a newspaper. Some social media accounts will clearly label whether something is AI generated, but this isn't consistently the case. Checking the previous posts of an account can sometimes help in assessing whether the content is AI generated. It can also help answer the question of why the account is posting this type of content, which is a question students should get in the habit of asking.
Another important skill is lateral reading. This involves opening a new tab and looking at other reliable sources to verify a claim or to check that a source is reputable. As AI image generators become more sophisticated, lateral reading will be an increasingly important skill for assessing online content. For example, the post below falsely claimed that a girl was rescued by a dog during historic floods in Texas. A quick Google search of the keywords "girl rescued by dog in Texas floods" surfaces an article on the fact checking website Snopes that dispels the claim. Snopes is a great tool for finding useful and timely examples. I also gathered a variety of examples using the CTRL-F example bank.
The final skill we looked at was how to complete a reverse image search. Google and other websites offer this service: you can drag and drop an image or paste an image's link to search for that image. On Google, this results in tabs for exact matches to the image, visual matches to the image, and an About this Image tab. The latter can offer information about where the image originated, which is useful when trying to assess its validity.
As AI technology becomes more sophisticated, it will have a significant impact on our information systems and our society. Teaching students skills to think critically and differentiate reliable information from questionable claims is going to become more and more essential.