Google AI Overview and Teaching Critical Thinking

This school year, I decided to teach a unit on online search skills to my grade six class. Using a set of lessons from CTRL-F, we learned about the difference between search engines and social media, how to use search terms to find content online effectively, and how algorithms work. However, before the end of our lessons, it became clear that I needed to update my approach to the topic. Google had introduced AI Overview to Canada, a change that will likely affect how people search for information online.

AI Overview is a feature in which Google's generative AI produces a summary for certain search terms. These summaries are compiled from various online sources, with links back to those sources. While at first glance these summaries may appear useful, they pose a number of problems when it comes to accessing information and thinking critically about what we find.

What Problems Does AI Overview Pose?

First of all, many of the problems that plague generative AI programs are also issues for AI Overview. There is concern about the disproportionate environmental impact of generative AI and its excessive energy needs, and AI Overview is no exception. There is also the question of whether online writers consent to having AI Overview draw from their content. I know I cringed when I saw AI Overview drawing from my blog for book-related searches. At one point, it told me that several middle school books would be appropriate for a grade three student and that 1984 was also a good option. This is not good advice. But putting those criticisms aside, this post will focus on AI Overview and the information it offers, particularly its limitations.

One thing I do appreciate about AI Overview, particularly compared to other generative AI, is that it is possible to see the sources it uses to compile its information. However, these sources are somewhat hidden, and there is a strong impulse to take the summary at face value without fully considering where the information actually comes from and whether the sources are reliable. AI Overview offers an illusion of certainty when more critical engagement with the search process is needed.

Of course, this isn't a new problem. Verifying information and sources has always been an important skill, both before and since the advent of the Internet. In a recent Kurzgesagt video, fact checkers spent a year investigating the claim that a person's blood vessels, stretched end to end, would span 100,000 km, a distance that would circle the world twice. This fact is widespread online and could be considered common knowledge, but where did it come from? After extensive fact-checking, they traced the claim back to a 1922 book by August Krogh and determined that it was not accurate. During the investigation, a separate group of scientists also published a peer-reviewed article with a far more accurate estimate of 9,000 to 19,000 km. However, Google's AI Overview seems unaware of this correction and still offers a measurement that hearkens back to 100,000 km.

An AI Overview summary that indicates that "If stretched end to end, the blood vessels in the human body would stretch over 96,000 kilometers, which is more than twice the circumference of the Earth." A list of blood vessel facts is presented below with sources in a side panel.

This probably shouldn't be surprising, since AI Overview offers a reflection of the Internet and the information found there. The story of 100,000 km (or 96,000 km) is more interesting and memorable, so it has traveled far and now appears to be the correct answer. Some might argue that this is more a critique of the Internet as a whole than of AI Overview specifically. However, some students view AI as a faultless technology, and it is important for them to understand that AI Overview is only as reliable as its sources and that we need to think critically about where it gets its information. It's also worth mentioning that AI Overview does not seem to produce responses for some topics, such as searches related to racism, climate change, or vaccines (at least at the time of publication). I would guess this is an intentional move by developers trying to prevent AI summaries from drawing on disinformation that is widespread in some corners of the Internet.

AI Overview offers a summary of what is found on the Internet, which means that in many cases more dominant voices are more likely to be heard. A student-friendly (and silly) example: AI Overview can give a far more comprehensive list of advantages of dog ownership than of cat ownership, as pictured below. Apparently, the Internet at large is more likely to be a dog lover.

An AI Overview list of advantages of having a dog that includes companionship, stress reduction, improved mental health, reduced risk of cardiovascular problems, increased activity, social connections, sense of purpose and memories. A side panel includes a list of sources.
An AI Overview list of advantages of having a cat that includes stress relief, reduced risk of heart disease, companionship, better sleep, low maintenance, and pest control. A side panel includes a list of sources.

The previous example could spark some lighthearted debate and discussion, but other searches could have more serious consequences for the searcher. For instance, a search for "breastfeeding versus formula" offers a pro-breastfeeding summary. While breastfeeding is a recommended practice, it is not possible for all families, for many different and very valid reasons. In this example, the dominant medical advice is communicated, but not the lived experience of many mothers. I would be curious whose other lived experiences aren't being captured by these summaries, particularly when it comes to equity-seeking groups.

An AI Overview summary that discusses how "breastfeeding is generally considered to be better for a baby than formula, but the decision to breastfeed or formula feed depends on individual circumstances." A side panel includes a list of sources.

Other searches demonstrate how historically dominant voices are privileged in these AI summaries. For instance, a search for the best science fiction authors offers a list of exclusively white, male authors, with the exception of Mary Shelley, who is included as a footnote. Writers like Octavia Butler, Ursula Le Guin, N. K. Jemisin, and Margaret Atwood did not make the cut despite their important contributions to the genre. These examples are based on my own interests and curiosities, but I suspect that asking students would surface other examples reflective of their own experiences and interests.

An AI Overview list of "some of the best science fiction authors", including Isaac Asimov, Orson Scott Card, Arthur C. Clarke, Jules Verne, H.G. Wells, William Gibson, Kim Stanley Robinson, and Neal Stephenson. Mary Shelley is included as a footnote.

What Teachers Can Do

AI Overview offers the illusion of certainty, puts a hurdle in the way of examining sources, and mirrors dominant online voices and perspectives. For these reasons, it is important that educators actively teach how to search for information effectively and how to critically evaluate the information AI generates. The CTRL-F lessons on online search and verification skills are a good place to start, but direct instruction is also necessary.

Teachers should model checking the sources of AI summaries and asking critical thinking questions: Are these sources reliable? Is this giving me the best information? For instance, a search for how to dress for cold weather results in the screenshot below. The highlighted portion suggests that a fedora or cap would be a good choice. I live in Canada, so that seems like a laughable idea in -30°C weather. However, the source for that suggestion links to Project Social T, a clothing company based in Los Angeles and perhaps not a source of expert knowledge in this area.

An AI Overview list of what we should do when dressing for cold weather. The second point recommends wearing a hat, including a beanie, cap, or fedora.

Another useful search is asking who would win in a fight: a grizzly bear or a gorilla. While the topic is silly, AI Overview indicates that the grizzly would win and cites some reasons. However, if you look at the sources, you'll notice that the main one, Xavier News, links to an article written for a student newspaper. The article is fun and well-done student work, but it should also elicit discussion about the reliability of the information.

An AI Overview that gives a list of reasons why a grizzly bear would likely win a fight against a gorilla. A side panel lists sources, including Xavier News.

There are other avenues for introducing critical thinking in relation to AI Overview. A CTRL-F lesson recommends having students search for topics using positive, negative, and neutral terms, such as "health benefits of chocolate," "negative health effects of chocolate," and "health impacts of chocolate." CTRL-F notes that the Google search results page will offer different results for each, and AI Overview will differ similarly. The dog and cat example above could be used in a similar way, particularly if the queries are framed in positive and negative lights. Beyond these types of examples, I would be curious to see what avenues of investigation could be created with students.

Finally, it is always a good idea for teachers to spend time teaching students how to determine whether something is a reliable source. Learning how to cite sources is also useful, and probably something I have not personally spent enough time on in the past. Even before AI Overview hit the scene, students would often mistake Google for a source and need clarification about the difference between the search engine and the sources of information it finds for us. Emphasizing these skills is important because young people need to be able to assess the reliability of sources, investigate information themselves, and consider diverse perspectives, with or without AI Overview's assistance.