Thinking Critically about Artificial Intelligence - Part 1

Artificial intelligence is increasingly being sold by tech companies as a solution to many problems and a way to make life easier. This is especially true in the world of education, where teachers are encouraged to use AI to create engaging lessons, adapt for learners and involve students in this new technology. However, sometimes it seems that people use AI without fully understanding how it works, how it impacts us and how it is affecting our world. This is cause for significant concern, since it is hard to make decisions about technology when our understanding of it and its impacts is limited.

I am an AI skeptic, meaning I don't use generative AI in my classroom. This is in part due to ethical considerations (which I'll discuss below), but also because I'm wary of the impact this technology can have on mental health, learning, skill development, and self-expression. However, the technology is not going away, and opting to ignore it in my classroom would be a disservice to my students. While I don't value it as a learning tool, I do know that they need to better understand AI so they can make informed decisions about its use in their own lives.

To support this goal, I collaborated on a set of lessons with Tara McLauchlan to help students understand what artificial intelligence is, how AI affects us, and the impact AI has on our information landscape. While I delivered these lessons to grade seven students, the above resource can be adapted for older students, particularly in an English Language Arts context.

What is AI?

The first set of lessons focused on what artificial intelligence is, how it works, how AI models are trained and the environmental consequences of these processes. This understanding of the basics of AI technology is important. Without it, students tend to misunderstand how AI works and develop overconfidence in models that come with significant flaws. This basic understanding also helps support later lessons relating to hallucinations and bias.

Firstly, students need to know that there are different types of artificial intelligence. Often, AI gets treated as a monolith. I have seen news articles where a journalist offers criticisms of ChatGPT, and then seeks to offer balance by discussing a different type of AI model, such as one that detects and monitors forest fires. Both of these examples might use machine learning, a process in which an AI system analyzes training data so that it can generalize to unseen data without necessarily being explicitly told how to do so. However, the training data, statistical algorithms, expected outcomes, environmental impacts, and the overall scale of each model are drastically different. It's important that students can distinguish between different types, so they can think critically about the AI products that corporations increasingly present to us.

Students tend to be most familiar with two types of AI: predictive and generative. Predictive AI involves models that make predictions based on a data set, such as the personalized recommendations Netflix provides based on past viewing history. Generative AI involves models that can generate text, images or video based on a written prompt, and includes Large Language Models (LLMs) such as ChatGPT or Claude. Since generative AI tends to pop up more regularly in a school context, it is the main focus of subsequent lessons.
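For teachers who want to make the predictive idea concrete, a few lines of Python can sketch it (the viewing history and genres here are invented for illustration; real recommendation systems are far more sophisticated):

```python
from collections import Counter

# A toy "predictive AI": guess what a viewer will want next
# by finding the most common pattern in their past data.
watch_history = ["comedy", "drama", "comedy", "documentary", "comedy"]

def recommend_genre(history):
    """Predict the next genre by counting what was watched most often."""
    counts = Counter(history)
    return counts.most_common(1)[0][0]

print(recommend_genre(watch_history))  # prints "comedy"
```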

Generative AI is a probability engine that produces answers that are statistically plausible (which is, strictly speaking, not the same thing as correct). Large Language Models are trained on extensive amounts of training data, such as news articles, social media posts, books and more. Typically, more training data produces more accurate output. To generate answers, these models draw on probability, linear algebra and calculus. While the math involved is complex, it is possible to run a simplified simulation to help students understand the basics.

Pretend we have a very simple AI model that has been trained on six sandwich recipes: 1) salami, 2) ham, 3) cheddar, 4) tuna, 5) cucumber, and 6) peanut butter. I prompt the AI model to give me a sandwich recipe that has two ingredients. If I roll two six-sided dice, I can demonstrate how the AI model is generating my sandwich recipe. In my class, students generated some delicious recipes, like ham and cheddar, while others generated questionable outputs, like peanut butter and tuna.
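For classes with access to computers, the same dice activity can be run as a short Python program (a minimal sketch; the random module simply stands in for the dice):

```python
import random

# The six fillings our toy model was "trained" on,
# numbered 1-6 to match the faces of a die.
fillings = {1: "salami", 2: "ham", 3: "cheddar",
            4: "tuna", 5: "cucumber", 6: "peanut butter"}

def generate_sandwich():
    """Roll two six-sided dice and return the matching fillings."""
    first_roll = random.randint(1, 6)
    second_roll = random.randint(1, 6)
    return fillings[first_roll], fillings[second_roll]

for _ in range(3):
    print(" and ".join(generate_sandwich()))
# Possible output: ham and cheddar, peanut butter and tuna, ...
```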

Of course, real AI models are a lot more complex, and I do show my students this video to make that clear. However, this simplified simulation is still useful because it shows that the AI model is not a person making decisions and does not understand what it is generating. It is following an algorithm, which is a set of instructions. The simulation also helps students understand that AI generates information based on probability, which means it often produces correct information but can also produce something incorrect or nonsensical. This helps explain AI hallucinations, which occur when an AI model generates something bizarre or inaccurate, such as when Google's AI Overview told people to put glue on pizza and to eat rocks.
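To nudge the simulation one step closer to how real models behave, the dice can be replaced with weighted sampling: each filling is drawn in proportion to how often it appeared in the training recipes (the frequencies below are invented for illustration). Common combinations now come out most of the time, but unlikely ones still appear occasionally, a tiny analogue of a hallucination:

```python
import random

# Made-up frequencies: how often each filling appeared in the training recipes.
fillings = ["salami", "ham", "cheddar", "tuna", "cucumber", "peanut butter"]
counts   = [5, 12, 10, 4, 3, 1]

def generate_sandwich():
    """Sample two fillings in proportion to their training frequency."""
    # Sampling is with replacement, so the model can even repeat a filling,
    # much as a real model can repeat itself.
    return random.choices(fillings, weights=counts, k=2)

print(" and ".join(generate_sandwich()))
# "ham and cheddar" is the most likely output, but "peanut butter and tuna"
# can still turn up: statistically plausible is not the same as correct.
```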

Training Data

To generate information, generative AI needs to be trained on extensive amounts of data, and the use of this data without permission has caused controversy. Privacy issues have arisen where companies have scraped user data to use for training without consent. As well, artists and authors have objected to AI companies using their creative works to train AI models without permission; some have even sued AI companies.

Generative AI also requires the labour of data workers, who label images, give feedback on the accuracy of generated information, or moderate content. While this work is essential to developing better AI models, data workers are often not well treated. Many companies have outsourced this work to countries like Kenya, where data workers are underpaid and overworked. Many are also exposed to upsetting and traumatic content as part of their job, without mental health supports.

With my students, we spent time exploring the issues of copyright, privacy and working conditions that are connected with the training of AI models. This involved reading articles or viewing media that discussed these issues, identifying the main concerns, discussing potential solutions to these problems, and expressing our opinions. It became clear that most solutions we proposed involved requiring AI companies to act in more ethical ways, such as compensating artists and authors fairly or providing better wages and mental health supports for data workers. It isn't lost on me that AI companies are not likely to take these steps independently.

AI and the Environment

The last part of understanding how AI works is learning about the physical infrastructure it requires and how that infrastructure impacts local communities and the environment. We might use our phones or computers to access AI, but the computer systems that actually house these models are located in data centers. These data centers contain many servers, which perform the processing that lets AI generate text or images.

Data centers have a large environmental footprint. AI, particularly generative AI, requires huge amounts of electricity to run, which in many places comes from burning fossil fuels. Last year, data centers accounted for about 1.5 percent of the world's electricity consumption, and predictions point to that usage doubling by 2030, driven largely by AI. Additionally, servers generate a great deal of heat and must be cooled to function properly. Cooling currently involves large amounts of fresh water, which can strain water resources, particularly during droughts. And while using AI requires electricity and water, training models is also resource intensive.

To help students understand how AI impacts the environment and communities, I had students generate a t-chart of positive and negative impacts. We watched a video discussing how AI servers require water for cooling and another that discussed the impact of AI on the environment. The second video included some uses of non-generative AI by scientists to address climate change impacts, such as supporting farmers or detecting wildfires. While these were likely presented in an effort to demonstrate balance, it is worth pointing out that the journalist is comparing different types of AI. Generative AI is far more resource intensive, which should inform our decision making about it. We also annotated an article that focused on the impact of data centers on specific communities. Afterwards, students were able to describe various ways AI can impact the environment and communities.

Generative AI has been positioned by corporations as invaluable, but this marketing typically obfuscates how AI works, how it is trained, and the physical processes needed to support it. When we better understand these basics, along with some of the ethical considerations mentioned above, we can learn to think more critically about how AI is used and how it impacts us, which I will discuss in Part 2 of this series.