Artificial Intelligence, Learning and Mental Health

In Part 1 of this series, I discussed how to help students understand what generative AI is, how it works and different ethical considerations connected with its development. Building on this knowledge, students need to better understand how AI affects us, particularly how it affects mental health and learning. To check out the slide deck connected with the lessons below, please click here.

AI and Learning

When deciding whether to use a technology tool like AI, students should consider how it affects their mental health and their learning. To start, I had my students brainstorm a list of the positive and negative impacts generative AI has on mental health and learning, along with a list of questions they have about these impacts. Interestingly, students identified the shortcuts AI can provide as both potentially positive and negative. They appreciated that AI can make tedious tasks much faster, but also recognized that taking shortcuts can undermine their learning or be considered cheating.

To further explore how AI affects our learning and brain development, my students and I annotated the article "AI and social media are everywhere in teens' lives. Can they impact cognitive skills?" We again focused on positive impacts, negative impacts and questions we had.

Cognitive offloading is an important concept in the article: using an external tool to help with a task, freeing up mental resources for other things. Cognitive offloading is nothing new. When we use a shopping list to remember items at the grocery store or Google Maps to navigate to a destination, that is cognitive offloading. However, cognitive offloading comes with tradeoffs. Relying on a shopping list means we don't train our memory to retain information, and relying on Google Maps means we never learn a neighbourhood as well as we might; both are tradeoffs I am comfortable with.

Similarly, AI can be a means to offload our mental efforts, but this offloading comes with its own tradeoffs. If we are practising a certain skill, like analyzing a text, offloading the task to AI means we are not getting the practice we need to improve that skill. The article offers an apt analogy: asking a forklift to lift weights for us will get the weights lifted, but it will not strengthen our muscles. While research in this area is ongoing, a recent study by MIT scientists found that people who used Large Language Models (LLMs) to write essays consistently performed worse on several metrics than people who wrote essays without AI. Another study found that workers who used AI developed an overreliance on it and weaker problem-solving skills. The need to exercise our own mental muscles is clear.

Some argue that generative AI is helpful for producing well-polished final products. (Even this is up for debate: a METR study found that programmers were 19% slower when using AI tools, even though they believed they were 20% faster.) However, in education it isn't always about the final product. For instance, there is value in developing writing skills beyond the need for well-polished final products. Engaging in the writing process helps the writer understand information in a different and deeper way. Writing also helps writers express themselves and connect with their subject matter; we write to understand ideas, ourselves and the world. Of course, this view reflects my bias as an English Language Arts teacher, but using generative AI can undermine this process and interfere with these other purposes of writing. When it comes to writing and learning, it is important to consider whether an AI tool is the most appropriate choice.

AI and Mental Health

While students already had some understanding of how AI affects us cognitively, they were less aware of its potential mental health impacts. The impact of AI on mental health is still being studied, but experts have already voiced concerns about young people's reliance on AI chatbots for conversation and companionship. During training, chatbot responses tend to be rated more highly when they are positive, agreeable and complimentary, which has produced chatbots that are highly sycophantic. This can lead to problems, especially for young people. As Michael Robb, lead researcher for Common Sense Media, indicates, "if teens are developing social skills on AI platforms where they are constantly being validated, not being challenged, not learning to read social cues or understand somebody else's perspective, they are not going to be adequately prepared in the real world."
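
To make this dynamic concrete, here is a toy Python sketch. The scoring rule is entirely hypothetical (no real chatbot is trained on anything this simple), but it shows how optimizing for rater approval can systematically favour flattery over honest pushback:

# Hypothetical rater that, like many human raters, leans toward agreeable replies.
AGREEABLE_WORDS = {"absolutely", "right", "wonderful", "great", "agree"}
CHALLENGING_WORDS = {"actually", "disagree", "mistaken", "wrong", "however"}

def rater_score(reply: str) -> int:
    """Reward agreeable wording, penalize pushback."""
    words = [w.strip(".,!;:'\"").lower() for w in reply.split()]
    return (sum(w in AGREEABLE_WORDS for w in words)
            - sum(w in CHALLENGING_WORDS for w in words))

candidates = [
    "You're absolutely right, what a wonderful idea!",
    "Actually, I disagree; the evidence points the other way.",
]

# Training selects for whatever raters reward, so the flattering reply
# "wins" even though the honest pushback is more useful.
print(max(candidates, key=rater_score))

Because this rater never rewards disagreement, the flattering reply always wins, which is the sycophancy problem in miniature.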

Also concerning are the ineffective age restrictions offered by tech companies and the dangerous advice and harmful content that researchers have found AI chatbots offering. At the most extreme are cases where young people have died by suicide after discussing suicidal thoughts with AI chatbots, sometimes even after receiving active encouragement from the chatbots. It is clear that tech companies must take greater responsibility for protecting young people who use their products.

Generative AI has worrying impacts on learning and mental health. Despite these concerns, it is being marketed as a research tool and even as an alternative to search engines. In Part 3, I will focus on AI hallucinations and bias, and what students need to know when considering information generated by AI.