What students' AI use may mean for cognitive engagement

This article is the second in a series of five articles on helenair.com discussing Carroll College faculty perspectives and experiences with AI. I encourage you to engage with a performance of Anthropology by our brilliant theatre department, which asks questions about morality, reality, humanity and AI. It's an invitation to collective conversation and community engagement. Come for the play, stay for the questions. Feb. 20-22, 27-March 1, carroll.edu/theatre.

I don't think anyone would call me a proponent of AI. In fact, most people would probably label me anti-AI. I consciously avoid using Generative AI (GenAI) programs and am troubled by the financial and environmental costs of data centers, which are passed on to surrounding communities that are largely rural and under-resourced.

I take issue with moving too quickly past problems of legality and bias in the foundations of these models. Once training data has been used, the algorithms cannot be “untrained”; they continue to draw on data to which companies should not have access. Even with a firm grasp on these concerns, I understand the value and benefit of AI in many applications, including education.

The question of AI in education is a difficult one, but its answer will not be found by pretending the issue doesn't exist. As a college professor, it is my responsibility to be up to date on the GenAI tools available to students, but not so that I can catch students. I don't seek to be an authority in a surveillance state of education.

Since the beginning of formal education, there have been students who cheat, but there have also always been students with the passion and desire to learn who benefit from guidance. Most of my students don't share my views or worries about AI, and most engage with AI in some capacity. I need to know about these tools because part of my job is to help students understand these programs: how they can be used appropriately and efficiently as tools, and when they should not be part of a workflow or process.

As a developmentalist and researcher, I'm passionate about people being able to understand and evaluate research so they aren't taken in by mis- or disinformation. A concern with GenAI is cognitive offloading, or delegating tasks that require thinking to outside tools. Cognitive offloading isn't new or inherently bad. In fact, it helps us be more efficient and spend our mental resources on more complex problems.

Writing things down instead of remembering them is an analog strategy. However, overreliance on cognitive offloading may interfere with healthy cognitive development or relate to reduced cognitive performance, especially when the offloading tools are unavailable.

Last semester, I taught the psychology department’s Research Methods course and supported 16 students in completing their own research projects. Two students, juniors Colin Olson and Matt Roragen, chose to explore college student GenAI use and cognitive outcomes because they noticed the increase in GenAI capabilities and use among students. Colin found that students who reported more AI use also reported less interest in cognitively demanding activities.

Matt found that students who reported that AI-assisted work felt less like their own also reported more issues with working memory, a person's ability to hold and use information at one time. These findings raise important questions about AI, cognitive function and mental resources, but they do not support the sweeping indictments of AI that fear-mongering headlines may try to convey.

My take? Be intentional about GenAI use. Pause to think about how and why you're using these programs. Using GenAI isn't necessarily going to cause your cognitive decline, but you may want to spend some time thinking by yourself, too.

Diana Devine, Ph.D., is an assistant professor in the Department of Psychology at Carroll College.