
Studies show AI use reduces cognitive function, erodes critical thinking skills

With LLMs becoming increasingly widely used, studies suggest that offloading our thinking to this technology diminishes our ability to think critically and can lead to weaker neural connectivity, lower memory retention, and the accumulation of cognitive debt.

https://phys.org/news/2025-01-ai-linked-eroding-critical-skills.html

https://www.nextgov.com/artificial-intelligence/2025/07/new-mit-study-suggests-too-much-ai-use-could-increase-cognitive-decline/406521/

Other research suggests the technology can be addictive, with some users suffering withdrawal symptoms if they cannot use it regularly:

https://www.tomshardware.com/tech-industry/artificial-intelligence/some-chatgpt-users-are-addicted-and-will-suffer-withdrawal-symptoms-if-cut-off-say-researchers




No shit



When the herd loses its way, the shepherd must kill the bull that leads them astray.

Both of the studies need to be reproduced and expanded upon in my opinion. 

The MIT study is interesting, but controlling for the same task doesn't make sense to me. Imagine two students taking a math test where a calculator matters: one has a calculator, the other doesn't. You'll probably measure a similar difference in brain activity. But what the calculator does is let the student offload basic arithmetic so they can focus on other concepts. If instead one student used a calculator on a test that also demanded other kinds of mathematical problem-solving, while another student did arithmetic by hand, the brain activity would probably look similar.

You can use AI tools like a calculator, offloading a sub-task and abstracting it away (though you should probably still check the output, given hallucinations), while focusing your thinking on cognitive tasks the AI can't (yet) help with. The college essay-writing scenario just isn't one of those workloads. LLMs are pretty good at writing college essays.

As for the second study, I would have to read the paper more closely, but I think all it shows is a (moderate) negative correlation between using AI tools and critical-thinking skill. That's interesting, but it's not the same thing as "erodes critical thinking skills." You'd have to actually do some sort of cohort study, controlling for lurking variables (or a similarly designed experiment), to show "erosion." Simply showing a correlation between AI tool use and lower capacity for critical thinking isn't enough. This is also probably why "education mitigates some cognitive impacts of AI reliance."
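To illustrate the correlation-vs-causation point, here's a toy simulation (my own sketch; the variable names and effect sizes are invented, not taken from the paper) where a single lurking variable drives both AI use and critical-thinking scores, producing a moderate negative correlation even though AI use has zero causal effect:

```python
# Toy simulation (hypothetical numbers): a lurking variable z, e.g. prior
# education, that lowers AI reliance and raises critical-thinking scores.
# Neither outcome causes the other, yet they end up correlated.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)                    # lurking variable (unobserved)
ai_use   = -0.5 * z + rng.normal(size=n)  # less z -> more AI reliance
thinking =  0.5 * z + rng.normal(size=n)  # more z -> better scores

r = np.corrcoef(ai_use, thinking)[0, 1]
print(f"correlation: {r:.2f}")            # moderate negative, around -0.2
```

A cross-sectional survey can't tell this scenario apart from genuine erosion; that's exactly what a cohort design controlling for z would settle.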

And that only addresses the quantitative component. I am not convinced by the questionnaire methodology, which seems to be based mostly on self-perception.

I think the writer of the article also makes a good point:

"If survival in a technology-driven environment does not require the classical skills of human reasoning, those skills are likely not going to survive, fading from use like handwritten cursive, math without calculators, texting without autocorrect and books without audio."

Let's not forget that these "classical skills" are actually pretty new in the grand scheme of things. For most of the history of our species, humans lived as hunter-gatherers with a totally different skillset.



I think social media reduced cognitive function long before AI



Grok, is this true?


There are multiple layers to this. I second everything sc is saying: offloading mental load to a tool when one is available is natural and doesn't specifically indicate that critical thinking is eroded, and there's also the chicken-and-egg problem of whether AI erodes critical thinking or people with less critical thinking use AI more.

Anyway, it also wouldn't be new that overreliance on a tool can have negative consequences. I remember something about a ship getting into trouble when its GPS failed: the navigator was trained to get a position by other means, but had relied so strongly on the GPS that they never really checked.

That is a danger I see with LLMs as well. I am a programmer and I happily use AI for programming, but I deliberately avoid putting AI into my IDE and sure as hell avoid vibe-coding. Putting AI into the IDE is *too* convenient; I fear it would reduce my ability to check the results. If I could just let it write a function, I probably would avoid looking too deeply into it. Instead I use AI chat like asking a coworker: if I am stuck or unsure, I ask and then check the result (yes, I do that with coworkers' answers as well). This way I try to avoid overreliance, at the price of less convenience.



3DS-FC: 4511-1768-7903 (Mii-Name: Mnementh), Nintendo-Network-ID: Mnementh, Switch: SW-7706-3819-9381 (Mnementh)

my greatest games: 2017, 2018, 2019, 2020, 2021, 2022, 2023, 2024

10 years greatest game event!

bets: [peak year] [+], [1], [2], [3], [4]

Studies need to be replicated a gazillion times before they can be taken seriously. And people really should be reading from reputable journals.



Kinda have to agree, given the wide usage of ChatGPT in schools.

Book reports and other projects are mostly written by ChatGPT nowadays... That's a lot of critical reading skill offloaded to AI making summaries, correlations, and even conclusions. Instead of teaching the scientific method, where you follow the evidence, ChatGPT lets students do it in reverse: start with a hypothesis / conclusion and let ChatGPT find corroborating evidence.

It's very different from using a calculator. You still need to know what you're calculating and how. Calculating sqrt(x^2 + y^2) yourself is quite different from asking Google for the length of a triangle's side. Letting AI do algebra for you doesn't teach you what's going on at all.
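To make that concrete with a small sketch (my own illustration, hypothetical numbers): even when the machine does the arithmetic, choosing the formula is still on you.

```python
# Finding a right triangle's hypotenuse: the tool does the arithmetic,
# but knowing that the answer is sqrt(x^2 + y^2) is still your job.
import math

x, y = 3.0, 4.0
print(math.sqrt(x**2 + y**2))  # 5.0 -- you supplied the formula
print(math.hypot(x, y))        # 5.0 -- same result via the stdlib helper
```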

Schools need to crack down more on AI usage. It can be useful, but kids will find the easy way out. Copying homework is as old as school, and AI is simply a better way to copy homework. Cell phones are now banned in class here; a great first step. Yet the Covid generation already has a deficit: home learning introduced a whole generation to ChatGPT for schoolwork.

Typing ruined writing skills; schools hardly bother to teach those anymore. Doing homework online destroyed organizational skills: writing problems out in an organized manner to help find the solution and trace back where you went wrong.



SvennoJ said:

 Letting AI do algebra for you doesn't teach you what's going on at all.

If it is used as an educational method, it can. I remember using symbolic AI in the form of CAS systems (Maple, Wolfram Alpha, etc.) in my intro differential equations class slightly more than a decade ago. Using these tools was actively taught and encouraged because they save a lot of tedious, mistake-prone work and let you focus on the bigger picture and the concepts. If I wanted to know the specific algorithms and techniques, some of these tools would break the steps down, and they worked great as an educational aid.
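For a rough idea of what that looks like, here's a minimal sketch using SymPy (a free CAS standing in for the tools above; the example is mine, not from the course): you set up the model, and the system grinds through the symbolic manipulation.

```python
# Solving the ODE y'(t) = -k*y(t) symbolically with SymPy,
# a free CAS standing in for Maple / Wolfram Alpha.
import sympy as sp

t, k = sp.symbols("t k", positive=True)
y = sp.Function("y")

ode = sp.Eq(y(t).diff(t), -k * y(t))  # you set up the model
solution = sp.dsolve(ode, y(t))       # the CAS does the symbolic grind
print(solution)                       # Eq(y(t), C1*exp(-k*t))
```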

Testing and assessment were designed around the assumption that somebody would be using these tools, as they would in the real world. The exams were still difficult with them. You still had to think critically.

That isn't to say algorithms and symbolic-manipulation techniques shouldn't be taught, but it probably makes sense to teach them with the goal of imparting general principles in dedicated courses. Rote memorization of a series of steps is not that.

Where this seems to be a risk in education is that a lot of the assessment process, especially with the move to online learning, has been largely automated, and these new forms of cheating disrupt those automated assessment pipelines. Teachers and instructors could construct curricula that reduce this cheating, but that would be labor-intensive, and their resources are limited. Teachers would also have to know the material themselves beyond the level of rote methods, and in many education systems that is a very difficult gap to bridge.



There is definitely a concern about unintelligent people becoming overly reliant on these tools. They're increasingly useful each year, but they do require at least some level of critical thinking to use effectively, so loads of people who lack that will just blindly accept whatever output they get, especially if it confirms their biases. You just know there are people out there right now arguing with LLMs until they get an output that agrees with them and reinforces their worldview.