Roberto Tomé

Is AI Making Us Dumber? The Great Cognitive Heist of Our Time

Opinion · 20 min read

“Technological progress has merely provided us with more efficient means for going backwards.” — Aldous Huxley

Here’s a fun thought experiment: Remember when your parents would tell you about walking uphill both ways to school in a snowstorm? Well, we’re about to become those parents, except our story will be: “Back in my day, we had to actually think for ourselves.” And unlike the snow story, this one might actually be true.

We’re living through what might be the most elegant con job in human history. We’ve been sold a productivity miracle that promises to make us smarter, faster, and more capable. Instead, we’re watching millions of people voluntarily outsource their brains to machines, then celebrate the efficiency gains while their cognitive abilities quietly waste away like unused muscles.

The research is starting to pour in, and it’s not pretty. A 2025 MIT study found that people who used ChatGPT to write essays showed 32% less cognitive engagement than those who used their actual brains. Even better? 83% of AI users couldn’t remember a single passage they had just “written”. We’re not just offloading our thinking, we’re actively forgetting how to think. [1][2][3]

But here’s where it gets ironic: we’re doing this while IQ scores are already declining across the developed world for the first time since we started measuring them. The Flynn Effect, that century-long trend of rising intelligence, has not just stalled but reversed. IQ scores have been dropping for over a decade, with particular declines in verbal reasoning, logic, and mathematical abilities. And now we’re throwing AI into this cognitive mess like gasoline on a fire. [4][5][6]

The Neurological Evidence: Your Brain on AI

Let’s start with the hard science, because unlike most hot takes about technology’s impact on society, this one comes with actual brain scans. MIT researchers strapped EEGs to people’s heads and watched what happened when they used ChatGPT versus when they actually engaged their gray matter. The results were about as exciting as your latest colonoscopy. [2][1]

When participants used AI assistance, their brain connectivity (measured through alpha and theta waves) dropped by nearly half. Think of your brain as a bustling city where different neighborhoods need to communicate to solve problems. AI assistance essentially threw up roadblocks between those neighborhoods, creating cognitive suburbs where nothing interesting happens. [3]
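For readers wondering what “measured through alpha and theta waves” actually involves, here is a minimal sketch of how power in those frequency bands is commonly estimated from an EEG signal. It runs on synthetic data with assumed parameters (a 256 Hz sampling rate, a single channel) and is only a rough illustration of the kind of measurement involved, not the MIT team’s actual analysis pipeline.

```python
import numpy as np
from scipy.signal import welch

# Rough sketch (not the study's pipeline): estimate theta (4-8 Hz) and
# alpha (8-12 Hz) band power for one EEG channel. The synthetic signal
# and the 256 Hz sampling rate are assumptions for illustration only.
fs = 256
t = np.arange(0, 60, 1 / fs)              # one minute of "signal"
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * 10 * t)         # alpha-band component
       + 0.5 * np.sin(2 * np.pi * 6 * t)  # theta-band component
       + rng.normal(0, 1, t.size))        # background noise

def band_power(signal, fs, low, high):
    """Approximate power in [low, high] Hz via Welch's periodogram."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].sum() * (freqs[1] - freqs[0])

theta = band_power(eeg, fs, 4, 8)
alpha = band_power(eeg, fs, 8, 12)
print(f"theta power: {theta:.2f}, alpha power: {alpha:.2f}")
```

Connectivity metrics go a step further and compare how these rhythms synchronize across channels; the point here is simply that “engagement” in these studies is a concrete, measurable signal, not a metaphor.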

But the real kicker came months later when researchers asked the same people to write essays without AI help. Those who had previously relied on ChatGPT performed worse than their peers, producing writing that was “biased and superficial”. They hadn’t just become dependent on the tool, they had actually gotten worse at the underlying skill. It’s like using a calculator so much that you forget basic arithmetic, except the calculator occasionally lies to you and you don’t notice because you’ve forgotten how to check the math. [2]

The phenomenon has a name: cognitive debt. Think of it as the mental equivalent of eating nothing but fast food. Sure, it’s quick and convenient, and you feel satisfied in the moment. But over time, your body forgets how to properly digest real nutrients, your energy crashes, and you end up weaker than when you started. Except instead of your digestive system, it’s your ability to think critically, analyze information, and generate original ideas that’s slowly degrading. [3][2]

And this isn’t just affecting college students writing essays. A Microsoft Research study of 319 knowledge workers found that those who frequently used AI tools and trusted them highly experienced measurable declines in critical thinking skills. These weren’t digital natives or AI novices. These were professional adults whose jobs literally depend on their cognitive abilities. They were outsourcing their thinking to machines and paying for it with their intellectual capacity. [7]

The Google Effect: Death by a Thousand Searches

Of course, AI didn’t create our tendency toward cognitive laziness. It just turbocharged it. We’ve been practicing for this moment ever since Google convinced us that having access to information was the same thing as understanding it.

Remember the Google Effect? That’s the polite academic term for what happens when your brain decides it doesn’t need to remember something because it can always look it up later. A 2015 study found that people who used the internet to find answers to questions became dramatically overconfident in their ability to explain completely unrelated topics. They literally mistook Google’s knowledge for their own. [8][7]

Think about it: when was the last time you memorized a phone number? Navigated somewhere without GPS? Did mental math instead of reaching for your phone? We’ve been systematically outsourcing our cognitive functions for two decades, and now we’re acting surprised that our brains have gotten lazy.

Kaspersky Lab documented this in its research on “Digital Amnesia” (the experience of forgetting information you trust a digital device to remember). It found that 91% of people surveyed admitted to using the internet as an extension of their brain. Half of smartphone users under 45 said their phone holds “almost everything they need to know or recall”. [9][10]

But here’s what makes the AI revolution different: Google still required you to think about what to search for, evaluate multiple sources, and synthesize information. You had to exercise some cognitive muscle, even if you were offloading the storage function. AI tools like ChatGPT eliminate even that minimal intellectual effort. You can literally ask a question and get a fully formed answer without engaging your brain at all. It’s the difference between using a calculator for complex equations and having someone else do all your math homework while you watch Netflix.

The Deskilling Catastrophe: When Convenience Becomes Dependence

Here’s where we need to talk about deskilling. It’s a concept that’s been lurking in academic papers since the 1970s but is now having its main character moment. Deskilling happens when automation takes over so much of a job that workers lose the underlying skills needed to perform without the technology. [11][12]

The classic example is what happened when airlines introduced automated flight systems. Pilots became so dependent on autopilot that they lost their ability to manually fly planes in emergencies. When Air France Flight 447 crashed in 2009, investigators found that the pilots had essentially forgotten how to fly their aircraft when the automation failed. They had the technical knowledge but had lost the practiced skills and instinctive responses that come from hands-on experience.

Now imagine that happening to thinking itself.

A study from Syracuse University found that generative AI creates exactly this dynamic in knowledge work. Workers become more productive in the short term because AI handles routine cognitive tasks. But over time, they lose the ability to perform those tasks independently. More dangerously, they lose the foundational skills that more advanced capabilities are built on. [13][12][14]

It’s like learning to drive only in cars with automatic transmission, anti-lock brakes, and parking assistance, then suddenly being handed the keys to a manual car with no safety features. Except instead of just being unable to drive, you’ve forgotten what it feels like to have direct control over a vehicle at all.

The research shows this isn’t just theoretical. When companies discontinue automated systems, workers experience genuine disruption because they’ve lost the skills the system was performing for them. Their knowledge has become “latent,” still technically there but inaccessible without the technological crutch they’ve become dependent on. [11]

And unlike previous forms of automation that affected primarily manual or routine cognitive work, AI is going after the high-level thinking skills that we thought made us irreplaceable. Strategic analysis, creative problem-solving, critical evaluation. All the mental activities that separate knowledge workers from easily automated functions are now being outsourced to systems that most people don’t understand and can’t verify.

The Paradox of Productivity: Working Harder, Thinking Less

This brings us to one of the most seductive lies of the AI revolution: the productivity narrative. Tech optimists love to point to studies showing that AI users complete tasks faster and produce more output. What they conveniently ignore is what those users are sacrificing in the process. [15][16]

Yes, AI can help you write emails faster, generate code more quickly, and produce first drafts at superhuman speed. But productivity isn’t just about output volume. It’s about the quality of thinking that goes into that output. And when you optimize for speed and quantity, something has to give. Usually, it’s depth, originality, and the kind of hard-won expertise that comes from actually struggling with problems instead of delegating them to algorithms.

The “AI productivity paradox” is real. People report feeling more productive while simultaneously becoming less capable. They’re like gym-goers who only use machines with perfect form assistance. Their workout numbers look great, but their stabilizer muscles atrophy because they never have to maintain balance on their own. [16][17]

One particularly telling study found that while AI users completed tasks 60% faster, their “relevant cognitive load” (the actual mental effort required to transform information into knowledge) fell by 32%. They were literally thinking less while producing more. It’s the intellectual equivalent of taking performance-enhancing drugs: impressive short-term gains that mask long-term damage to your natural capabilities. [3]

The productivity gains also tend to be unevenly distributed. AI helps novices more than experts, which sounds great until you realize what that means: it’s leveling the playing field by bringing everyone down to a lowest common denominator rather than elevating human capability. When a machine can make a beginner perform like an expert, the value of actually becoming an expert through years of practice and learning plummets. [12]

The Flynn Effect Reversal: We Were Already Getting Dumber

Here’s the really depressing part: AI isn’t making us dumber so much as accelerating a process that was already underway. The Flynn Effect (the century-long trend of rising IQ scores) has been reversing since the 1990s in developed countries. [5][6][18]

American IQ scores have been declining for over a decade, with particular drops in verbal reasoning, logic, and mathematical abilities. The same pattern is showing up in Norway, Finland, Denmark, and the UK. We’re literally watching the intellectual progress of the 20th century reverse itself in real time. [6][5]

The causes are hotly debated. Some researchers point to changes in education systems, declining reading habits, and the cognitive effects of digital media consumption. Others argue that we’ve reached the ceiling of environmental improvements like better nutrition and healthcare that drove the original Flynn Effect. [19][6]

But one factor that keeps coming up is our relationship with information technology. The Norwegian study that first documented the reversal found the decline was most pronounced in cohorts that grew up with widespread internet access. British research linked the IQ drop to “shifts in education systems and pervasive digital technology use among youth”. [6]

We’ve essentially been running a massive uncontrolled experiment on human cognition for the past 30 years. We introduced technologies that promised to make us smarter, then watched our actual intelligence scores decline. Now we’re doubling down with AI that’s even more seductive and invasive than anything that came before.

It’s like we looked at the rising rates of obesity and diabetes and decided what the world really needed was more efficient ways to consume sugar.

How else are we supposed to explain the Flat Earthers and vaccine deniers? It’s as if, after centuries of painstaking progress, we collectively threw up our hands and said, “Screw empirical evidence, Dr. BigBalls69 on Facebook knows what’s really going on.”

The Creative Apocalypse: When Machines Generate Culture

The cognitive decline isn’t limited to analytical thinking. Research shows that AI is actively undermining human creativity, which is possibly the most troubling development of all because creativity is supposedly our last competitive advantage over machines.

Studies on creative tasks reveal a disturbing pattern: individual users might produce better work with AI assistance, but the overall creativity of human groups decreases. It’s like having a really talented ghostwriter who makes your individual essays better while simultaneously making all essays more similar to each other. [3]

The mechanism is straightforward: AI systems are trained on existing human-created content, so they inherently bias toward conventional patterns and popular combinations. When humans rely on AI for creative inspiration, they’re essentially outsourcing their imagination to a system designed to reproduce the statistical averages of past human creativity. The result is work that’s technically competent but lacks the surprising connections and novel perspectives that drive genuine innovation.
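If you want to see that homogenizing mechanism in miniature, here is a toy simulation (my own illustration, not from any of the cited studies): “ideas” are numbers, each generation is produced by a model fit to the previous generation’s output, and a small assumed bias toward typical outputs (the 0.9 factor) stands in for the model’s pull toward the statistical average. The spread of ideas, a crude proxy for diversity, shrinks generation after generation.

```python
import numpy as np

# Toy model, not real data: "idea diversity" collapses when each new
# generation is sampled from a model fit to the previous generation.
rng = np.random.default_rng(42)
ideas = rng.normal(loc=0.0, scale=1.0, size=500)  # generation 0: human ideas

for gen in range(1, 6):
    mu, sigma = ideas.mean(), ideas.std()
    # The 0.9 factor is an assumed stand-in for the model's bias toward
    # conventional, high-probability outputs.
    ideas = rng.normal(mu, 0.9 * sigma, size=500)
    print(f"generation {gen}: spread of ideas = {ideas.std():.3f}")
```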

We’re creating a culture of intellectual monoculture, where diversity of thought gets algorithmically smoothed away. It’s like replacing a forest ecosystem with a single crop that grows faster and more predictably but can’t survive environmental changes because it lacks genetic diversity.

The implications are staggering. If the next generation of creators grows up relying on AI for ideation, they won’t develop the cognitive habits that lead to genuinely original thinking. They’ll become curators of algorithmic output rather than generators of novel ideas. Culture will become increasingly self-referential and derivative, not because humans are incapable of creativity, but because we’ve systematically outsourced the mental processes that create it.

The Educational Meltdown: Teaching Machines Instead of Minds

The really terrifying part is watching this unfold in educational settings, where the consequences will shape decades of human cognitive development. Schools are adopting AI with the ceremony of a blindfolded cult, chanting “innovation” without asking what they’re summoning.

The research on educational AI use reveals a split personality disorder in the academic community. Meta-analyses show that AI can improve test scores and learning outcomes in controlled studies. But these same studies also find that the benefits disappear when students are asked to perform tasks without AI assistance. Students are learning to use the tools rather than developing the underlying capabilities the tools are meant to enhance. [20][15][2]

It’s like teaching kids to drive exclusively in cars with fully automated steering, then acting surprised when they can’t handle a vehicle that requires actual driving skills. We’re creating a generation that can’t think without algorithmic assistance, then calling it educational progress.

The MIT researchers were so concerned about their findings that they released their study before peer review specifically to warn against what one researcher called “GPT kindergarten”. The idea of introducing AI assistance to developing brains, when neural pathways are still forming and cognitive habits are being established, represents a potentially catastrophic natural experiment. [20]

Young brains learn by struggling with problems, making mistakes, and developing mental models through direct experience. AI assistance short-circuits this process by providing answers without the cognitive work that creates understanding. It’s like giving kids the solutions to math problems without teaching them arithmetic. They might get correct answers in the short term, but they never develop the number sense that enables mathematical thinking.

And unlike previous educational technologies that augmented human capabilities, AI directly replaces human thinking. A calculator helps you with computation but still requires you to understand mathematical concepts. A search engine helps you find information but still requires you to evaluate and synthesize sources. AI provides fully formed thoughts, eliminating the mental struggle that creates intellectual development.

The Attention Economy’s Final Victory

What makes the AI-induced cognitive decline so insidious is how it leverages and amplifies the attention economy’s existing assault on human consciousness. We’ve spent the last two decades training ourselves to seek instant gratification, avoid cognitive effort, and outsource decision-making to algorithmic recommendations. AI is the natural evolution of this trend. The ultimate convenience technology for minds that have already been primed to avoid thinking.

Social media taught us to consume information in bite-sized chunks instead of engaging with complex ideas. Search engines taught us to look for quick answers instead of deep understanding. Recommendation algorithms taught us to let machines choose what we pay attention to. AI is simply the next step in this progression: letting machines do our thinking for us entirely.

The attention economy has been incredibly successful at making thinking feel like work rather than pleasure. Most people now experience sustained concentration as frustrating and uncomfortable rather than rewarding. AI offers relief from this discomfort by eliminating the need for effortful thought altogether. Why struggle to understand a complex topic when you can ask an AI for a summary? Why grapple with difficult decisions when an algorithm can provide recommendations?

But thinking, like physical exercise, becomes easier and more enjoyable with practice. The more you engage your cognitive abilities, the stronger and more efficient they become. The less you use them, the more atrophied and uncomfortable they feel. AI is accelerating our movement toward the latter state by making non-thinking not just possible but optimal for most immediate tasks.

We’re creating a society where the cognitive equivalent of walking has become so rare that most people find it exhausting and pointless. Then we wonder why our collective problem-solving ability is declining.

The Trust Problem: When Smart People Make Dumb Decisions

Perhaps the most dangerous aspect of AI-induced cognitive decline is how it affects people’s ability to evaluate information and make good decisions. Research shows that people who rely heavily on AI systems become worse at detecting errors, inconsistencies, and biases in AI output. They essentially lose their intellectual immune systems. [21][7]

This happens because critical evaluation requires practice. When you’re used to thinking through problems yourself, you naturally develop skepticism about easy answers and automated solutions. You maintain the cognitive habits that help you spot when something doesn’t quite add up. But when you outsource most of your thinking to AI, you lose the mental conditioning that enables good judgment.

The Microsoft Research study found that AI users became “more susceptible to diminished critical inquiry, increased vulnerability to manipulation and decreased creativity”. They weren’t just thinking less. They were becoming easier to deceive and manipulate because they had lost the intellectual vigilance that comes from regular cognitive exercise. [2]

This is particularly concerning given that AI systems are known to produce confident-sounding nonsense, exhibit various biases, and occasionally fabricate information entirely. Users who have become dependent on AI assistance are less likely to catch these errors because they’ve trained themselves to accept algorithmic output uncritically.

We’re creating a population that’s increasingly skilled at using powerful tools but decreasingly capable of evaluating whether those tools are producing good results. It’s like giving everyone access to advanced medical equipment while simultaneously eliminating medical training. The tools might be sophisticated, but the users lack the expertise to distinguish between helpful and harmful applications.

The Economic Implications: A Society of Digital Serfs

The cognitive decline enabled by AI has profound economic implications that go far beyond individual career prospects. We’re potentially creating a society where human intelligence becomes increasingly irrelevant to economic production, not because machines have surpassed human capabilities, but because humans have voluntarily diminished their own capabilities to the point where they can no longer compete.

The deskilling research shows that this process follows a predictable pattern. Initially, AI augmentation makes workers more productive and valuable. But over time, as workers become dependent on AI assistance and lose their underlying skills, they become interchangeable and replaceable. The value moves from human expertise to technological capability, and whoever controls the technology captures the economic returns. [14][12]

This isn’t just about job displacement. It’s about the systematic devaluation of human cognitive labor. When AI can help novices perform like experts, the market value of actual expertise plummets. When machines can generate creative work, the economic returns to human creativity decline. When algorithms can solve complex problems, the premium for human problem-solving ability disappears.

We’re potentially creating an economy where most humans become digital serfs. Entirely dependent on AI tools they don’t understand or control, performing cognitive tasks at the sufferance of algorithmic systems, with no independent capability to add value without technological assistance.

The tech companies promoting AI adoption have obvious economic incentives to encourage this dependence. Every cognitive task that gets outsourced to AI represents a stream of revenue and a source of lock-in. Every human capability that atrophies through disuse creates a permanent market for technological substitution. We’re not just buying productivity tools. We’re purchasing our own intellectual obsolescence on the installment plan.

The Wisdom vs. Intelligence Distinction: What We’re Really Losing

Here’s something that gets lost in most discussions of AI and human intelligence: there’s a crucial difference between processing information and developing wisdom. Intelligence is about pattern recognition, logical reasoning, and analytical problem-solving. Wisdom is about judgment, discernment, and the ability to navigate complex situations with incomplete information.

AI excels at intelligence-type tasks. It can process vast amounts of data, identify patterns, and solve well-defined problems faster and more accurately than humans. But wisdom requires something that AI lacks: lived experience, emotional understanding, and the kind of contextual knowledge that comes from actually inhabiting the world as a conscious being.

The cognitive decline we’re experiencing isn’t just about getting worse at IQ test-type tasks. It’s about losing the mental habits that develop wisdom: questioning assumptions, considering multiple perspectives, tolerating ambiguity, and making decisions based on incomplete information. These capabilities require practice and can’t be outsourced to algorithms without significant loss.

When we delegate our thinking to AI, we don’t just lose analytical capability. We lose the opportunity to develop judgment. Every decision we outsource to an algorithm is a decision we don’t learn to make ourselves. Every problem we let AI solve is a problem we don’t learn to navigate independently.

Wisdom emerges from the accumulation of cognitive struggles, mistakes, and hard-won insights. It’s the mental equivalent of developing calluses through physical work. You become more capable by repeatedly engaging with difficult challenges. AI assistance prevents this conditioning from occurring by eliminating the friction that builds cognitive resilience.

We’re essentially trading the development of wisdom for the convenience of intelligence-on-demand. It’s like choosing to always eat at restaurants instead of learning to cook, then wondering why you can’t prepare a meal when the restaurants are closed.

The Path Forward: Reclaiming Cognitive Sovereignty

So what do we do about this? The research suggests several approaches, none of them easy and all requiring deliberate effort to swim against the current of technological convenience.

First, we need to recognize that using AI tools effectively requires maintaining the ability to use them selectively rather than reflexively. The studies show that AI works best when users can evaluate its output critically and integrate it with their own thinking. This means preserving and strengthening our cognitive abilities rather than outsourcing them entirely. [22][12]

Second, we need educational approaches that treat AI literacy the same way we treat media literacy: as a set of skills for evaluating and critically engaging with algorithmic systems rather than passively consuming their output. Students need to learn not just how to use AI tools, but how to maintain their cognitive independence while doing so. [23]

Third, we need to deliberately practice cognitive tasks that AI can’t or shouldn’t handle: creative problem-solving, ethical reasoning, interpersonal navigation, and the kind of contextual judgment that requires human experience. Just as we need physical exercise to maintain bodily health in an increasingly sedentary world, we need intellectual exercise to maintain cognitive health in an increasingly automated world. [21]

Finally, we need to remember that convenience isn’t always valuable, and efficiency isn’t always optimal. Some cognitive struggles are worth preserving because the struggle itself creates capability. Some mental effort is worth maintaining because the effort builds strength. Some thinking is worth doing slowly and deliberately because the process develops wisdom that can’t be generated algorithmically.

The Choice We’re Making

The research is clear: AI has the potential to significantly diminish human cognitive abilities if we use it carelessly. But it’s not inevitable. The outcome depends on how we choose to integrate these tools into our lives and our society.

We can use AI to augment human intelligence while preserving our cognitive independence, or we can use it to replace human thinking while convincing ourselves we’re becoming more productive. We can treat AI as a powerful tool that requires skilled operators, or we can treat it as a substitute for the skills that make us human.

The choice we make will determine whether future generations look back on this moment as the beginning of a golden age of human-AI collaboration or as the moment we accidentally automated away our own intelligence. The irony is that making the right choice requires exactly the kind of careful thinking, long-term planning, and independent judgment that AI discourages us from practicing.

We’re at a crossroads, and the path we choose will quite literally reshape the human mind. The question isn’t whether AI will change how we think. It’s already doing that. The question is whether we’ll retain enough cognitive autonomy to direct that change consciously, or whether we’ll sleepwalk into a future where thinking has become a lost art.

The evidence suggests we’re currently heading toward the latter. But unlike most technological trends, this one is still within our control to redirect. We just have to care enough about human intelligence to make the effort to preserve it.

And that might be the real test of whether we’re already too far gone.


 

Footnotes

  1. https://www.telegraph.co.uk/business/2025/06/17/using-ai-makes-you-stupid-researchers-find/
  2. https://www.euronews.com/next/2025/06/21/using-ai-bots-like-chatgptcould-be-causing-cognitive-decline-new-study-shows
  3. https://www.polytechnique-insights.com/en/columns/neuroscience/generative-ai-the-risk-of-cognitive-atrophy/
  4. https://ia.acs.org.au/article/2024/is-ai-making-us-dumber-.html
  5. https://www.popularmechanics.com/science/a43469569/american-iq-scores-decline-reverse-flynn-effect/
  6. https://www.pressenza.com/2025/07/the-decline-of-the-intelligence-quotient-in-the-digital-age-cognitive-reconfiguration-and-global-trends/
  7. https://www.skeptic.com/article/outsourcing-our-memory-how-digital-tools-are-reshaping-human-thought/
  8. https://steer.education/thought-pieces/why-outsourcing-memory-to-google-is-a-false-economy-for-a-healthy-mind/
  9. https://media.kasperskycontenthub.com/wp-content/uploads/sites/100/2017/03/10084613/Digital-Amnesia-Report.pdf
  10. https://media.neliti.com/media/publications/351625-digital-amnesia-the-smart-phone-and-the-df6574a8.pdf
  11. https://aisel.aisnet.org/hicss-51/os/dark_side/2/
  12. https://citsci.syr.edu/sites/default/files/GAI_and_skills.pdf
  13. https://www.unleash.ai/artificial-intelligence/is-ai-causing-a-decline-in-cognitive-and-creative-skills/
  14. http://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1967693
  15. https://www.nature.com/articles/s41599-025-04787-y
  16. https://www.vktr.com/digital-workplace/the-ai-productivity-paradox-why-im-working-more-and-loving-it/
  17. https://labs.amazon.science/blog/the-modern-productivity-paradox-why-we-need-agents-that-think-with-us-not-for-us
  18. https://en.wikipedia.org/wiki/Flynn_effect
  19. https://www.polytechnique-insights.com/en/columns/society/declining-global-iq-reality-or-moral-panic/
  20. https://time.com/7295195/ai-chatgpt-google-learning-school/
  21. https://www.ergo.com/en/radar-magazine/digitalisation-and-technology/2025/cognitive-offloading-mental-performance-digital-helpers
  22. https://jle.hse.ru/article/view/27387
  23. https://www.ie.edu/center-for-health-and-well-being/blog/ais-cognitive-implications-the-decline-of-our-thinking-skills/

Tags:

AI · Productivity · Critical Thinking · Cognitive Science · Tech Ethics
