Is AI Making Us Dumber? A Cautionary Reflection

We live in an era of unprecedented access to tools that can extend our thinking, creativity, and problem-solving — but are they also quietly eroding the very skills we once prized?

Discussions like a recent episode of the excellent ColdFusion podcast have reignited a vital question: Is AI making us dumber?

The answer isn’t simple. It’s nuanced, fascinating, and more relevant to our lives — and classrooms — than ever.


Cognitive Offloading: A Double-Edged Sword

The concept of cognitive offloading — shifting mental tasks onto external aids — isn’t new. We’ve been outsourcing cognitive work since humans first drew maps or scratched tally marks on bone.

But in today’s digital age, the speed and scale of offloading have accelerated dramatically.

  • We use Google Maps instead of remembering routes.
  • We rely on search engines instead of storing facts.
  • We lean on AI writing tools instead of wrestling with a blank page.

Researchers, including those at Portland State University, have found that habitual use of GPS can dull spatial memory — we become less aware of landmarks, less confident in wayfinding, and less able to build mental maps.

A Forbes article on AI and cognitive offloading warns that we may also lose critical thinking stamina: the ability to wrestle with ambiguity, tolerate frustration, and develop nuanced conclusions. If an algorithm serves you “the answer,” why bother thinking through the question?

Even the Australian Computer Society has raised alarms, noting that over-reliance on AI tools risks turning humans into passive consumers of machine outputs.


Tool or Crutch? It Depends on the User

AI can be both a tool and a crutch. Which it becomes depends largely on the user.

When I reflect on my own journey into Debian and self-hosting, AI has been invaluable. Without the help of large language models, online documentation, and community forums, I probably wouldn’t have gotten half as far.

Installing Debian, setting up Docker containers, configuring my home network, automating backups: these are technical tasks with real learning curves, and a sketch of the kind of backup script I mean follows the list below. But with AI:

  • I can get unstuck when I hit cryptic error messages.
  • I can speed up routine tasks.
  • I can learn by doing instead of just reading passively.
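
To make that concrete, here’s a minimal sketch of the kind of backup script I mean, written with heavy AI assistance. Everything in it is illustrative: the source path, the rsync destination, and the log location are hypothetical placeholders, not my actual setup.

```python
# A minimal sketch of a backup script like the ones AI helped me write.
# The source path, destination, and log location are hypothetical
# placeholders, not my real setup.
import subprocess
from datetime import date
from pathlib import Path

SOURCE = Path("/srv/docker-volumes")   # hypothetical data directory
DEST = "backup-host:/backups"          # hypothetical rsync destination

def run_backup() -> None:
    """Mirror SOURCE to DEST with rsync and keep a dated log."""
    log = Path(f"/tmp/backup-{date.today()}.log")
    result = subprocess.run(
        ["rsync", "-a", "--delete", f"{SOURCE}/", DEST],
        capture_output=True,
        text=True,
    )
    log.write_text(result.stdout + result.stderr)
    if result.returncode != 0:
        raise RuntimeError(f"Backup failed; see {log}")

if __name__ == "__main__":
    run_backup()
```

An assistant can produce something like this in seconds. Understanding what the --delete flag can do to the destination if you point it at the wrong directory took me rather longer.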

But here’s the catch: AI didn’t make me a sysadmin overnight.

There’s a difference between:

  • Completing tasks with assistance
  • Mastering the underlying skill

The danger comes when we blur that distinction.

This is what the ColdFusion episode rightly emphasized: AI lets people do things they couldn’t previously — but it doesn’t automatically give them the competence or judgment behind the task.


AI and the Erosion of Agency

One of the most under-discussed risks of over-reliance on AI is the erosion of human agency and autonomy.

The scholar Melanie Mitchell has warned about exactly this, noting in a recent X post that as AI becomes more pervasive, we risk quietly ceding important domains of decision-making.

This isn’t just about small conveniences. Over time, dependence on these systems can hollow out:

  • Our confidence to act independently
  • Our resilience in the face of uncertainty
  • Our sense of ownership over our own choices

The philosopher Evan Selinger has written about “outsourcing our autonomy,” and the phrase perfectly captures the trade-off: when tools do too much for us, we sometimes forget they were only supposed to assist us.


The Education Angle: Where It Matters Most

Nowhere is this issue more acute than in education.

A generation of students is growing up with AI companions that can:

  • Solve math problems
  • Write essays
  • Generate art
  • Provide instant summaries of complex readings

If used poorly, this can short-circuit the very developmental tasks education is meant to foster:

  • Critical thinking
  • Perseverance
  • Creativity
  • Metacognition (awareness of how we think)

An article in Information Age warns that offloading too soon and too often in educational settings may rob young people of the chance to struggle productively, which is where deep learning happens.

Students need scaffolding, not substitution.


The Rise of Vibe-Coding and What It Reveals

Let me make this personal again.

When I started self-hosting, I leaned heavily on what you might call vibe-coding — pasting together config snippets, trying things based on context clues, and asking ChatGPT to help troubleshoot. I pulled off things I wouldn’t have dared attempt solo.

It felt amazing. It was also humbling.

Because even though I got things running, I was acutely aware: I didn’t own the knowledge. I was surfing on top of it.
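
To show what I mean, here’s a purely illustrative reconstruction of one of those vibe-coded artifacts: a watchdog stitched together from forum snippets and ChatGPT suggestions to restart a flaky container. The container name is hypothetical, and this is a sketch of the pattern, not a recommendation.

```python
# Purely illustrative: a watchdog stitched together from forum snippets
# and ChatGPT suggestions. The container name is hypothetical.
import subprocess
import time

CONTAINER = "nextcloud"  # hypothetical container name

def is_healthy(name: str) -> bool:
    """Ask Docker for the container's reported health status."""
    result = subprocess.run(
        ["docker", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True,
        text=True,
    )
    return result.stdout.strip() == "healthy"

if __name__ == "__main__":
    while True:
        if not is_healthy(CONTAINER):
            # I knew this command fixed the symptom; I did not yet
            # understand the cause.
            subprocess.run(["docker", "restart", CONTAINER])
        time.sleep(60)
```

It worked. But I couldn’t have explained, at the time, why the container kept going unhealthy in the first place; the script treated the symptom while the cause stayed a mystery to me.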

This distinction matters:

  • Augmented action isn’t the same as authentic mastery.
  • Rapid results aren’t the same as robust understanding.

And that’s okay — as long as we stay honest about it.


Real-World Consequences

AI offloading isn’t just an abstract risk; it’s already producing real-world failures.

Consider the infamous false arrests driven by facial recognition, detailed in The New York Times. When humans defer too quickly to flawed machine recommendations, the results can be catastrophic: wrongful detentions, algorithmic bias, unjust outcomes.

In the corporate world, a Forbes Australia article warns of another risk: if we flood the internet with AI-generated content, we poison the very well that future AIs rely on to learn, undermining both human and machine intelligence.


Finding the Balance

So where does this leave us?

Here’s the framework I propose — for myself, and maybe for you:

  1. Use AI as a scaffold, not a shortcut.
    Let it help you reach higher, but don’t skip the climb entirely.
  2. Be honest about what you know — and what the machine did for you.
    Avoid the temptation to claim skill you haven’t earned.
  3. In education, prioritize skill-building first, AI assistance second.
    Let students wrestle with difficulty; it’s where mastery is forged.
  4. Stay curious.
    Use AI not just to answer questions, but to help you ask better ones.


A More Mindful AI Future

AI should be a mirror and magnifier of human capability — not a quiet thief of our autonomy.

The challenge ahead is to:

  • Stay mindful of what we offload
  • Preserve space for deep learning
  • Protect agency, curiosity, and mastery

AI is not here to make us dumber. But if we’re careless, we may do that job all by ourselves.


I used AI to help me draft an outline and then proofread this post. If you're wondering whether that makes me a hypocrite, I'm wondering the same thing.