No, AI does not make us stupid

September 28, 2025

A recent MIT Media Lab paper claimed to have shown that heavy usage of AI causes a measurable reduction in brain activity, critical thinking, and independent learning.

I can see how this hit a nerve: in a world that often rewards fakery over substance, just going "tab tab tab" and otherwise giving your brain a rest is quite tempting. And we've probably all seen examples of this happening.

But this is not how I experience AI. In the two activities that take up most of my time - software engineering and writing documents - a day of working with AI agents leaves me spent. It feels like drinking from the proverbial firehose. Not only is everything much faster - it's also much more complex. And I have to make many more decisions in less time (both high-level conceptual decisions and low-level detail decisions, which of course also have interdependencies).

Software engineering

My tech stack...

For software engineering, I mostly use AI like this:

  • I develop a concept or workflow for some piece of software, often using Claude or ChatGPT as a sparring partner. Before or while working on the concept, I might have read a paper (such as this one on AI-augmented textbooks, seen in Matt Devost's excellent newsletter Global Frequency) or a blog article (like this one on a multi-agent architecture). These papers and blog articles often come with accompanying GitHub repos. I add the papers, or parts of them, to my Claude/ChatGPT conversations, and I usually also "earmark" parts of the code in those repos that I find potentially useful.
  • I draft a couple of UI variants in v0.
  • I mostly use Cursor for writing "backend" code. For example, database schemas, storage/retrieval functions, analytics code, code around LLMs, or the glue code between frontend and backend.
  • For writing tests and reviewing code, I use Qodo - in my opinion the best AI for code review I have seen so far. It's extremely good at catching bugs that most human reviewers fail to notice: for example, the infamous "mystery bugs" that only show up at runtime - often, though not always, because the software is multi-threaded or asynchronous, which makes it nearly impossible to think of everything that might go wrong in advance. (A minimal sketch of such a bug follows this list.)
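
To make this concrete, here is a minimal sketch of such a runtime-only "mystery bug" - my own illustration, not an example taken from Qodo or from any real codebase: a lost-update race in multi-threaded Python that reads fine, passes single-threaded tests, and fails only sometimes.

    import threading

    counter = 0

    def add_views(n: int) -> None:
        global counter
        for _ in range(n):
            # Looks atomic, but is read -> add -> write. Two threads can read
            # the same value, and one of the increments is silently lost.
            counter += 1

    threads = [threading.Thread(target=add_views, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Should print 400000; depending on interpreter version and timing, some
    # runs print less - exactly the kind of bug that is invisible in review.
    print(counter)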

And yes, I need all these agents because they all have different strengths and weaknesses - just as some people are particularly good at UI while others spot every weak point in your code.

...and what it has changed for me

Before AI, I could only focus on one aspect: either the high-level "why? what for?" or the low-level "how exactly?" of the implementation. And I had to find someone else to handle whatever I wasn't focusing on.

Now, with AI, I can work across the entire spectrum of a piece of software - directly connecting the "why" with the most low-level, first-principles design decisions about its inner workings.

And particularly for the low-level aspects, I can now dig much deeper, much more quickly, because AI helps me with the research.

Because I have to make all these decisions, and now have the opportunity to make them in a much more informed way, it feels much more like I'm learning new things every day than delegating my thinking to some AI and cruising through the day on autopilot.

Writing documents

My "AI way of working"...

When I say "document", I mean things like project proposals, solution briefs, or blog articles (including this one) - not simple emails or other short messages.

I've never been able to just raw-dog my way from initial prompt to finished document in one chat. Instead, I approach writing documents very similarly to writing software, and I use different tools:

  • ChatGPT and Claude for drafting and critiquing outlines, sentences, wordings, headers, titles, and paragraphs. I personally find the "critiquing" aspect particularly useful.
  • Spark for technical or market background research.

Equipped with these tools, I "construct" documents iteratively. I draft an outline, write individual paragraphs, titles, or phrases, and do some initial background research. This is the "hunter/gatherer" phase.

Then I put it all together. I ask an AI for a general critique of what I have so far. Based on that, I typically use some of the suggestions from ChatGPT/Claude, adapt others, and throw out the rest (quite a lot of them, usually). Or I ask for 10-20 different variants of a text passage, to surface wordings, content, or missing logical links that I might not come up with on my own.
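
For the variants step, the chat UI works fine, but it can also be scripted. Here is a minimal sketch - my own, with an assumed model name and prompt wording, not a description of my actual setup - that uses the OpenAI Python SDK's n parameter to request several completions of the same prompt in one call:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    passage = "AI does not make us stupid - it raises the rate of decisions."

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        n=10,            # ten independent variants in a single request
        messages=[
            {"role": "system",
             "content": "Rewrite the user's passage. Vary wording and emphasis; keep the meaning."},
            {"role": "user", "content": passage},
        ],
    )

    for i, choice in enumerate(response.choices, 1):
        print(f"--- Variant {i} ---")
        print(choice.message.content)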

As the material piles up and my chats get longer and longer, it becomes increasingly hard just to find earlier suggestions - ones that were good but no longer fit the current version of the document and now need reworking. Not to mention that when I change one paragraph or other part of a document, I will most likely have to adapt others too.

Add to that the different form factors a document takes: often I need the long-form document, a two-sentence summary, and perhaps an email version.

...and the impact

Similar to software engineering, my new, AI-augmented way of working lets me operate across a much wider spectrum of a document - all the way from the high-level "why" and "who's the target audience" down to the individual data points that need to be researched. And just like with software engineering, this means I make many more, and more varied, decisions in much less time.

My new job: air traffic controller for AIs, my own ideas, other people's ideas, and everything in between

My job now, with AI, feels like being an air traffic controller:

Rather than just "using" AI, it feels like I'm orchestrating dozens of incoming and outgoing flights: my own ideas, background research, suggestions from one AI, critiques from another AI, input from other people, and revisions that feed back into the loop.

Each "flight" has its own trajectory, timing, and dependencies. My role is to keep them from colliding, to decide which gets cleared for landing, which needs another holding pattern, and which will be rerouted somewhere else. That kind of constant coordination isn't passive downtime. It's constant decision pressure. Exhausting at times, but an effective mental workout.