
In 2013, a federal task force released a 267-page report concluding that pilots relied too heavily on automation and should be required to improve their manual flying skills.
This followed a 2011 study showing many pilots had trouble manually flying a plane or handling automated controls.
A few years ago, the FAA released a much stronger statement on the topic:
“The FAA urged pilots in coming years to sometimes hand fly ‘entire departure and arrival’ routes, or ‘potentially the entire flight.’ Such practices are supported by years of FAA research and Congressional directives. However, they generally are shunned by airline management focused on utilizing automation to improve fuel economy and maximize smooth rides for passengers.”
This is, by and large, a debate around automation and its effect on human performance and cognition. Though much more is at stake with air travel, we are dealing with the same thing in knowledge work right now.
Generative AI, when first launched, became a way to “defeat the blank page” or spruce up some copywriting.
It then became a sparring partner.
Of course, it completely restructured the craft of programming.
And every day, it is getting both more powerful and more accessible (for example, it is functionally very easy to get Claude Cowork to do remarkable things for you even without “prompt engineering” or any advanced technical skills).
All of this has drastically improved the productivity of most knowledge workers, including your author.
However, the ability to use AI tools well – let’s say, for marketing purposes, in a way that produces better results than your competitors – is contingent on the very abilities that AI tools abstract away (writing, domain expertise, and at the very center of it all, critical thinking).
In other words, a person who is good at writing and critical thinking and has domain expertise will get better results from AI than someone without those things, yet over-reliance on AI tools tends to degrade those very things. Thus the atrophy paradox.
The FAA understood something that most industries haven’t caught up with yet: proficiency is use-dependent. Stop using a skill, and the skill quietly degrades, even if you don’t notice it degrading, even if your outcomes stay the same, because the tool keeps compensating for the loss.
Marketing has no equivalent mandate. And we’re adopting AI tools faster than aviation ever adopted autopilot.
The paradox isn’t that AI tools are bad. They’re not. They’re extraordinarily useful. The paradox is that the tool’s effectiveness depends on a skill that the tool itself erodes. The better the autopilot, the worse the pilot. Unless you deliberately maintain the underlying capability.
The Brain on Autopilot
There’s a growing body of research on what happens cognitively when we hand work over to machines.
In 2025, researchers at the MIT Media Lab used EEG to measure brain activity while participants wrote essays, some with AI assistance and some without.
LLM users showed less brain connectivity, weaker linguistic markers of deep processing, and lower overall cognitive engagement.
A separate study published in Societies surveyed 666 participants and found a significant negative correlation between frequent AI tool usage and critical thinking abilities, an effect largely explained by cognitive offloading (the act of transferring mental work to an external system). The more you offload, the less the underlying cognitive muscles are engaged.
There’s a principle in learning science called the generation effect: you remember information better when you generate it yourself than when you passively receive it. Writing activates more brain regions than reading. It’s the Feynman Technique in a nutshell. Struggling with a problem encodes the solution more deeply than being handed the answer.
A fascinating 2000 study by Eleanor Maguire at University College London showed that London taxi drivers (who must memorize some 25,000 streets to earn their license) had measurably larger posterior hippocampi than the general population and, in follow-up work, than bus drivers (who drive the same streets but follow fixed routes).
And when cabbies retired and stopped navigating, the hippocampus shrank back (like a muscle you don’t use).
Now, when I use GPS, I believe I’m at a distinct advantage over not using it. I remember (vaguely) printing out maps from Mapquest in the earliest days of my driving career.
So it may very well be the case that you don’t need to build a manual muscle for everything (and in fact, it would be very beneficial to abstract away certain cognitive processes to focus on more valuable and generative things).
The science shows a pretty clear pattern, though, and it poses some interesting questions for SEOs about intuition, skill, and domain expertise:
- When you use AI to generate a keyword strategy, you arrive at the output, but do you develop the strategic intuition that comes from wrestling with the data yourself and discerning the relevance and strategic impact?
- When you use AI to write the brief, do you build the audience empathy that comes from struggling to articulate the insight?
- When you use AI to draft the content plan, do you encode the editorial judgment that comes from making hard tradeoffs about what to cut?
Again, it’s possible that some of these things are better off abstracted away. It’s almost certain, in fact (e.g. few would argue for the manual implementation of internal links).
The Sameness Trap
There’s a second-order effect here that compounds the individual skill loss, and it’s happening at the market level.
When everyone offloads the same cognitive work to the same models, you get convergence. Not because of the simplistic and inaccurate assumption that the models will give everyone the same outputs (that’s not how they work), but because most people aren’t directing them well. They’re prompting from the same starting points, with the same assumptions, accepting the first outputs, and shipping.
The result is a signal-to-noise crisis, which is clearly happening and not getting better. Every year, I speak to several dozen marketing leaders and VCs to understand what major trends, challenges, and shifts are happening in our world. The most common theme: it’s hard to stand out. There’s too much noise. Too much content. A flood, a saturation, a sea of sameness.
We’ve heard some version of that every year now.
Platforms are, of course, doing what they can to combat that through better filtering. Helpful content updates, algorithm changes, penalties, etc.
My contrarian take here is that AI is not the cause of this, or at least not the root cause. The root cause is cognitive offloading; AI just makes it faster and easier. The same marketer who ran a checklist playbook of homogeneous content and ultimate guides used to do so quite manually. Now they do it at the click of a button.
The differentiator has always been the context, the creative thinking, and the je ne sais quoi that allows a piece of content or campaign to stand out from the noise.
Usually, this springs from some idiosyncratic judgment that comes from wrestling with the problem yourself, from your particular experience and context and taste (that ever-elusive concept that everyone in San Francisco seems to have suddenly converged on as being of critical importance).
I wrote in a previous Field Notes about Aaron Franklin, the pitmaster in Austin whose barbecue is worth a four-hour wait. In a great MasterClass, Franklin talks about the tannic acid in post oak wood and how it affects the bark on a brisket. It’s that level of vocabulary and fidelity, that resolution of understanding, that is invisible to the consumer but makes the entire difference between Franklin and everyone else.
A Necessary Caveat
I want to be careful here, because the wrong reading of this essay is that AI tools are bad and we should all go back to doing everything by hand. That’s not the argument. De-skilling, in and of itself, isn’t inherently bad. Every technology de-skills something.
Calculators de-skilled mental arithmetic but democratized math. Literacy de-skilled epic feats of memory but created new powers of analysis. Spreadsheets de-skilled manual accounting but enabled financial modeling at scales previously impossible. Google de-skilled library research but made information universally accessible. In each case, we lost a skill and gained a capability.
The question I ponder is: which skills can I afford to lose, and which ones can’t I?
I think the answer requires a distinction between two types of cognitive work.
The first type is routine execution: formatting, first drafts, data cleaning, pattern matching across large datasets, translation between formats, summarization.
This work can be offloaded without meaningful loss, and often should be. The cognitive effort it requires doesn’t build strategic capability. It’s the mental equivalent of bus driving: you’re covering the same route, and the repetition doesn’t compound into deeper understanding.
The second type is judgment: knowing which question to ask, recognizing when data is telling the wrong story, developing audience intuition, building editorial taste, sensing when a strategy has run its course.
This work is contextual, experiential, and built through repetition in ways that can’t be compressed into a prompt. It’s taxi driving. It requires navigation, not route-following. And it atrophies when you stop doing it.
Nassim Taleb puts it well in Antifragile: “When you have trial and error you outperform someone who knows because convexity matters a lot more than knowledge.”
The Hand-Flying Mandate for Marketers
So what’s the marketing equivalent of the FAA’s hand-flying requirement? If we accept that certain cognitive skills atrophy without deliberate practice, what do we actually do about it?
A few things I’ve been thinking about and, in some cases, implementing:
First – and this is probably underrated – you can use AI to rapidly learn almost anything (including, and especially, how to use AI).
Back in the day, I used to write a lot of R and Python. I wasn’t very good at it, but I spent a lot of time doing it. Now, I’m using Claude to help me build scripts, and through the process, I’m watching it “think” and asking it to explain certain things, which, paradoxically, is helping me learn more about R and Python than I did when writing everything myself.
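To make that concrete, here’s a minimal sketch of the kind of explain-as-you-go loop I mean, using the anthropic Python SDK. The model name is a placeholder and the R snippet is just an example; treat this as a pattern, not a prescription.

```python
# A minimal sketch of "learning while offloading": have Claude help
# build the script, then ask it to narrate the reasoning behind each
# step. Assumes the `anthropic` SDK is installed and ANTHROPIC_API_KEY
# is set; the model name below is a placeholder.
from anthropic import Anthropic

client = Anthropic()

r_snippet = """
library(dplyr)
queries %>%
  group_by(query) %>%
  summarise(ctr = sum(clicks) / sum(impressions))
"""

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Explain this R snippet line by line, including why "
                   "group_by comes before summarise:\n" + r_snippet,
    }],
)

print(response.content[0].text)
```

The output matters less than the habit: every generated script becomes a small lesson instead of a black box.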
Second, it probably helps to have some sort of writing practice, and not just if you’re a writer. Writing is thinking, in some sense. It depends a lot on what the utility of your writing is, of course. But I write so I can wrestle with ideas, deeply internalize concepts, and make better decisions. I think maintaining a practice here is useful.
It’s also useful, arguably, because the English language is the hottest new programming language. Being articulate is now somewhat like being technical: the better you can explain what you want, the better the results you get. So I think a writing practice will become even more valuable as AI writes more of our collective content.
Every once in a while, pop the hood and do a manual check, even for a boring, rote process. Scan a website page and do a heuristic analysis. Pull up a SERP and review it by hand. Not every time, but every once in a while. Keep the Fingerspitzengefühl sharp.
Protect the thinking time. The pressure to use AI comes partly from speed expectations. If AI makes content production ten times faster, organizations tend to fill the reclaimed time with more production, not more thinking.
And probably, spend more time offline. Have conversations with customers, and with people in general. The digital echo chamber results in memetic isomorphism, which erodes competitive advantage.
I don’t know where the line will be drawn between what’s best to accomplish manually and what’s best to accomplish with AI, so right now, I’ve been trying to do everything with AI, mainly to stress-test it and learn the systems.
But I also have a training routine that I follow to keep my skills and thinking sharp.
Right now, there’s probably more skill to be gained by pushing the outer limits of AI tools. It’s a renaissance when you feel it “click” and realize how much you can build, create, and of course, automate.
But it does warrant the longer-term question: what skills do I want to protect?