I had an interesting conversation with a colleague after my last post, about whether it was possible to get the best of both worlds, or if any new tool would necessarily make you dumber. And I had to stop and think – what was it about an IDE with autocomplete that seemed unobjectionable, while LLM-based AI was a path toward damaged skills? I found myself thinking about the distinction between tools and agents drawn by Ben Thompson in his article Tech’s Two Philosophies.
In Google’s view, computers help you get things done — and save you time — by doing things for you.
…technology’s second philosophy [is] orthogonal to the other: the expectation is not that the computer does your work for you, but rather that the computer enables you to do your work better and more efficiently.
In this definition, a “tool” is a technology that amplifies our abilities, helping us to leverage our skills more effectively. “Agents”, on the other hand, are technologies that perform tasks for us, obviating the need for us to do them at all. When we use tools, we’re continuing to reinforce core skills, whereas using agents negates the need for these skills.
This makes sense to me, but feels incomplete. Because even tools replace skills. I used to be able to memorize phone numbers, but now I have a phone that does that for me. If I didn’t use an IDE with autocomplete, I’d have to remember function names. I use a high-level programming language instead of assembly, which prevents me from reinforcing my understanding of low-level memory constructs.
And, in fact, these are usually the examples people trot out when talking about AI. It’s just the same as using an IDE, they say. Or using a high-level language. Just one more step up the ladder of abstraction.

To make sense of this, I think we need another distinction: between deep and shallow skills. Being able to play a musical instrument, speak a foreign language, weave a tapestry, paint a beautiful oil painting, or architect and implement a complex piece of code – these are all deep skills that take years to learn, thousands of hours of intentional practice, and a lifetime to master.
On the other hand, learning the command line arguments to a bash command, memorizing phone numbers, or learning how to navigate a particular website – these are all shallow skills. We pick up shallow skills all the time, and we forget them all the time, and this isn’t a big deal because the cost to learn them again is so low.
As such, we can define tools as technologies that specifically replace shallow skills. When my IDE autocompletes a function name, I no longer have to remember the exact name or the function signature – the IDE provides this information, and I can hold it in short-term memory instead of really learning it. But the cost of not knowing the exact function name is low, and easily remedied if necessary. I haven’t memorized phone numbers in decades, but I could start doing it again if I had to. Even becoming effective at a new programming language (or going back to assembly, heaven help you) is fairly straightforward. The specific language is shallow; the core software engineering and problem-solving skills are deep.
But when I tell an agent to write code for me, I’m replacing a deep skill. I’m no longer exercising the skill I’ve spent decades learning – no longer learning new techniques, reinforcing existing knowledge, thinking critically about how the code works, or evaluating whether it needs refactoring. Over time, this is corrosive to the deep skills I’ve developed over a lifetime, skills which have genuine value.
Using an agent is a lot like management. As a manager, I define goals, create tasks with sufficient detail (prompts) for someone else to do them, then assign them to engineers on my team (coding agents). I expect other engineers (code review agents) to look over the PRs. This isn’t software engineering – this is engineering management. I have a hypothesis about why AI tools are so attractive to even well-meaning senior leadership: although they were once strong technologists, they eventually moved into management, found success, and let their own deep technical skills wither away. Prompt engineering makes sense to them, because it’s what they do. And when they play with the tools themselves, it’s easy to build toy apps or throwaway prototypes. Surely production code requires just a little more effort!
It’s easy to make fun of management, and it’s common for individual contributors to deride the value of managers, executives, or that most hated class, middle managers. But what the pointy-haired boss jokes miss is that management and leadership are themselves deep skills, and critical ones, and that effective managers haven’t thrown away their technical skills for nothing – they’ve traded them for a different set of deep skills.
BUT. When you let your deep skills gather dust and replace them with shallow ones (the whole sales pitch of AI is that anyone can pick it up easily), you’re reducing your value in one area with no corresponding gain in another. It’s like giving up chess to focus on tic-tac-toe. Part of the value of working at a company with interesting technical challenges is gaining experience and increasing your personal long-term value, independent of the company. But the promise of AI is an increase in productivity paired with a loss in personal value, not a gain. Whether it delivers on the productivity is a question we can debate (I think you know my opinion); that it damages your skills has already been demonstrated in multiple studies.
To be clear – there’s nothing wrong with using an agent. We all do it all the time. I rely on other people to grow the food I eat, build the home I live in, clean the water I drink, develop the medicine I consume, and a thousand other things that I don’t even know about. None of these are skills I’ve developed, and consciously or unconsciously, I’ve decided to delegate these responsibilities to other people, who in turn delegate some tiny set of their responsibilities to me.
The problem comes when we replace ourselves with agents. Even in the optimistic scenario where you’re able to generate a quality product, you’ll be replacing your deep skill with a shallow one, and all of the skills you could be building, maintaining, and expanding will wither away over time.
This, then, is the answer to our question. Tools make us smarter, because they clear away low-value tasks and allow us to focus on high-value activities that exercise our deep skills. Agents make us dumber, because they perform the tasks that require deep skills, and replace them with tasks that only need shallow skills.