I am back to thinking about artificial intelligence, which means that I am once more considering the Stafford Beer joke which is one of the central set texts of this blog:
“When businesses adopted the computer, they tended to use it to automate their existing processes. It was as if they had been given the opportunity to hire Mozart, Einstein and Galileo, then put them to work memorising the phone book so that executives could look up numbers more quickly”. Discuss[1]
This is definitely a worry; if you’ve heard me on a podcast telling this joke, you’ve heard me noting that if you believe most articles in the business press about AI, we’re on the verge of creating a godlike superintelligence which we will use to send more emails. The problem is not so much that sceptics aren’t taking AI seriously, but that too many people want a revolutionary, disruptive technology which doesn’t really change anything. What would management look like if, rather than going “an even cheaper and worse way to do customer service!” or “it’s like junior analysts who don’t need sleep!”, we started thinking – what management tasks could potentially be replaced by AI? As Baldur Bjarnason (thanks to Dave Gerard for the link!) says, it’s hard to take management’s embrace of AI seriously when they appear to want it to change the world only for other people.
I think this is interesting partly as a bit of futurism fun, but also because it might sharpen up our understanding of what management really is – and of which things managers do that aren’t really important, but just have to be done by managers because nobody else will do them. These thoughts are somewhat influenced by Henry Farrell and his gang’s essay on social technology[2].
(I’ve occasionally argued in the past that we used to think that memorising long texts was an important part of being a poet, or that accurate addition was an important part of being a mathematician; as technology improved, so did our insight into what these things were really about. The same point, about accurate representation of images and its relationship to art, is now a debate that only fogeys even have).
So, here’s a sort of “ladder of discomfort”. If any of you guys have more LinkedIn clout than I do (practically zero), do try this out on your networks and report back. I’ve thought of five managerial-type things which might be AI-substitutable, and arranged them in ascending order of how uncomfortable I think people would be in actually doing them:
1. Would you be comfortable in using an LLM to draw up your marketing strategy with a goal of increasing share by 10%?
I am pretty sure that a significant percentage of relevant managers are already doing this, at least “as a first step” or “just to help come up with ideas”.
2. Would you be prepared to allow an LLM to make hiring decisions for hourly-paid employees? For junior managerial employees? For senior management?
Obviously this is also being done right now for very junior posts, and I think you might get surprisingly far up the tree before people started to demur. Even if only as a cybernetic teddy bear, or as a way of rationalising decisions already made – but there are plenty of human managers in employment who also play this role.
3. Would you let an LLM agent handle your procurement, dealing with suppliers and negotiating prices autonomously? How much supervision would you feel the need to provide, in the best case?
My understanding is that the current state of agentic AI is not up to this task yet, but it’s definitely a live work programme. There is also quite a lively problem of security at this stage. Even with those problems assumed solved (and I think there would be a substantial “reject premise” element to the survey), I think quite a material proportion of managers would be reluctant to outsource this.
4. Rather than having a defined organisation chart, roles and divisions, would you be interested in a model where AI responded to strategic priorities by assigning people to functional and problem-solving teams on the basis of its assessment of their abilities, compatibility and the urgency and importance of the tasks?
In my view, this is table stakes for “are we really going to be able to use AI as a social technology for management?”, but I don’t think many actual managers would really be comfortable with it. Although I have a suspicion that quite a few of them, answering a survey on LinkedIn, would recognise this as the trick question it kind of is, and give an insincere answer.
5. Imagine a new AI-enabled email tool, which doesn’t have a “To:”, “CC:” or “BCC:” bar. You just type in the email or report that you want to produce and press send. The LLM, trained on all of your company’s data, decides who needs to see your email, with what priority, and who should be told who else it has been sent to. Assume for the purposes of this thought experiment that the LLM is much more advanced than any current chatbot.
My gut feeling is that almost nobody would go along with this. And now, I think, we have learned something about the essence of management.
[1] It should be noted here that this quote in this form is very much apocryphal, and summarised by me! On Friday, I’ll try to reproduce the accurate quote (or at least, the most conveniently available online version of it, as Stafford used it in several lectures), along with some of the great man’s gloss on it, in the latest episode of the “Beer Tasting Notes” series.
[2] Henry has the charming habit of saying at conferences that the audience should credit his co-authors with all the great insights and attribute any mistakes to him. I don’t believe for a moment this is true, so I invite my audience to do the opposite. Unless it’s something co-written with me, of course – if someone’s prepared to share the blame for my crap, I need all the help I can get.