Fish Food: Episode 647 - Superagency: Amplifying Human Capability with AI
AI extending human capability, how AI is changing jobs, ChatGPT for business, WPP's 30,000 AI agents, how social media has evolved, and what we lose with AI art.
This week’s provocation: On amplifying human capability through AI
There’s something very attractive about the idea of ‘superagency’, the emerging concept that sits at the intersection of AI, organisational design, and talent. It refers to the amplification of individual or team capability through AI, allowing people to operate with outsized influence, speed, and creativity. The idea is that we’re creating a compounding effect where AI augments human capacity in scope, scale, and speed, potentially turning a five-person team into the equivalent output of fifty.
Superagency is not just about augmenting human expertise and impact. It’s also about the adept delegation of tasks to AI agents or copilots, theoretically freeing up time to allow humans to focus on strategic, creative, and empathetic tasks that require the kind of judgment which only humans (so the thinking goes) can bring. Superagency occurs when humans and AI collaborate in such a way that individuals can do the work of entire teams, non-experts can make expert-level decisions with AI support, and time-consuming, cognitively heavy, or creative tasks are accelerated or augmented. Done well, it’s about scaling human intent, creativity, and impact.
I’ve been thinking about this concept since reading that post by Microsoft's head of HR AI strategy Christopher Fernandez that I shared a couple of weeks ago. In it he talked a lot about the potential for AI agents to amplify and connect people’s ideas across the organisation and to free up insights that are caught up in the confines of the org chart. Every staff member at Microsoft will be encouraged and supported to develop their own AI agent to help scale their expertise. We might even see these agents as a form of personal expression or a way of demonstrating value beyond a CV.
I love this angle for how AI extends, deepens, and connects a core of human capability and sensibility. My own framing for combining human and AI capability is to consider it as a scale across three vectors:
Task automation: AI as a direct replacement.
Capability extension: AI as an amplifier - a catalyst, not a crutch.
New horizons: AI as an exploration partner that opens up new possibilities and helps us get to places we couldn’t have reached on our own.
Considering the balance of AI and human agency across this scale can help us identify the specific roles that each can play.
But I’d like to flag two specific risks that come with the increasing agency of AI in the workplace.
Risk One: The illusion of expertise
One of the biggest potential risks of superagency is that using AI tools can have the less than desirable effect of leaving us thinking that we are experts in a topic when we are not (hello Dunning-Kruger). Some studies have shown that, in using GenAI tools, users can misattribute the model’s skill to themselves. This University of Washington study into how the use of AI tools affects investor judgments found that whilst the investors’ confidence went up, often due to misattributing the AI’s abilities to themselves, their accuracy of judgment went down.
Another Yale/Princeton study into the use of AI in scientific research found that scientists were attracted to the productivity and objectivity gains that AI can bring but that: ‘AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we actually do.’ These illusions can leave us blind, says the study, to the formation of monocultures in which certain methods and viewpoints can dominate, making science more vulnerable to errors and less innovative.
This becomes particularly dangerous when the AI can appear extremely knowledgeable and believable, leading to overconfidence in its abilities. A recent study by the University of Tokyo found that large language models can be fluent yet nonsensical in a way that mirrors a brain disorder, making errors feel authoritative. So-called automation bias can mean that people accept AI output uncritically, even when they are aware of previous hallucinations or limitations. This recent Apple machine learning paper on the strengths and limitations of reasoning models articulates well how, in Chain-of-Thought (CoT) reasoning, LLMs have not been designed with the capacity to deal effectively with particular kinds of complexity, and in these contexts accuracy collapses beyond certain points. As Stuart Winter-Tear notes: CoT is ultimately a ‘highly structured form of mimicry’ and ‘the tragedy isn’t the failure of reasoning - it’s the illusion of it’.
That last example shows the need for deliberate design in matching models and tools with specific problem domains or scenarios in the business, but there are also a couple of systemic approaches for designing in failsafes to help avoid this. The idea of red teaming can apply in both human and AI contexts. In the human version of red teaming, a specialist team actively looks for vulnerabilities in the process, stepping into workflows to identify poor outputs, misattribution or over-reliance on the AI. In the AI version of red teaming, a different model would be used to stress-test the main model’s functionality and cross-check high-stakes outputs under real-world conditions. Teams can even have this as a habitual part of their workflow (I’ll often use another AI model to check and improve the outputs of the main model I’ve used, for example - a rough sketch of that habit is below).
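To make that cross-checking habit concrete, here’s a minimal sketch in Python: one model drafts an answer, a second independent model is asked to critique it, and a human sees both before acting. The two call_* functions are placeholders for whichever model APIs you actually use, so treat this as an illustration of the pattern rather than working integration code.

```python
# A minimal sketch of "AI red teaming" as a workflow habit: generate with one
# model, then have a second, independent model critique the output before a
# human acts on it. The call_* functions are placeholders, not a real SDK.

def call_primary_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your main model's API")


def call_reviewer_model(prompt: str) -> str:
    raise NotImplementedError("wire this to a different model's API")


def generate_with_cross_check(task: str) -> dict:
    draft = call_primary_model(task)

    review_prompt = (
        "You are acting as a red team reviewer. Critically assess the answer "
        "below for factual errors, unsupported claims and overconfidence, and "
        "list specific concerns.\n\n"
        f"Task: {task}\n\nAnswer: {draft}"
    )
    critique = call_reviewer_model(review_prompt)

    # Return both so the human sees the critique alongside the draft,
    # rather than accepting the first output uncritically.
    return {"draft": draft, "critique": critique}
```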
Another check against misattributed expertise or poor judgement is to use the Feynman Technique (named after the physicist Richard Feynman), which is the idea that to demonstrate that you really understand something you have to be able to explain it or teach it simply to someone who knows nothing about it (often imagining a child). You can only simplify a complex topic effectively when you really understand it, and so it’s a great test of true understanding.
Having staff restate answers or solutions in plain English before acting (or at least to be more aware of how the AI is using inputs to arrive at outputs) helps avoid Dunning-Kruger. Having a culture that supports humility and psychological safety makes it more likely that staff will be open about the limitations of their expertise rather than trying to bluff it with AI. Young surgeons often learn their craft using a ‘see one, do one, teach one’ technique. Since the consequences of failure are high, they begin by observing a procedure first, before then conducting it themselves under strict supervision, and then teaching it to other students. These three stages of learning cement the understanding and ensure that there is no opportunity for bluffing.
Risk Two: The erosion of agency, motivation and engagement
One of the not insignificant risks with the increasing encroachment of AI agency is that the resultant erosion in human agency has adverse effects. We like to talk about AI doing all the mundane tasks, freeing humans up for ‘higher level’ work. But if AI agents are going to be doing all the execution work, there is a risk of eroding staff autonomy, and the satisfaction that comes from making a decision, acting on it ourselves, and seeing positive results.
This study into algorithmic management (the management of humans by algorithms, as seen with Uber and Instacart drivers) raises concerns about turning so-called ‘good bad jobs’ into disengaging piecework, and also shows how some workers actively try to game the system to regain some level of autonomy. Integrating AI agents into workflows may boost initial performance and efficiency, but over the longer term it risks decreasing human engagement in the work and, even worse, skills atrophy and boredom. This study by researchers in China, for example, found that human-generative AI collaboration enhances task performance but undermines humans’ intrinsic motivation. As Rahim Hirji puts it:
‘Here’s the sequence I keep seeing. AI improves output. Fewer people are needed. Human roles move from creation to coordination. Coordination removes authorship. Disengagement creeps in. Nobody measures that. So it doesn’t get fixed. And what looks like progress becomes something else entirely. A team that looks productive from the outside but is slowly becoming irrelevant on the inside.’
If AI is freeing up humans for ‘higher level’ work, what is that ‘higher level’ work? What do we expect people to do differently? How can they maintain rights over AI outputs and visibility into decision rules so that the feeling that their decisions matter is not completely eroded? How can we make this huge transition in a way that makes work more, not less, meaningful?
How many leaders right now are thinking about these questions? McKinsey’s report on Superagency notes that employees are ahead of leaders on this mindset in wanting to experiment with and understand what the human/AI dynamic is going to mean for them. Unfortunately what seems to be happening is a siloed approach that disempowers employees (sidenote: someone told me the other day about a company where they had blocked the use of specific AI tools but staff were just using them anyway on their phones) and which is disconnected from the reality of how stuff gets done (see ‘bottom up trap, top down fantasy’).
If the majority of people are already disengaged from their work, AI can either help to remedy that or compound it. It’s our choice. We have an opportunity here to ensure that people have more agency, not less, because AI removes bottlenecks and enhances their ability to execute ideas, explore options, or solve problems. But only if we take a considered approach. Dan Pink famously wrote about the power of three intrinsic motivators in the workplace: autonomy, mastery, and purpose. If we don’t manage this change well we risk destroying all three.
Rewind and catch up:
AI, and inflection points in the creative industries
If you do one thing this week…
This new PwC survey (PDF) is an interesting read - it claims that between 2018 and 2024, industries that are most able to use AI achieved roughly three times the growth in revenue per employee (27%) compared to the least exposed sectors (8.5%). Alongside this they say that the skills required in ‘AI-exposed jobs’ are changing rapidly (66% faster than in less exposed roles, in fact), a change which is particularly dramatic in jobs which are automatable. These roles, they suggest, are evolving towards higher value and more complex tasks. Perhaps (see above).
Links of the week
The latest ChatGPT for Business updates really show where we’re headed. It now connects to internal tools and cloud services (SharePoint, Drive, Teams, Gmail), meaning it can pull context and data straight from your company systems. Admins can use MCPs (there’s a big old BCG report on agents and MCPs here if you’re so inclined) to build connectors to proprietary databases, and it can also capture audio, transcribe speech, and summarise actions. HT Zoe Scaman
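For a flavour of what one of those connectors involves, here’s a rough sketch of a minimal MCP server that exposes a single internal lookup as a tool, using the FastMCP helper from the MCP Python SDK. The server name, tool, and fake CRM data are invented for illustration, and the SDK docs are the place to check the current API surface rather than treating this as definitive.

```python
# A rough sketch of an MCP server exposing one internal lookup as a tool,
# using the MCP Python SDK's FastMCP helper (pip install mcp).
# The tool and the fake data below are illustrative, not a real connector.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-crm-connector")

# Stand-in for a proprietary database; in practice this would be a real query.
_FAKE_CRM = {
    "ACME-001": {"name": "Acme Ltd", "owner": "j.smith", "status": "active"},
}


@mcp.tool()
def lookup_account(account_id: str) -> dict:
    """Return basic CRM details for an account ID, or an empty dict if unknown."""
    return _FAKE_CRM.get(account_id, {})


if __name__ == "__main__":
    mcp.run()
```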
Also HT Zoe, ElevenLabs’ latest V3 text-to-speech update is very impressive indeed. HeyGen have also launched their new AI Studio, which features a voice director (control emphasis and tone so your story sounds as you want), voice mirroring (upload your voice and let your avatar reflect your natural pacing, tone, and emotion), and gesture control (map natural movements to your script, such as facial gestures and hand motions that feel real). I’ve used HeyGen and in a matter of seconds it had a video of me speaking French like a native.
As previously highlighted, Meta are going all in on automating the full advertising process.
Meanwhile, WPP have launched ‘AgentBuilder Pro’, which they say is a significant upgrade to their existing AI agent builder tool that is already powering 30,000 agents inside the network, apparently.
I liked Richard Huntington’s deceptively simple value grid for positioning brands and competitive value.
This was a good short take on how social media has evolved, moving from a follower-based chronological feed to a discovery-based algorithmic feed, and how it may now be evolving back again via ‘algorithm hangovers’ and a return to community (Discord, Snap).
And finally…
When I did my talk to creative production agencies about AI recently, I tried to articulate this idea, but Dr Rebecca Marks probably does a better job in this short video. She talks about Walter Benjamin, who wrote ‘The Work of Art in the Age of Mechanical Reproduction’ in the 1930s as a response to changes in visual culture, describing how in the process of mechanisation we risk losing what he calls ‘aura’ - a hard-to-define quality that can only come from authorship and the process of creation. Will AI in art mean that we lose the ‘aura’? Then again, when photography was invented people thought it would replace painting, but instead it led to new forms of expression for artists and to the creation of movements like Impressionism, Modernism, and abstract art.
Weeknotes
This week I began a mini-European road trip and I’ve been in Paris for a couple of days now. I’ll be delivering a leadership workshop over a few days before heading to Amsterdam to do some work next week with my Diageo client.
Thanks for subscribing to and reading Only Dead Fish. It means a lot. This newsletter is 100% free to read so if you liked this episode please do like, share and pass it on.
If you’d like more from me my blog is over here and my personal site is here, and do get in touch if you’d like me to give a talk to your team or talk about working together.
My favourite quote captures what I try to do every day, and it’s from renowned Creative Director Paul Arden: ‘Do not covet your ideas. Give away all you know, and more will come back to you’.
And remember - only dead fish go with the flow.