Fish Food: Episode 638 - Is it worth learning how to prompt?
The future of prompting, the path to ASI, AI 'steerability', the power of MCPs, and the 14 most life-changing nonfiction books of all time
This week’s provocation: The future of prompting
Learning how to prompt better was a real moment for me in the early days of using LLMs. Understanding how to use even quite basic prompting techniques dramatically improved the quality of outputs from AI tools and gave me the motivation to truly integrate their use into my daily workflow. Effective prompting directly improves the accuracy, creativity, and relevance of AI outputs, and enables more useful, meaningful, and actionable answers. It’s a force multiplier of value.
But given how fast AI capability is improving, surely interacting with AI tools in this way will soon disappear? Several people I’ve spoken to recently seem to think so, but I’m not so sure. Whenever I run the IPA Advanced Application of AI in Advertising course I have to update the deck quite significantly to take account of all the latest developments (things are changing so fast), but good prompting still provides the foundation - for several reasons.
More than anything, to do a good job AIs need to understand user intent and context. They will undoubtedly get much better at using advanced contextual awareness and memory to infer meaning from relatively vague inputs. But the subtlety of good prompting, particularly for nuanced or complex requirements, means that it remains (and will likely remain for some time) the best way of articulating this. AI capabilities may be advancing rapidly, but as they do the main bottleneck in value is shifting from model performance to the user’s ability to articulate intent clearly.
More than this, effective prompting gives you an excellent understanding of how to work with AI. There’s really no substitute for conversing with an AI to understand its strengths and limitations, and how to get the most out of the machine. It teaches you how to refine and evolve instructions to continuously improve AI results. It allows for the development of the kind of idiosyncratic habits that enable you to generate unique and singular value.
As AI becomes more powerful and versatile, skilled prompting enables us to effectively exploit new functionalities like chained reasoning and multimodal inputs. The mistake that many make, I think, is looking for the perfect prompt right from the start. Despite what all the LinkedIn gurus say, there is no such thing. So learning how to interact with an AI to fulfil specific needs is a foundational skill.
I also think that prompting can help us to think better in an AI/human process. It can develop a user’s critical thinking skills, forcing them to clarify their objectives, assumptions, and constraints explicitly. Thoughtful and intentional use of AI can help us to be more thoughtful and intentional about what it is that we’re actually trying to do.
Having said all that, prompting IS likely to change in some notable ways. The increasing ability to interact with AI through dialogue, gesture, and even tone of voice and emotion will enable AIs to interpret a far wider set of inputs than just text, data or images alone. It’s likely that AI systems themselves will increasingly handle prompt refinement and optimisation, automatically rewriting or clarifying user inputs in real time. Highly personalised or custom-trained AI systems will become familiar with individual and grouped user behaviours, preferences, and communication styles. AI interfaces that are embedded seamlessly within productivity and workflow applications will infer context from user activity. These context-rich integrations will mean that prompts are more likely to become short contextual nudges rather than carefully crafted instructions. Iterative refinement and quick feedback loops, where the AI continuously refines outputs based on simple user reactions, are already important, and they’re likely to become more so.
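If you’re curious what that kind of feedback loop looks like in code today, here’s a minimal sketch assuming the OpenAI Python SDK (any chat-style API would work the same way) - the model name and helper function are purely illustrative:

```python
# A minimal sketch of an iterative refinement loop: the user reacts,
# the AI refines. Assumes the OpenAI Python SDK and an API key in the
# environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def refine_loop(initial_prompt: str, model: str = "gpt-4o") -> str:
    # The running conversation is the context the model refines against.
    messages = [{"role": "user", "content": initial_prompt}]
    while True:
        response = client.chat.completions.create(model=model, messages=messages)
        draft = response.choices[0].message.content
        print(draft)
        feedback = input("Feedback (or 'done'): ")
        if feedback.strip().lower() == "done":
            return draft
        # Each quick reaction becomes a short contextual nudge,
        # not a carefully crafted instruction.
        messages.append({"role": "assistant", "content": draft})
        messages.append({"role": "user", "content": feedback})

final = refine_loop("Draft a 100-word summary of why prompting still matters.")
```

The point isn’t the tooling, it’s the shape of the loop: each reaction is a small nudge that builds on everything that came before.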
But even as AI grows more intuitive, certain tasks (notably creative, technical, or those that involve complex reasoning or specialist domain-specific contexts) will still benefit significantly from highly specific prompting. And the underlying skill of articulating clear intent and relevant context remains an excellent foundation for future interaction paradigms. So yes, it’s still worth learning how to get good at prompting.
P.S. If you did want to get better at prompting, this comprehensive Google guide is a pretty good place to start.
If you do one thing this week…
‘We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.’
AI 2027 is a pretty incredible month-by-month prediction by some credible voices in AI (including a former OpenAI researcher and the founder of the Center for AI Policy) on how AI will mature over the next couple of years.
They think that it's realistic to believe that ASI will be achieved by the end of 2027 - far sooner than many predict - not least because of compounding, exponential progress in the rate of research and innovation.
Key predictions include:
2025 - the spotlight is already on AI agents, but by the end of this year we'll see agents that are much more useful and accurate than those that exist today
2026 - this is where the compounding impact of self-improving systems really kicks in, as AI models start to work on and enhance themselves, leading to rapid progress
2027 - by this point the top performing AIs could be outperforming all human researchers, not just at structured tasks but also in solving major problems in science, maths, and medicine
The authors then present an option for the reader to choose between two different post-2027 scenarios: a 'slowdown' and a 'race' ending. In the latter, we accelerate towards misaligned AI (where AI is not aligned with human goals and pursues objectives of its own).
It's a slightly terrifying prospect, but the authors also present a second scenario where the brakes are put on development and we are able to retain all the benefits accrued and go on to achieve even more breakthroughs.
It's a fascinating piece of work, and a stark warning of a very real fork in the road that we'll need to navigate as a human race. If you want to dive even deeper into this there’s a three hour (!) podcast discussion on the predictions by two of the authors here.
Links of the week
Another week, another AI model - this time it’s Meta, who announced their new Llama 4 family of models (‘the beginning of a new era of natively multimodal AI innovation’). OpenAI closed a round of funding - the largest ever for a private technology company - valuing them at $300 billion. And Amazon released Nova Act, an AI agent that can complete tasks via a web browser, and are also experimenting with a new agentic ‘Buy for me’ feature which allows shoppers to easily buy from other sites from within the Amazon app (HT Dan Calladine for that last one).
Some interesting research from Anthropic showing that large reasoning models (which show their working as well as the eventual answer) don’t always give a faithful account of what the model was actually thinking as it reached its answer.
Speaking of alignment issues, this was a good post from Helen Toner on why the core challenge of AI alignment is ‘steerability’ (HT Sean Betts)
‘Model Context Protocol (MCP) is changing how AI works by establishing a shared language that allows different AI systems to collaborate without human intervention’. I mentioned MCPs last week and I liked how Rahim Hirji described how this enables AI agents to come together ‘like ants forming a superorganism’, and how the power of MCPs comes from three things: context preservation, role clarity, and handoff efficiency
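If you want a concrete feel for how that works, here’s a minimal sketch of an MCP server using the official MCP Python SDK - the server name, tool, and data are all invented for illustration:

```python
# A minimal sketch of an MCP server exposing one tool, using the official
# MCP Python SDK. The server name, tool, and data are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("research-notes")  # illustrative server name

@mcp.tool()
def summarise_notes(topic: str) -> str:
    """Return stored notes on a topic so another agent can pick up the context."""
    # A real server would query a datastore; hard-coded here for brevity.
    notes = {"prompting": "Key finding: iterative refinement beats one-shot prompts."}
    return notes.get(topic, "No notes on that topic yet.")

if __name__ == "__main__":
    # Any MCP-capable client can now discover and call this tool over the
    # protocol: context is preserved, roles are clear, handoff is cheap.
    mcp.run()
```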
An interesting HBR piece on how GenAI is transforming market research
Managing uncertainty well feels like a foundational need in today’s world (especially given what happened this week). Some common themes with the work that I do here, but JP Castlin had a good take on how to actually manage uncertainty
A good presentation from Doug Shapiro on the state of media focusing on four tectonic trends: stagnation, fragmentation, disintermediation and concentration of power (HT Pete Marcus)
A good set of nine principles for succeeding in the era of AI, from Ian Leslie, including ‘Don’t be human slop’.
And finally…
This was a pretty good list of the 14 most life-changing nonfiction books of all time. I’d actually read a few of them, but there were plenty on there to add to my tsundoku pile.
Weeknotes
This week I’ve come out to Dubai to do some work with PwC, and also with a bunch of media types who want to understand better how to navigate technology-driven change. I’m tagging on a few days of holiday which I’m properly excited about.
Thanks for subscribing to and reading Only Dead Fish. It means a lot. This newsletter is 100% free to read so if you liked this episode please do like, share and pass it on.
If you’d like more from me my blog is over here and my personal site is here, and do get in touch if you’d like me to give a talk to your team or talk about working together.
My favourite quote captures what I try to do every day, and it’s from renowned Creative Director Paul Arden: ‘Do not covet your ideas. Give away all you know, and more will come back to you’.
And remember - only dead fish go with the flow.