Fish Food 670: AI versus human reasoning
How humans and AI learn, AI notetakers and company culture, Nano Banana, Opus four-point-something, a theory of dumb, and what comes after LLMs
This week’s provocation: The differences between how LLMs and humans learn
I’m not a technical AI expert but I have tried hard to learn more about how LLMs work, mostly so that I can better understand the role that AI can play, its potential and its (current) limitations, and how humans can best work with AI in a 1 + 1 = 3 kind of way. Listening to a Dave Snowden podcast on ‘Reclaiming Human Sense-Making in an Age of AI Worship’ this week gave me a number of new ways to think about the differences between humans and AI. If we can better understand these differences I think it helps us to define practices and approaches that get the most from each when we bring them together.
Dave is a complexity expert and the creator of the Cynefin framework (I’ve referenced his work in more than one of my books). Inspired by his points on this, I wanted to explore some of the fundamental differences in how LLMs and humans learn.
Inductive and abductive reasoning: LLMs, says Dave, use inductive reasoning, generalising or extrapolating from past training data to summarise, recognise patterns and predict future outcomes. Humans excel at abductive reasoning, working from incomplete information or observations to form the best available explanation, and making unexpected connections between apparently unrelated things through hunches and imagination. This gives humans greater flexibility in reasoning about complex, deeper, context-dependent problems.
Sample efficiency and handling novelty: LLMs typically require massive datasets and hundreds of thousands of past instances or labelled examples. Humans, however, can handle complete novelty without replicating known patterns. We can learn new concepts from one or a few examples through analogical reasoning (drawing parallels between two situations or systems to understand or solve a problem) and mental prototypes (imagining a representation of the most typical or ideal example of a category or concept). Without a huge dataset to inform its thinking, an LLM would need many, many training rounds to achieve a comparable level of understanding.
Processing scope and sensory integration: Humans can scan a small proportion of the available data and then simultaneously integrate multiple additional sensory inputs, including smell, sight, hearing and proprioception (the body’s ability to sense its position and movement), to aid understanding. Humans evolved to make first-fit pattern matches for rapid decision-making under uncertainty. This enabled us to better deal with life-threatening situations and make fast decisions about the best course of action. LLMs process language inputs and learn through statistical patterns, using probabilistic best-fit estimates to predict the next most likely word in a sequence. Whilst some aspects of how LLMs process language are similar to how the human brain does it, LLMs do not learn from the huge volume of sensory data in the way that humans can. They have no embodied cognition or learning.
Explicit and tacit knowledge: It is also generally accepted that less than 5% of human knowledge exists in an explicit or codifiable format (text, digital, searchable). Michael Polanyi’s 1966 book The Tacit Dimension established the principle that much of what humans know is tacit knowledge (or know-how), which is unwritten, unarticulated, and may even be deeply personal (he famously said ‘We know more than we can tell’). Whilst LLMs are remarkable, they can only access explicit, text-based knowledge, and miss out on the vast majority of knowledge, which is embodied, tacit, and context-dependent. LLMs learn through pattern recognition; humans learn and understand through experience.
Abstraction and the capacity for metaphor: Dave Snowden talks about how art came before language in the evolution of humans, and about how human-created art breaks us away from material things so that we see things through an abductive lens. This is why art is important to our capacity to innovate and think differently, to handle novelty, and to see things through abstract layers so that we spot novel or unexpected connections. Art helps us to look at what could be, without being bound by the past. This is, notes Dave, the opposite of making a probabilistic forecast based on training datasets. Dr Rachel Barr puts it nicely when she says that humans learn through a slow build-up of lived experience while AI learns rapidly from one big dataset, and this ultimately means that LLMs are not good at understanding semantic meaning in language.
Differences in memory architecture: When you recall a memory, your brain is not playing back a recording; it is actively reconstructing the experience each time, and neurochemicals (particularly proteins involved in memory reconsolidation) subtly alter the memory with each retrieval. This means that we literally cannot remember something exactly the same way twice. But rather than being a flaw, this serves several critical functions for us. Our ‘fuzzy’ memory helps us to establish general patterns and principles rather than storing exact playbacks. And it helps us to update and prune our memories based on relevance and significance (important, frequently accessed information stays vivid while irrelevant details fade). LLMs, on the other hand, struggle to generalise to handle genuinely novel situations in the way that humans can do intuitively. They retain everything equally, meaning that they can’t naturally distinguish between what matters and what doesn’t, and that noise can interfere when a contextual response is needed. Where humans update memories based on current reality, LLMs are frozen at their training cutoff, meaning that they cannot refresh their knowledge without significant intervention or retraining. LLMs may have perfect memory, but this actually limits their adaptive capability and contextual flexibility, which creates brittleness.
Goal complexity and motivation: Our brains are extraordinarily complex machines for survival optimisation, shaped by millions of years of evolution. This means that decisions pass through neural circuitry that encodes layers of competing motivations, from seeking social approval and avoiding failure to pursuing mastery and protecting our reputation. Humans pursue unassigned goals (like wanting to understand the difference between how humans and LLMs learn 😀) and develop passions and interests independently. We can hold persistent objectives that span decades, shaping thousands of decisions and producing coherent behaviour across different situations. We can experience genuine conflict when motivations collide (e.g. career vs family time). When an LLM helps you, it has no internal experience or motivation of wanting to be useful. It needs to be prompted.
Social cognition: Humans evolved for distributed decision-making in small groups with role-based responsibilities and a kind of ‘social scaffolding’ of knowledge. We are designed to think with others and we are excellent social learners, absorbing huge amounts of information through observation and imitation. LLMs process inputs individually and learn only from static training data, missing the dynamic, interactive quality of genuine social learning.
What this means for human/AI collaboration
Understanding these fundamental differences enables us to collaborate much more effectively with AI engines. LLMs can look like they have a deep understanding of a question, but of course what they are really optimised for is identifying patterns and predicting the next most probable word in a sequence to mimic human-generated text. They are trained to minimise the difference from their training data, meaning that, by design, they trend towards the average and the most probable.
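To make this concrete, here is a toy sketch in Python of how sampling temperature shapes next-word choice. The candidate words and logit scores are invented purely for illustration, but the softmax mechanics are standard: lower temperatures concentrate probability on the most likely continuation, which is part of why default settings tend to produce safe, average answers.

```python
import math

# Toy next-word logits for the prompt "The sky is ..."
# (the words and scores are invented purely for illustration)
logits = {"blue": 5.0, "clear": 3.5, "grey": 3.0, "falling": 0.5, "a metaphor": -1.0}

def softmax(scores, temperature=1.0):
    """Turn raw logit scores into a probability distribution.
    Lower temperatures sharpen the distribution towards the most probable word."""
    exps = {word: math.exp(s / temperature) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: v / total for word, v in exps.items()}

for t in (0.5, 1.0, 2.0):
    probs = softmax(logits, temperature=t)
    top = max(probs, key=probs.get)
    print(f"temperature={t}: '{top}' with probability {probs[top]:.2f}")
```

At a temperature of 0.5 the model picks ‘blue’ with near certainty; at 2.0 the probability mass flattens out across the alternatives. Either way, the mechanism is a probabilistic best fit to what it has seen before.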
This means that to get anything exceptional or different, humans have to push them to go beyond the average. For now, this means prompting in a more deliberate way: building context and goals directly into prompts rather than assuming that the AI understands what matters or why you’re asking; using cross-domain thinking when working with AI to disrupt norms; using metaphor or unusual analogies to help reframe a problem; or giving the AI specific roles to broaden its perspective.
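As a minimal sketch of what that deliberate approach might look like in practice (the function name, structure and example content below are all hypothetical, and the output is just a prompt string you could paste into any chat model), the discipline is in making role, context, goal and reframing explicit rather than leaving the model to default to the probable:

```python
def build_deliberate_prompt(role, context, goal, reframe=None):
    """Assemble a prompt that makes role, context and goal explicit,
    rather than assuming the model knows what matters or why you're asking."""
    parts = [
        f"You are {role}.",
        f"Context: {context}",
        f"Goal: {goal}",
    ]
    if reframe:
        # A metaphor or cross-domain analogy to push the model beyond the average answer
        parts.append(f"Reframe the problem as follows: {reframe}")
    parts.append("Challenge the most obvious answer before settling on one.")
    return "\n".join(parts)

print(build_deliberate_prompt(
    role="a veteran ecologist reviewing a business strategy",
    context="a retailer's loyalty scheme has flat engagement despite heavy discounting",
    goal="surface non-obvious reasons for the plateau, plus three experiments to run",
    reframe="treat the loyalty scheme as an ecosystem and ask which niche has collapsed",
))
```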
The most productive human/AI collaboration recognises what each brings distinctively: the abductive leaps, adaptive generalisation and contextual judgement that can only come from humans; the speed of information synthesis and exceptional pattern matching within known distributions that can only come from machines.
One final thought - a lot of these characteristics of LLMs relate to fundamental limitations that are inherent in how the models have been designed. When he resigned from Meta recently, Yann LeCun said that text-based language models are a dead end, and that the future will be shaped by ‘world models’ that primarily use visual learning and are more closely aligned to how humans learn. I’m quite fascinated by this thought and I’m aiming to write something about it in next week’s edition.
Rewind and catch up:
AI Transformation and ROI - Bottom-up trap, top-down fantasy
How to be Interested (Part Two)
In Praise of Working at the Edge
Photo by Tim Mossholder on Unsplash
If you do one thing this week…
Interesting research by Careful Industries on the risks to organisational culture and information quality of using AI notetakers: ‘One of the main reasons people found AI notetakers useful was as an antidote to an overwhelming culture of meetings, expectations of presenteeism, and unclear standards of information management. Our suggested answer to that is not a technological fix. Having fewer, better meetings will be more effective than transcribing every single one.’
Links of the week
Nano Banana Pro has been the latest thing to flood our feeds with ‘I just created…’ posts, but to be fair, it is a big leap forward, not only in image interpretation and rendering but also in much more sophisticated incorporation of text, persistent characters, and the ability to summarise outputs into infographics. Good short video from Will Francis here, and a useful guide from Ruben Hassid here
Another week, another model. After Gemini 3.0 we now have Claude Opus 4.5. It is even better than previous Anthropic models at coding and at handling complex, multi-step tasks that require sustained reasoning and big context windows, and it is also better at creating spreadsheets, slides, and documents. You can see that in action here
‘Today’s eye-popping valuations are based on the assumption that LLMs are the only game in town’. This week Gillian Tett wrote in the FT about what may come next after LLMs (sidenote: I’m going to look at this in next week’s edition)
James Marriott (who wrote the excellent but troubling essay on the ‘dawn of the post-literate society’) linked this week to Lane Brown’s cover story in New York Magazine, ‘A Theory of Dumb’ (sub reqd), which references evidence of an apparent reversal of the ‘Flynn Effect’, the long-term increase in average IQ scores across the world throughout the 20th century
Thought provoking read of the week, on ‘how we disappear chasing lives that aren’t ours’.
And finally…
TIME magazine have created a neat way to analyse and access over 100 years of their journalism using AI. Prompt the engine and it can show articles relating to opposing sides of an argument, or summarise themes, or turn articles into audio. Nicely done. (HT Dan Calladine)
Weeknotes
This week I was working with a client running sessions on critical thinking in the age of AI. We were looking at specific techniques for working with AI in ways that support high quality outcomes and decisions. The more I’ve thought about this (and the more I see of how people are using LLMs) the more I think that this is an unusually enlightened approach. I also did a trip up to Salford to work with the BBC (I love the BBC) on how they can use AI for audience understanding and marketing. Next week I’m really happy to be working with the strategy team at adam&eveDDB (whose work I admire) on how they can integrate AI into their planning processes.
Thanks for subscribing to and reading Only Dead Fish. It means a lot. This newsletter is 100% free to read so if you liked this episode please do like, share and pass it on.
If you’d like more from me my blog is over here and my personal site is here, and do get in touch if you’d like me to give a talk to your team or talk about working together.
My favourite quote captures what I try to do every day, and it’s from renowned Creative Director Paul Arden: ‘Do not covet your ideas. Give away all you know, and more will come back to you’.
And remember - only dead fish swim with the stream.