Fish Food: Episode 642 - Decoding the value of AI
Prioritising where the value is in AI application, sycophantic ChatGPT, cognitive debt, Zuckerberg on why we don't need agencies, and the greatest Karaoke songs of all time.
This week’s provocation: Where’s the real AI value?
When it comes to developing a strategy for AI implementation there seems to be a lot of rabbits in a lot of headlights right now. How do you make sense of all the potential that AI brings to so many areas of the business? How do you understand where the real value is? The myriad possibilities are almost paralysing to consider.
I don’t claim to have all the answers to these pretty significant questions (unlike some of the LinkedIn gurus) but there’s a really useful way of thinking about this which has worked well for me in my work with leadership teams. Simplistic value vs impact frameworks are usually not that helpful. Value to whom? What does impact actually mean? How do you even know how to measure these things?
But there is one model which I think is particularly useful in helping us to make better strategic decisions about AI, and avoid some of the pitfalls and assumptions that often get embedded without us even realising it. The Desirability, Viability, Feasibility (DVF) framework has been around since the early 2000s. Popularised by IDEO, it was conceived as a filter or lens through which new product ideas, services, or features could be evaluated to determine whether they are worth pursuing. It was born from Design Thinking, and I think part of the reason I like it so much in the context of AI is that it applies a human-centred approach to solving potentially big problems and evaluating value. It aligns business and customer needs.
Desirability - is it a solution to a human problem that is worth solving?
Desirability can relate to solutions for employees as much as it does to services for customers. AI is most effective when it augments human decision-making, or enhances a user experience, or alleviates a pain point that people actually care about. It guards against developing AI solutions simply because the technology exists, without a clear understanding of the purpose and user benefit of the idea. This becomes crucial for adoption and acceptance.
Questions here may relate to whether the solution addresses a real need, or whether it aligns with values, or meets and exceeds expectations, or (in the case of product innovation) whether there is a market for it. It’s very easy to get carried away with the potential benefits of AI to the business, but if we don’t work back from real customer needs or real employee needs, the risks of baking in assumptions and being ‘dangerously efficient’ are significant. Maybe AI isn’t the right solution to this challenge at all. As an example of this, I once ran a session with a leadership team where the exam question they wanted to answer was how they could apply AI to enhance a particularly important customer journey. The trouble is that in framing the question in that way they were already making an assumption that AI was the answer. Maybe it is, but equally, maybe it’s not. Desirability forces a focus on the relevance to humans, whether they be staff, customers or key suppliers.
Viability - does it create or protect real value?
Viability examines both direct returns (revenue, efficiency) and strategic returns (competitive advantage, risk reduction, learning). This considers what the business will gain. It asks the question about whether the AI initiative is aligned to business goals, profitability and potential, and whether it’s sustainable to maintain and improve. Viability can compound over time, so it may be important to consider longer-term strategic benefits and not just short term efficiency gains.
Another example: working with another leadership team in a recent session, some of the team began to talk about the different ways in which AI could enhance their customer experience, and the CEO made the (very good) point that before starting to originate ideas about AI the team actually needed to align around what their vision was for customer experience. Again, it’s very easy to do things with AI that look and sound great but which are not generating real value for the business.
Feasibility - can we build and scale it successfully?
Feasibility looks at technical and operational aspects, considering not just whether the tech can be built, but whether it should be built given the current picture of talent, skills, data infrastructure and technology. Questions focus on capabilities, resources, technical maturity, timing, constraints, scalability, and build, buy or bolt on.
DVF is rooted in an understanding of how it benefits users, not just the business. It can be applied at multiple stages of a design, innovation or strategic process to reevaluate, realign and refine. Early on, it helps filter broad concepts to inform a strategy, and later it can assess the value of more specific use cases or detailed opportunities. It provides a structured way to move beyond the hype and look at the opportunities from multiple angles, not just the efficiencies or benefits that AI can bring to the business. It can be an excellent way to develop a common understanding of value, supporting better cross-functional working, fewer competing agendas, and greater agility. It stops AI being a solution in search of a problem.
If you do one thing this week…
‘Go to the main page of Wikipedia…Click any link that leads to a Wikipedia entry. For that page, click the first hyperlinked word in the main text. Keep doing it…Now, try this for several pages. All of them will lead you to the Wikipedia entry on Philosophy’.
I tried this and it’s true.
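If you fancy seeing how the walk behaves, it can be sketched in a few lines of code. Below is a toy simulation rather than a real scraper: the first-link map is hand-made and illustrative (the page names and links are my assumptions, not data pulled from Wikipedia), but it captures the idea of repeatedly following the first link until you land on Philosophy.

```python
# Toy simulation of Wikipedia's "getting to Philosophy" effect.
# The first-link map below is illustrative, not real scraped data.
FIRST_LINK = {
    "Karaoke": "Singing",
    "Singing": "Music",
    "Music": "Art",
    "Art": "Culture",
    "Culture": "Society",
    "Society": "Human",
    "Human": "Philosophy",
}

def first_link_walk(start, target="Philosophy", max_hops=100):
    """Follow the first link from each page until the target is reached,
    giving up after max_hops to avoid looping forever on a cycle."""
    path = [start]
    page = start
    for _ in range(max_hops):
        if page == target:
            return path
        page = FIRST_LINK.get(page)
        if page is None:  # dead end: no first link recorded for this page
            return path
        path.append(page)
    return path

print(first_link_walk("Karaoke"))
```

A real version would fetch each article and parse out the first link in the main body text, but the convergent structure of the walk is the same.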
This essay by Utsav Mamoria on how to lead an intellectually rich life is quite possibly the best thing that I’ve read all year. I loved the message, the stories, and the way that Utsav tells them. I liked that he described how, in a world of information overload, we’re all falling into Epistemic Anxiety (’the feeling of uneasiness, tension, and concern when you want to know the truth. You worry that your knowledge is incomplete and full of errors, and you may believe it…You either lack the resources, methods or agency to get to the truth’). I loved his story about Dorothy Hodgkin - the woman who saved a billion lives - and the lesson about relentless curiosity. I liked his thoughts about documenting your intellectual journey (perhaps this is what this Substack is for me). So much I could pick out.
Well worth spending the time. (HT Josephine Beauvoir)
Links of the week
OpenAI had to roll back a ChatGPT 4o update this week because users said it was too ‘sycophantic’. Which raises interesting questions going forwards about the personality of AI chatbots. Plus ChatGPT is now getting better for shopping, with experiments around showing product cards for certain queries and rumours of a big Shopify tie-up.
‘Cognitive Debt is where you forgo the thinking in order just to get the answers, but have no real idea of why the answers are what they are.’ In last week’s edition I mentioned John Willshire’s memorable articulation of a phenomenon that I’ve been writing about for a while, and he’s written more about it here. I’ve yet to come across a better way of phrasing one of the big hidden risks of AI.
Very related to how deeply we think about stuff, I liked this Nabeel Qureshi post about not being willing to accept answers that you don’t understand: ‘Intelligent people simply aren’t willing to accept answers that they don’t understand - no matter how many other people try to convince them of it, or how many other people believe it, if they aren’t able to convince themselves of it, they won’t accept it.’
‘Connect your bank account, you don’t need any creative, you don’t need any targeting, you don’t need any measurement, except to be able to read the results that we spit out’. Mark Zuckerberg thinks that Meta can take over everything that agencies do.
Anthropic have released a new report on effective coding for AI agents which is quite dense, but there’s a useful TL;DR summary here which actually captures some good principles for setting up agents well (‘Agent design ≠ just prompting’, ‘memory is architecture’ etc).
And finally…
Something we didn't know we needed but is strangely fascinating nonetheless is Daniel Parris’ statistical analysis of the greatest Karaoke songs of all time. No sign of my favourite though - You’ve Lost That Lovin’ Feelin’ (sorry Righteous Brothers).
Photo by Kane Reinholdtsen on Unsplash
Weeknotes
This week I was mostly out in the Middle East again working with my banking client out there, but right now I’m with some good friends in Suffolk cycling up and down the non-existent hills (10 years ago we used to go to the Alps for our trips but we’ve gone to progressively flatter destinations - a sure sign that we’re all getting older if ever there was one).
Thanks for subscribing to and reading Only Dead Fish. It means a lot. This newsletter is 100% free to read so if you liked this episode please do like, share and pass it on.
If you’d like more from me my blog is over here and my personal site is here, and do get in touch if you’d like me to give a talk to your team or talk about working together.
My favourite quote captures what I try to do every day, and it’s from renowned Creative Director Paul Arden: ‘Do not covet your ideas. Give away all you know, and more will come back to you’.
And remember - only dead fish go with the flow.