Arkaro Insights

AI & the Octopus Organization with Stephen Wunker

Mark Blackwell Episode 36

What if the smartest way to lead through AI isn’t more tech, but a new anatomy for your organisation? We bring on innovation strategist and author Stephen Wunker to unpack a vivid blueprint inspired by an unlikely teacher: the octopus. Instead of sprinkling “AI fairy dust” on old processes, we dig into how distributed intelligence, clear guardrails, and human-centred leadership can turn a mass-disruption moment into a durable advantage.

Stephen contrasts the adaptable octopus with the heavily armoured ammonite to show why resilience beats rigidity when the environment shifts. We explore nine brains (distributed decision-making connected by a nerve ring), three hearts (analytic, agile, and aligned rhythms), and RNA editing (managers as adaptive translators who can’t rewrite DNA but can change what teams do today). The ideas get practical fast: Stripe’s AI frees fraud teams to focus on fascinating edge cases; HelloFresh rewires customer choice, production planning, and sourcing with one data-driven nervous system; and Amazon’s long-standing principles and open data flows illustrate how a giant moves fast without flying apart.

You’ll hear how to set the “fence” lines that actually expand creativity, why explore-and-exploit must be a daily discipline, and how to build a phase plan that links customer promise, operations, platforms, and talent. We challenge senior leaders to revisit first principles, invite middle managers to become stewards and coaches, and show early-career talent how to “dance” with AI—questioning, prompting, and integrating context to deliver better judgement than either human or machine alone.

Ready to become the octopus, not the armour? Follow the show, share this conversation with a colleague who’s shaping AI strategy, and leave a quick review with your biggest takeaway—what boundary will you set to unlock more intelligent freedom?

Send your thoughts to Arkaro

Connect with Arkaro:

🔗 Follow us on LinkedIn:
Arkaro Company Page: https://www.linkedin.com/company/arkaro
Mark Blackwell: https://www.linkedin.com/in/markrblackwell/
Newsletter - Arkaro Insights: https://www.linkedin.com/newsletters/arkaro-insights-6924308904973631488/

🌐 Visit our website: www.arkaro.com

📺 Subscribe to our YouTube channel: www.youtube.com/@arkaro

📧 For business enquiries: mark@arkaro.com

Stephen Wunker:

This is a real danger, Mark, about the transition to AI. This is going to be the greatest economic work practice revolution in our lifetimes. There's going to be a lot of dislocation in the economy and probably within corporations as well. We have to attend to the emotional side of that, to keep people aligned in certain values, to understand what's important, to be guided by certain principles. If we're not, if we start feeling like automatons in the service of AI, we're going to become worse at our work, we're going to be unhappier in our lives. Everybody's going to lose.

Mark Blackwell:

Welcome to the Arkaro Insights podcast. Our mission is to inform and equip business executives to manage change and innovation and make sense of complexity to succeed in an adaptive world. And today we have a guest who can take us a long way along this journey. Stephen Wunker will be speaking on his new book, AI and the Octopus Organization. His talk is about how the octopus serves as a model for how organizations can get the most benefit from AI. Wunker was a longtime colleague of innovation thought leader Clayton Christensen, led the development of one of the world's first smartphones, and created pioneering companies in mobile marketing and commerce. He now runs New Markets Advisors, where he consults on innovation strategy and capabilities for organizations such as Microsoft, Meta, Nike, and the World Bank. He has written over 100 articles for publications including Forbes, the Harvard Business Review, and the Financial Times. Wunker has published five critically acclaimed books on innovation: Capturing New Markets, Jobs to Be Done, Costovation, The Innovative Leader, and his latest, AI and the Octopus Organization. He has degrees from Princeton, Columbia, and Harvard Business School. Good morning, Stephen. Welcome to the Arkaro Insights Podcast. It's really good to have you on board. Thank you so much. Well, thank you for having me. Well, it's a great pleasure. I mean, I have to say, the minute I read your book and got the ideas of what you were trying to convey, I just said, how on earth do I persuade Stephen to come on board? Because this is exactly the thing that's fascinating me at the moment. So having someone who's really thought about it is just brilliant. For those that haven't read the book, we're going to talk today about AI and the Octopus Organization.
And Stephen creates this fabulous metaphor of the octopus, and there's another creature that I'm going to let him tell us about in a minute, which really gives us some insight, comparing and contrasting old businesses versus new businesses and what needs to happen to be successful in the future. So let me not steal your thunder. Stephen, tell me about this book.

Stephen Wunker:

This book is not a technology book. Don't be scared by AI and the title. It is about the effects of technology and what we need to do to get the most out of technology. So there are a lot of companies out there, I'd say most companies I've seen, that when they use AI, they're sort of sprinkling a little AI dust on top of their current practices. They're adding a feature here or there, maybe a chat bot, and that's fine, but that is not going to be transformative. Whereas AI is the most transformative technology we are probably going to see in our lifetime. And so we need to think about the operations of the organization and the ways we manage to really get the most out of the technology. So the book is both a call to do that as well as a look inside corporations that are doing that today, and hopefully a playbook for how you can do that in your organization.

Mark Blackwell:

I mean, I get it. I mean, I'm excited by AI. I can see some of its challenges, but I'm excited. But you create an analogy of AI to, if I'm right, an asteroid hitting the earth.

Stephen Wunker:

What on earth is that about? The meteor that killed the dinosaurs was a really asymmetric threat. Species had evolved over millennia to deal with the known predators, right? The giant dinosaur fish in the ancient seas. And most of those species died when the meteor struck, because it was just such a different sort of threat that they were not prepared for. And yet the octopus, this soft-bodied, rather defenseless creature, was able not only to survive but to thrive as all of the pre-existing competitors were wiped out in this mass extinction event. It is super adaptable, which is one thing we can learn from it. We can also learn from its anatomy. But we see AI today as one of those mass extinction events. It is going to change so much about marketplaces and the way companies do business and the way customers buy that if you're not ahead of that curve, you're probably going to be swamped by the wave that it creates.

Mark Blackwell:

So we're going to talk a lot more about octopuses. Or octopi, should I say?

Stephen Wunker:

Octopuses, it is a Greek, not a Latin term.

Mark Blackwell:

And so thank you for that erudition. That's great. So octopuses. But there's another creature that you've sort of opposed to it, and that is the ammonite. Why did you select the ammonite, and what does it represent?

Stephen Wunker:

Well, I will admit the ammonite was suggested to us by ChatGPT, because we were looking for a foil. And I had never heard of the ammonite, but ChatGPT had. So the ammonites were plentiful in the ancient seas. There were actually over 10,000 species of ammonites. And they're these very tightly coiled, thickly armored, nautilus-like animals that could range from a few inches to several feet in diameter. They were everywhere. And they had these great defenses against the known predators. However, those defenses, those giant shells, actually became a prison when the meteor struck, because the meteor kicked up this humongous cloud of dust that covered the earth for three years. There was a lot of sulfur in that dust. The sulfur created acid rain, and the acid rain dissolved the shells of the young ammonites. So they were all very quickly wiped out. A lot of corporations are like the ammonites. They have very tightly armored, thickly armored defenses against the known predators. And then when the meteor strikes, the asymmetric threat, they're utterly unprepared to deal with the fallout.

Mark Blackwell:

Gotcha. So I'm hearing that these defenses were honed over years of MBA practice in the 20th century, but they're just not suitable for what's going to come. So I think we've long been talking about adaptive organizations. But your story is that AI is the catalyst for something that we've been feeling the need for, but that is suddenly going to come.

Stephen Wunker:

That's 100% right. The Internet wiped out some organizations. That's probably the closest thing in our lifetime to one of these meteor strikes. But AI has a much greater transformative potential than the internet. And so the mass extinction event is quite a threat, but it's also an opportunity for the octopuses out there as their known rivals get knocked out. There is a lot of additional room to grow for those who are prepared to adapt. Gotcha.

Mark Blackwell:

So I didn't tell you earlier on, but earlier in my life I did train as a veterinarian and finished my studies. So that's why I was intrigued by the way you took the anatomy and the neurology and the physiology of the octopus to extend the metaphor further. And we're going to cover four ideas. The idea that there are nine brains, effectively, in an octopus; that there is a neural network connecting these, which is, you know, more than a simple nervous system, and we can explore that later on. Not one, not two, but three hearts, which is quite special. And then fascinating tricks that they are able to do with proteins and RNA. And we're going to cover that shortly. But where do you want to kick off first, Stephen, in terms of this anatomical exploration of the octopus?

Stephen Wunker:

Let's start with the nine brains. Look, an octopus has this alien-seeming anatomy to us humans. It's so different from the creatures that we're familiar with, partly because it's so prehistoric. It just evolved in a very different time. So an octopus has one central brain in its head, but actually two-thirds of the neurons of an octopus are not in its head. They're distributed throughout the body. And each arm has a neural cluster that operates as a sort of mini-brain. So effectively there are nine brains, and the arms' brains are all connected by a nerve ring. So one arm can be opening a clam, and another is exploring a cave, and they can do this independently, communicating with each other without needing to involve the central brain, which is worried about how do I get away from that shark. Yep. This enables massively parallel operations while also having synchrony and coordination. And it's a wonderful analogy for how AI can distribute intelligence throughout an organization without causing chaos. It can still be highly coordinated. And yet it enables this parallelism of action that allows organizations to be much more fast-moving, much more responsive to their external environments than ever before.

Mark Blackwell:

And then we can tell the story about what it might mean to an organization. Because there are so many connections, I think, for me at least, between the ideas. So we've got one idea of distributed intelligence, which I think we can sense from what we're seeing in AI. You've got the neural network that's teaching us something, I think, about middle managers. Maybe we could explore that later. Three hearts.

Stephen Wunker:

What are we getting at there? So an octopus has three hearts, and they serve different purposes. One of the remarkable things about an octopus is that it can actually give itself a heart attack. It can stop one of the hearts in order to direct energy to the other hearts that are urgently needed for something, like escaping from a predator. And so it has these distinct rhythms for different purposes. And organizations need to develop something similar: different rhythms for different purposes. We call them the analytic, the agile, and the aligned hearts. Gotcha. Usually corporations are quite good at the analytic heart. The bigger enterprises aren't so great at being agile, the agile heart. And then the aligned heart is essential given the amount of change that's happening with the advent of AI, and other phenomena as well that are occurring in the environment and the economy. So having all those three operating together, for different purposes, for different reasons, at different times, that's really important.

Mark Blackwell:

I think you're actually right. I just want to challenge you a little bit, you know, from where my perspective of the world is. This odd anatomy of hearts and neural tissue is not as odd as we might think. In humans, do you know that we've got a thing called the vagus nerve? It's an enormous nerve that, when I was a student, they didn't really know what it did, but now there's so much science coming out about gut-brain communication and the neural network around the heart. So it's not completely bizarre to think about heartfelt leadership as a neuronal event. It's not completely bizarre to think about gut feelings, and, you know, "I'll think about that overnight." It's not an irrational thought, given the amount of communication from the gut to the brain and how we process information. So I resonated well with the different types of hearts giving communication. And maybe we can explore this later, but I think you're right about corporations being very analytical. One of the other things about large corporations, as well as not being so agile, is that they're not necessarily the best place for heartfelt leadership, giving heartfelt inspiration and trust. That's something we can explore about what leaders might need to do in the future as well.

Stephen Wunker:

Yeah. So, you know, this is a real danger, Mark, about the transition to AI. This is going to be the greatest economic work practice revolution in our lifetimes. There's going to be a lot of dislocation in the economy and probably within corporations as well. We have to attend to the emotional side of that, to keep people aligned in certain values, to understand what's important, to be guided by certain principles. If we're not, if we start feeling like automatons in the service of AI, we're going to become worse at our work, we're going to be unhappier in our lives, everybody's going to lose. So alignment is really essential.

Mark Blackwell:

I'd say alignment is table stakes. Think about it: to make AI successful, we have to be honest about what we're good at and honest about what we're not good at. So let AI take away the boring, tedious stuff, freeing us up to be creative, which is emotional. If we can have leaders switch us on to that task and take us from 20, 30, 40% of our lives being creative to 80% of our lives being creative, what an opportunity that creates. If leaders can step up to the challenge.

Stephen Wunker:

We use examples in the book of how AI actually makes work more fulfilling. So look at Stripe, the payments company. They have an AI-enabled suite of products for online checkout that presents the forms of payment that they think customers are going to value most, which will lead to conversion from browser to buyer on a website while also minimizing fraud. Now, what that means for the people in the fraud prevention team is that rather than auditing transaction after transaction in an extremely boring manner, their attention can now focus on the edge cases or the highest-risk cases. So they are challenged in their jobs. They're using the full suite of their skills in a way that makes them more fulfilled in what they do and also makes them more productive for the organization. That's the promise of AI for the front line. It should help people be more fulfilled, not make them feel like automatons.

Mark Blackwell:

With that mindset, let's embrace it and be excited about the future. One more, I think, and I'm going to circle back to all four in a minute, but there was one bit of the anatomy, or not so much anatomy, but the protein production system and how the RNA works in an octopus. Why did you include that in the book?

Stephen Wunker:

This is actually how we came upon the octopus analogy. My colleague Jonathan Brill, who's the futurist at Amazon, had been using this story for a couple of years in his talks about how most of us are governed by our DNA, right? We humans, we're governed by our DNA. There's not much we can do about that. And we can change RNA a little bit, but it's a very small set of degrees of freedom and it takes a long time. Whereas an octopus and other creatures in the cephalopod family are able to edit their RNA in a matter of hours. So you can take an Antarctic octopus and put it into the Caribbean, and it'll be just fine, because it can change that RNA. So I'll take us back to high school biology for a moment. DNA is inherited from our parents, and it changes very slowly, over millennia. RNA is the molecule inside cells that carries instructions from the DNA to the rest of the cell. So it creates this sort of messenger function. And that's akin to managers in organizations. There's not a lot they can do about their DNA, right? The culture, the legacy of fixed assets, that's going to change very slowly. But the managers can translate from that into the actions of people in their teams. Right? So they can edit the RNA. They can be the RNA of their organizations. And we need to do that like an octopus. This is how the octopus was able to survive the meteor strike, because it could edit its RNA. So similarly, we need to do the same, recognizing there's stuff we can't change. We can't change the DNA very much, but we sure can change our management actions and the instructions and encouragement we give to our teams.

Mark Blackwell:

And so in the real world, what does that mean? What are the different behaviors that middle managers should be doing?

Stephen Wunker:

So, for one, they need to get people onto common AI platforms. If half the people are on it and half aren't, then they're using it for different purposes. Then it's this very disconnected set of upgrades here and there. It doesn't create a systemic change. They need to have a clear vision about where they're headed. So, what are the business purposes of using AI and other technological tools? Yeah. What is their organization going to look like? What are the common practices? What are going to be the metrics that assess whether they're actually making this sort of progress or not? What changes in behavior do people need encouragement along? Because everybody is going to need to be a change manager. That's a set of what these middle managers need to be doing. They need to become coaches, they need to become stewards of AI. And they also have to keep thinking critically, to not just be governed by AI but use it as one input among many. AI is not going to capture everything from a context. So they make sure that the inputs of AI are united with other inputs to actually make the best decisions.

Mark Blackwell:

You know this, but for the benefit of the listeners: in one survey, 80% of business strategies have AI as core to the business strategy. Yet in those same businesses, only 15% of the employees believe it. They think it's just baloney. Another common problem: people aren't buying in because they just see it as the shiny new toy, which is bought in because it's technically interesting. Maybe, you know, FOMO, not wanting to be seen to be missing out on AI, but not really alive to solving problems on the ground in workflow processes, and certainly not totally connected with the business. So what you're saying about alignment of the brain and then the translation role of middle managers sounds to be spot on from what I'm hearing elsewhere. Absolutely.

Stephen Wunker:

I think people intuitively get that AI is a major transition. And a lot of boards get this, and there's pressure on top leadership to make a transition, but it is often being thought about in technological terms. Of course, technology matters, but the technology is also moving so fast that if we focus on that, we're missing the broader picture of how the organization needs to transform much more holistically than just layering on technology. There are many managers that get that too, I think, but there has not been a way to look upon that overall transition in a thorough, holistic sort of manner. And I think the octopus model does that.

Mark Blackwell:

Echoing words that people like Osterwalder use a lot: exploit and explore. I mean, managers are typically exploiting and optimizing. Most managers in large corporations, except for the few involved in innovation or new business development, are not focused on the exploring. And that's one of the transitions it sounds like we're going to have to make as well to get us through.

Stephen Wunker:

The world is changing so fast with AI, and other sorts of macroeconomic shocks too, right? If we're not constantly exploring, if we're not developing growth options, then we are highly exposed, because we're acting as if the world will behave like it did over the past 50 years. I think one thing that is for certain is that the world is not going to act like it has over the past 50 years. So conventional wisdom is an extremely hazardous thing in unconventional times. We have to keep exploring and developing our options.

Mark Blackwell:

So if I can start bringing these four ideas together: we've got the heartfelt leadership, the alignment of the organization, an agile part of the organization. We've got connectedness and the RNA of the middle managers acting as coaches, bringing it together. We have distributed decision-making, with leaders pushing down decisions, or enabling decision-making, which may be a better way of putting it. What type of organizations are anywhere near that at the moment, so that we can think about this for real?

Stephen Wunker:

So let's look at HelloFresh. They are the world's largest meal kit delivery company. Now, they had a boom time in the pandemic as people wanted to cook at home, but then things sort of tapered off after the pandemic and they needed something new. So they decided to lean really heavily into AI. And this worked wonders throughout the organization. It started with the customers. So customers had traditionally had this thick set of options to choose from for their meals for the month. But HelloFresh actually knew that you usually substituted chicken for pork, and that I like things a little bit spicier. They knew people's personal profiles based on what they chose and then how they rated meals afterwards. And so they're now able to give people hyper-tailored suggestions for what they would really love. Great. So that's a change in the customer value proposition. But this also meant that operations had to change. No longer would there be these long production runs of particular products. The kitchens, which are huge industrial kitchens, have to create hyper-tailored meals basically for each person. You have this almost infinite variety of meals. And that meant that the job of a middle manager in operations changed too. No longer were they creating the Excel spreadsheets with these long production runs. Operations became so complex that only AI could do the production planning. And then that affected sourcing in terms of the ingredients. So everything had to be connected through a neural necklace, from the customer value proposition and what people actually chose, to the operations, to the sourcing. AI had to be responsible for that. And then the middle managers had to be the stewards of the model. They needed to make sure that the choices that were being made made sense. And then they had to shepherd their workforces through a massive change in everything that was done. And that's been a great success for them.
They really leaned hard into AI and they reinvented how their organization worked.

Mark Blackwell:

That's amazing. And I'm guessing that one of the opportunities is for someone in the organization to be more creative in selecting menus and coming up with new dishes and the like, making things that AI couldn't necessarily do. As you look at it, you know, how long do you think this transition is going to be? What types of industries are going to be most amenable to change? Where are the biggest challenges going to be, do you think?

Stephen Wunker:

So, as I said, my co-author Jonathan is the futurist at Amazon. He estimates that by roughly 2030 there is going to be a 30-times improvement in the quality of AI outputs, while the cost per token, per computing unit, will decline on the order of around 100,000 times. That's a humongous change. It is a faster rate of change than almost anything we've seen in our lifetimes. So call it five years. And actually, the technology is going to progress a lot faster than the organizations will, on average. But as with the internet age, there are going to be a few organizations, there are going to be the Amazons out there, that get it. And they're going to move very fast and take advantage of this. So look, I started consulting in the late 90s at a major strategy firm. And I saw this evolution from 1996, 97, "oh, let's go put our brochure on a website," to 98, 99, "the sky is falling, and we have to do something massive right now." Neither of those is healthy. There needs to be a phased transition that you plan out. Sure, there needs to be some agility and adjustment in the plan, but you need to get started now. We can't wait until the equivalent of 1998 and 99 and then do humongous, incredibly risky things because some competitor is the wolf at the door.

Mark Blackwell:

So I'm guessing that Amazon is moving ahead in this type of area. Well, obviously, because your colleague works there. So is there anything we can learn from what Amazon does in their work practices to help us think about how this can happen?

Stephen Wunker:

So I'll give you two. And these actually came from Jeff Bezos in the early days of Amazon. He insisted, back 25 years ago, that all Amazon services had to be able to talk, technologically speaking, to each other. So there were no closed sets of data. Everything had to be open and flowing as data around the organization. That's really important. AI is only as good as the data that it uses. So ensuring that that data is commonly available is quite important. And then secondly, he enumerated a list of principles. There are now 16 of them, and they govern how the one and a half million Amazon employees operate. Things like "it's always day one." So that creates cohesion in this enormous enterprise that could very quickly lose that sort of coherence. Principles are what actually enable it to operate both with great agility, but also with cohesion and purpose. So it's much more than the sum of the parts.

Mark Blackwell:

Now that's another great topic of mine. The idea of rules and emergence is counterintuitive, but absolutely true. The greatest creative force we've ever seen is probably evolution.

Stephen Wunker:

Let me give you an example from the book of some research we cited on playgrounds. So if you put a bunch of kids on an open playground, they will tend to cluster towards the big sort of play sets at the center, the slide, the swings, things like that. However, if you build a fence around the playground, the kids are going to use all of the playground. They'll go right up to the fence. There's always an implicit fence in people's minds. But if people don't know where the fence is, they tend to stick to the center of the playground. Usually in organizations, that fence is actually further out than a lot of the staff think it is. By setting clear boundaries around the use of AI, around strategies, around the degrees of organizational change, whatever it might be, you actually enable a tremendous amount of freedom, just like for those kids on the playground.

Mark Blackwell:

So right. I don't know if you're agreeing with me, but I think one of the worst things you can say in an ideation session is "blue sky thinking" or "out-of-the-box thinking." Innovation comes from rules and constraints. It's a paradox to many, but evolution shows it to us again and again. Bezos seems to have 16 high-value rules in his organization, but once those constraints are put in, innovation clearly happens. And it's impossible to argue with the Amazon story and, you know, where things come from. Like cloud computing: other people would have seen that as a byproduct, and yet Amazon makes it a core revenue center. So as we're beginning to come to the end, what advice would you give three groups: business leaders, current middle managers, and people entering the workforce right about now? What would be a couple of takeaways from this podcast to think about?

Stephen Wunker:

Senior leaders need to lean in and champion adaptation, and they need to fundamentally rethink what they do. What is your company's business? How does it make money? How does it compete? What do customers value? How do you operate? All of those things need to be questioned and rethought in the AI age. Middle management needs to lean into new skills. They need to become change managers, even if they're in a super-tactical function. They need to be change managers, and they also need to embrace critical thinking: to use AI outputs, sure, but to unite that with all sorts of other contextual cues to help make great decisions, and then coach their teams to do this as well. They are no longer going to be mere information processors. AI does that quite well. But that doesn't mean their job is going away; it just changes. Now, for people entering the workforce: if they can dance with AI, not do things on their own and not just rely on AI, this is more of an elegant two-person dance, or person-and-machine dance, where the whole is much greater than the sum of the parts. If you can coax AI to give you great outputs, question it, think about what are really the important questions to ask, and unite its answers with other sorts of cues, just as middle managers should be doing, then these entry-level people can be supercharged in their capabilities. There may well be fewer of these entry-level people, but they can do much, much more if they can get the most out of AI.

Mark Blackwell:

Thank you, Stephen. Thank you very much. I trust you've touched a few heartstrings and inspired our listeners. I really enjoyed our time together. Thank you very much. And I just can't wait until you tell me what your next great book is. By the way, I should have mentioned this at the beginning: I first met Stephen because of Jobs to Be Done. He was the first person who introduced me to Jobs to Be Done, and he's taken us all the way through to this, so I am sure we're going to have another greatly inspiring book in a few years' time, if not sooner. Thank you again, Stephen. Really enjoyed it. Thank you, Mark. I appreciate it. Goodbye.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

Just Great People

The Sixsess Consultancy