Arkaro Insights
Arkaro Insights provides B2B executives with tools and techniques to thrive in a complex, adaptive world.
About Arkaro
Arkaro is a B2B consultancy specialising in Strategy, Innovation Process, Product Management, Commercial Excellence & Business Development, and Integrated Business Management. With industry expertise across Agriculture, Food, and Chemicals, Arkaro's team combines practical business experience with formal consultancy training to deliver impactful solutions.
You may have the ability to lead these transformations with your team, but time constraints can often be a challenge. Arkaro takes a collaborative 'do it with you' approach, working closely with clients to leave behind sustainable, value-generating solutions—not just a slide deck.
"We don't just coach - we get on the pitch with you"
Connect With Us
💬 We'd love to hear from you! What topics would you like us to explore in future podcast episodes? Drop us a message or connect with us to learn more about Arkaro's approach.
🔗 Visit us at www.arkaro.com
👥 Follow our updates: Arkaro on LinkedIn - https://www.linkedin.com/company/arkaro/
📧 Email us at: mark@arkaro.com
Arkaro Insights
Rewire or Retire: Why AI is a Leadership Issue, Not a Technology Problem
You cannot just be a leader that takes a can of digital paint and paints over the analogue cracks of your business. It will not survive the disruption that's coming.
In this episode, Marco Ryan, former Chief Digital Officer at BP and co-author of Rewire or Retire: AI for Leaders, challenges us to rethink how we lead in an AI-driven world. Marco argues that AI isn't fundamentally a technology issue—it's a leadership issue. And the choice facing every leader is clear: rewire your approach or retire gracefully to let others lead.
We explore why most executives are "nearsighted" when it comes to AI, how to find the AI whisperers already in your organisation, and why wisdom and judgment don't always sit at the top of the table. Marco shares practical advice on using AI as a "strategy buddy" and explains why digital curiosity—asking "what if?"—matters more than technical expertise.
Whether you're overwhelmed by AI or just getting started, this conversation offers a clear-eyed look at what leadership demands in the age of artificial intelligence.
If this conversation pushed your thinking, subscribe, share it with a colleague who’s wrestling with AI strategy, and leave a quick review to help others find the show. Then tell us on LinkedIn: where will you rewire first?
Marco Ryan is a Non-Executive Director, author, and former Chief Digital Officer at BP with over 30 years of experience in digital transformation and board-level leadership. He has held senior roles at Wärtsilä, Thomas Cook, and Accenture, and is currently Cyber Leader in Residence at Lancaster University Management School. Marco is the co-author (with Alastair Lechler) of Rewire or Retire: AI for Leaders and 51 Essential AI Terms for Leaders.
Links & Resources
- Marco's website: marcoryan.com
- Book: Rewire or Retire: AI for Leaders – available on Amazon
Connect with Arkaro:
🔗 Follow us on LinkedIn:
Arkaro Company Page: https://www.linkedin.com/company/arkaro
Mark Blackwell: https://www.linkedin.com/in/markrblackwell/
Newsletter - Arkaro Insights: https://www.linkedin.com/newsletters/arkaro-insights-6924308904973631488/
🌐 Visit our website: www.arkaro.com
📺 Subscribe to our YouTube channel: www.youtube.com/@arkaro
Audio Podcast: https://arkaroinsights.buzzsprout.com
📧 For business enquiries: mark@arkaro.com
The challenge, I think, for people is to realise that AI is forcing change on your sector, on your organisation, and on you, the individual. Therefore this is as much about change management, and therefore about people and mindset, as it is about technology. So the first thing is, it is no good at the top of the organisation if you don't understand the ramifications, the consequences, of that disruption. You cannot just be a leader that takes a can of digital paint and paints over the analogue cracks of your business. It will not survive the disruption that's coming.
Mark Blackwell:This is Mark Blackwell. Welcome back to the Arkaro Insights podcast. This is the show where we help B2B executives thrive in a complex, adaptive world. I'm your host, and today we are looking directly into the eye of what our guest calls the AI tsunami. To help us navigate these waters, I'm joined by Marco Ryan. Marco is a true veteran of the digital frontline with over 30 years of experience. He's held the reins as a global Chief Digital Officer, leading one of the world's largest digital resets, and has served in senior leadership roles at Wärtsilä, Thomas Cook, and Accenture. He's a seasoned board member and digital strategist, and the author of a number of books, including 51 Essential AI Terms for Leaders, as well as something I think we're going to focus on more today, Rewire or Retire: AI for Leaders. Marco, it's a pleasure to have you on the show today.
Marco Ryan:Thanks very much, Mark. Good to be here.
Mark Blackwell:Great. We've got a lot to cover today. I'd love to talk more about the rewire or retire philosophy you've brought in your latest book, and think about what this means for organizations in particular: how they might transform from a siloed organization to being more customer-centric, and explore that it's not just AI, it's not just the technology, it's fundamentally a reset for how we think about business. Let's see what you say, but I'm expecting something along those lines. If I can just kick things off with a provocation: our last podcast was with Niels van Hove, and he talked about how AI is coming into integrated business processes like sales and operations planning and sales and operations execution. And we had a fascinating comment on LinkedIn when we were talking about the podcast, and I think it's worth sharing with you to tee up this conversation. He said: in my view, there is still a big gap in terms of AI maturity in senior leadership. Every week I receive a call from a consultant claiming they have the magic AI capability that will revolutionize our planning. I think it's all about understanding which are the key decisions that will enable value and working backwards to see how AI can enable augmented decision making. So we're going to dive into the detail, no doubt, but what was your immediate impression when you saw that?
Marco Ryan:Well, there's quite a lot to unpack in there, isn't there? I think the first thing is the difficulty for a lot of senior executives: let's be generous, they're sort of nearsighted when it comes to AI. I wouldn't quite say they're blind, but they're nearsighted. They haven't really understood the what, the if, the how, the implications of it. And so they are dependent, a little bit like what happened with digital transformation, on experts or people who are more versed in it, more far-sighted, to lead them, to help them, or whatever. So this is the first challenge: most executives' digital intelligence, what I call DQ, is relatively low. They've got really good, high IQ, otherwise they wouldn't be in a C-suite position, let's be honest. Most of them, particularly the male ones, overestimate their EQ. And as a result they also have pretty low DQ. But I think the challenge is that it's a bit like the Wild West at the moment. There are so many things that AI can do, and when you don't know much about it, you tend to grab on to people who come to you with something where they've created a niche or a product or a solution. Cutting to the chase, the short answer is I don't think you need half of them, because actually AI is something that we can all use, we can all adapt, we can all adopt, and we can all augment. And so part of my passion, my mission, is to lift the scales, if you like: to help leaders understand that, A, it isn't that complicated to understand the basics or use it, and B, you don't have to keep buying solutions from third parties who've come to you with everything packaged up. Almost without exception, they have not created the AI that sits beneath it, right? So it's not true thought leadership, it's just sales and marketing packaging. And I think most people want something that's more tailored or more customized, and bluntly, I think you can do that yourself. So I guess if there's one message out of this, it's don't be afraid and have a go, because I think you'll find it quite liberating what you can achieve, or what your organizations can achieve, at little or no cost by just having a go.
Mark Blackwell:Brilliant, thanks. So you mentioned DQ. Can you just help me, and our listeners, calibrate what you mean? Because here's me: I'm now doing podcasts where many of them cover AI, I'm trying to read bits and pieces, I play around with new tools as they come, and I feel like the more I know, the more there is to know. Am I ever really going to get on top of this? What's the expectation of a leader to have a good DQ?
Marco Ryan:I mean, the first thing is, what is DQ? And then, what's the expectation? If you Google DQ, you'll find about five different people telling you what DQ is, right? But broadly, when Alastair Lechler and I developed it, we wanted a way for non-technical leaders to be able to calibrate where they were in a journey of AI awareness and AI confidence. DQ is a metric, a self-assessment tool, but effectively it's a bit like an IQ test: you are marked out of 200. It isn't a straightforward 'you answer 200 questions and that's it'; it's a series of different things about understanding and about how you use AI, covering both the theoretical and the applied, and it assesses you against your peers. You can search by sector, by geography, by role, so you can quickly see: how do I compare to other CFOs, or chief HR officers? How do I compare to HR officers in the UK versus HR officers in the US? The idea is that you are able to benchmark where you are. Now, what's behind it is the need for leaders in the boardroom, well, not just the boardroom, leaders full stop, to become more curious, digitally curious. Part of the research out of the book and part of my passion is that I don't think AI and digital is a technical thing. I think it's a leadership issue, first and foremost. Yes, it's based on technology and we need the technology to make it work, but really it's about how people want to think and evolve and change and create value in their companies by using a technology that is AI enabled. So a whole part of the digital curiosity piece is that leaders need to become more curious. The 'what if' question: what if we were to...? Not just 'I wonder if we could', but 'what if'. And the 'what if' needs to be quite disruptive. What if I worked in a global organization like BP and there was a startup that did X? What if they were to eat my lunch? When I look back to my BP days, we were setting up and running the UK's largest electric vehicle charging network, BP Pulse, and we had a whole series of things to do around the platform, the physical chargers, the app, and the user experience. And there was a whole thing we were doing around: what if a startup came in and created something better and disaggregated us? That 'what if' question is fundamental to digital curiosity. And most leaders are not digitally curious enough because they are embarrassed, they're frightened, they're worried, they're overwhelmed perhaps by their lack of understanding of AI and in some cases the technology. In most cases, there's a lack of understanding of the consequences. Great leaders don't need to be deep experts. If you think about people being T-shaped, we don't want everyone to have a deep vertical in technology; that's for the technologists, and you need great technologists. But leaders should be that horizontal bar across the top, where you are building out your awareness, whether it's finance or technology or supply chain or HR or customers or whatever it might be. You need that broad skill set.
And for me, part of the new skill set you need is to be really digitally curious. You don't need to be the technical expert, but you do need to be able to ask the right questions. And more importantly, you need to understand the consequences of those questions, or of the answers to those questions. So DQ is a way to shine a light on where you are in that journey, and to try and encourage you to become more curious and to build your knowledge and your confidence in a way that you can lead with authority in a world in which AI is constantly lapping at the shores of your business.
Mark Blackwell:That's great. So curious is a key word for me, because I think it helps us understand that you don't have to know it all, but you do have to ask the right questions, and knowing it all is an impossible task as this technology keeps evolving. Curiosity is good because I think it hints at the way we need to think about what leaders need to do when facing this AI tsunami. If I can just remind ourselves, and our listeners, of some stats from an earlier podcast with Stephen Wunker, who talked about AI and the octopus organisation. To show the gulf between what leadership currently think they're doing and how the organization is experiencing it: there was a survey in which executives think they've got this nailed, because 80% of them say our strategy has AI as our future, yet in the same companies only 15% of the employees believe it to be true. They think it's just fluff. So what is it that we've got to change in our organizations, and in the way we think about running businesses, to prepare ourselves for this tsunami?
Marco Ryan:I mean the tsunami, I think, came up as an aside in our earlier conversation; I don't necessarily refer to it as that, by the way. I think it's more about the fact that it is pervasive, it comes from all directions, and you can't stop it. But I think the challenge is that the change is about leadership, right? About people. The disruption that AI is causing is an enforced disruption; you have no choice in it. In the old days of business, it was very much a push: you built a product and you advertised it, and people either bought it or didn't, but you were pushing; the product was the product. Now we've moved into an evolution where we are trying to adapt to customer needs and be much more agile, so that our products and services change to adapt. And those companies that understand customer needs and build products that meet them are more successful than those that don't. So the challenge, I think, for people is to realize that AI is forcing change on your sector, on your organization, and on you, the individual. Therefore, this is as much about change management, and therefore about people and mindset, as it is about technology. So the first thing is, it is no good at the top of the organization if you don't understand the ramifications, the consequences, of that disruption. You cannot just be a leader that takes a can of digital paint and paints over the analogue cracks of your business. It will not survive the disruption that's coming. The second thing is, of course, you're right that most of the people in the organization who are deeply familiar with this, who've lived it and breathed it, are at the bottom end of the organization. So you've got this idea where you almost need to invert the pyramid. And actually it plays well to people who are servant leaders, because they are curious, they are there to serve, to empower, to let others lead. The talent and the curiosity around AI is native, perhaps, in the larger, lower parts of the organization rather than the traditional thin top of the pyramid. So this is part of what we found in the book: this idea that you need to flip the model, almost, I'm not saying literally, but mentally, where leaders need to go out and have those micro-learning moments, to go and find people, to understand, to explore, to change, to listen to other people in the organization who are perhaps the experts but don't have the experience or the seniority. What AI does not do well is judgment. Not yet, anyway. I'm sure it will with quantum computing and everything else, but it doesn't do judgment well. So the human in the loop, which we can proxy for human judgment, is an integral and critical part of current AI deployment. And that is a human characteristic, and a lot of judgment is based on experience. So there is a role for senior leaders who aren't technically literate or who are overwhelmed by AI. There is a role around: how do I help steward the organization? In the rewire or retire analogy, this is very much the retire side.
Retire is not about getting the gold watch after 40 years' service. It's: how do I actually stand back and deploy my skills to allow others to accelerate through, the ones who are going to lead in this technology age? So change management is a people issue. AI is disrupting the organization, and leadership needs to be much more adaptive and open to almost flipping the hierarchy, learning and finding ways to empower and enable the organization, so that it isn't just a can of paint where you've got a strategy and a box ticked. Because, as you say, people down in the organization, where the work is actually being done, will realize that it's just words.
Mark Blackwell:Exactly. It's fascinating. We had Scott Anthony on to talk about epic disruption, and he is, of course, a student of Clayton Christensen, the great author of The Innovator's Dilemma and The Innovator's Solution. He retold the Bethlehem Steel story, not from the perspective of Nucor, but from the perspective of Bethlehem Steel, and talked about the three ghosts they were haunted by, and how they simply failed to move on from the past, which reminded me of how you think about driving with the rear-view mirror. If you can't force yourself out, you're stuck. So the real leadership moment becomes moving from an exploit-type mindset, which has worked very well, to being more explore, which means you have to be curious.
Marco Ryan:Exactly. And you mentioned there briefly rear-view mirror leadership. It's one of the things we talk about again in the book, this idea that, and this is not meant to be as critical as it will sound, for many, many years in organizations you go into a meeting, and we've all been there, and you have 190 PowerPoint slides, of which about 180 are finance, and most of them are comparisons of the same quarter last year versus this quarter. It's a useful data point, and in many ways, because you could buffet yourself against the winds of change inside the organization, it was a very useful way to steer the organization. But if you just think about how much AI has evolved in the past year, what is possible and how much it has changed things, and then you're going to sit down in a board meeting and tell me that I've got to look at last year's decisions and last year's environment and try to steer or predict what I'm doing now, or even next year, based on that? That idea is like trying to drive a Formula One car round a racetrack by just looking at your rear-view mirrors; it doesn't hold water. I'm not saying it isn't a valuable input. It is, but it should not be the dominant input that so many CFOs have used it as forever. So that is a fundamental change: you need to find a way to steer and drive your business that is far more adaptive and far more able to respond to constant disruptive change, which is outside of your control. And that's the bit I think people forget. They feel that they need to control it, and therefore they'll do this. And organizations that don't adapt will atrophy. You're on a glide path, and the difficulty is that it won't look that dramatic at the moment. And that goes back to this rewire or retire: 'Oh, I won't make the changes now on my watch, because I've got a couple of years to go and I can steer it and we know what to do.' The number of times, when I've sat in public companies on the exco or on boards, I've heard: 'Yes, but Marco, you don't understand the investors,' or 'We've got to steer this for the markets.' I do understand that that's how the business is valued and driven, but those investors are savvy, right? They are looking and understanding that there is huge disruptive change coming. And if an organization is not ready for it, hasn't prepared for it, hasn't got the right skills, the right capabilities, the right technologies, it will die. And what AI is doing, and I'm mixing my metaphors, is acting almost like a steroid on the pace of that change, or that atrophying. So ignore it at your peril. And I mean that as an individual: as a leader, there's a conscious choice you need to make. Am I going to adapt myself and embrace AI and make it part of my leadership skill set? Rewire. Or am I basically going to say, no, I'm too old, I've had enough, it's not for me, which is a perfectly rational choice, right?
But in which case, you have an amazing opportunity to help steer, you know, the organization and provide stewardship around culture, values, experience, insight over that period, which is absolutely fundamental because we need that human judgment. We need that human experience in the loop to make it safe for the others to do the acceleration.
Mark Blackwell:Absolutely. We've reached the conclusion a number of times on this podcast that the need for the old-fashioned skills of human connection and relatedness is only going to become more important, not less, as we adapt to AI. So I think we've answered the first question from our LinkedIn commenter, who kindly shared his thoughts about the importance of leadership and how it connects to his challenge as a senior executive within an organization. Now for something much more transactional: think about his second comment. He's sitting there responsible for choosing which tools to put into the organization, and every day he's getting bombarded with emails and telephone calls from sales reps pushing the latest thing to improve his planning. Ultimately, something's got to change, right? But how does he separate the wheat from the chaff, this overwhelm, as you call it, this digital paint, potentially? What advice would you give him on a very transactional basis?
Marco Ryan:Well, I think the first thing is: what problem are you trying to solve? A lot of people are trying to plug AI solutions into a failed process, or a problem whose root causes they haven't really addressed. And there's nothing magical about this; it's just old-fashioned root cause analysis. What is at the heart of your problem? The worst thing people can do is take AI and apply it to a problem without fixing the root cause, because all you're going to do is exacerbate the problem that was originally there. It'll do it faster, better, quicker, and harder, but your problem doesn't go away. So the first thing, I think, is don't be afraid to have a really hard look at where the right place is, and that's typically where it's too difficult, or where you've failed in the past, or where you're using a lot of exports, a lot of 'I've got to take it out of this system into that system', a lot of joins, if you like, which means your process is not efficient because your technology is not able to do it seamlessly. That's a good place to look. So really identify the issues you're trying to fix, and don't always be tempted by the so-called quick wins. Some of them should be gnarly problems, because they might take a bit longer, but if you fix them, they will fundamentally change the outcome for your business. So, one: don't be afraid to identify the problems. Second: get a blend of problems, a couple of quick ones that you can prove, but a couple of gnarly ones too that really help you build confidence. Third: you don't necessarily need to buy this stuff in. A lot of this you can do yourself. The irony isn't lost on me that you can ask AI how to use AI to solve an AI-created problem, right? I'm sometimes a little bit rude about this to people who say, 'I don't know where to start.' I say, well, can you speak? 'Yeah, of course I can speak.' Can you type? 'Of course I can type.' Well, then that's it. There are no barriers to using it. I sit in my car, and when I'm driving on a long journey, I use ChatGPT or Claude or whichever large language model you want to use, and I literally talk to it. I press the microphone button on the app, I dictate in, and I ramble. I'll say: I'm driving, I've got an hour and a half in the car, I'm going to go on a podcast with Mark from Arkaro, I've got a number of things I really want to talk about, I want to make sure I mention this, he's mentioned that; give me five or six things that I can really think through. It'll come back and read it out to me, so it's voice to voice, and then I'll have that conversation. And by the time I reach wherever it is I'm going, Birmingham, wherever, that whole transcript is available for me to use. Now, all I've been able to do is talk. Press a button and talk. I don't need to know anything about AI.
So using it in that way as a strategy augmentation and brainstorming tool, you just literally need to be able to talk. Conversely, you need to be able to type, but people go, 'Well, I don't know how to write a prompt.' Guess what? You type in: Dear ChatGPT (or whichever LLM you use), I don't know how to write a prompt, but what I want to do is this; help me structure a prompt in a way that will do this. Out it comes and tells you how to write the prompt. So getting over that barrier is one of the first things I say to people: just have a play. Go do this, talk to it, speak to it, test it. Do something that's of no risk to you, that you're passionate about. You might be the chief executive whose weekend passion is making sourdough bread, or you might be somebody who likes to make fine furniture. It doesn't matter what the subject is. You go in and you say: this is my subject, give me some ideas, or books to read, or new podcasts to listen to. It will do that. Low risk, right? So that's step three: have a go, have a play. The fourth thing is you need talent in your business; you need the people who are already there. I call them the AI whisperers. There are whisperers in your business, people who are excited about AI, who are probably really advanced with AI at home, but you almost force them to have a sort of digital or AI lobotomy when they come to work, because security policy says you're not allowed to use anything other than Copilot. So find that AI whisperer. They're probably in the middle management layer, they're probably a bit frustrated, they've probably been there 10 or 15 years, and they're the people who really understand your business and are really good at playing with and experimenting with AI. Give them some freedom, or make them the people who deliver that quick win you've identified. And then lastly, talk to industry about what's available out there, because there might be something that is specific or niche, or hard to do on your own, or time-critical, where somebody has done the work, has optimized it, and it does make sense to buy it in. But I would say that's the exception, not the rule. With SAP and Salesforce and all these big enterprise platforms now AI-enabled, you've probably got enough AI technology in your business already, frankly, and you've probably already got Microsoft and therefore Copilot. Just use it better. And if you don't know how to use it, talk to ChatGPT and say, how do I use it better? It'll make you an instant expert.
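For listeners who want to try the "ask the AI to write your prompt" step Marco describes, here is a minimal sketch in Python using the OpenAI SDK. The model name, the task text, and the two-step structure are illustrative assumptions, not anything prescribed in the episode.

```python
# A minimal sketch of the two-step "ask the AI to draft your prompt" idea.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

task = (
    "I don't know how to write a prompt. What I want to do is prepare for a "
    "podcast interview on AI leadership: five talking points, each with one "
    "supporting example and one tough counter-question. "
    "Help me structure a prompt that will do this well."
)

# Step 1: ask the model to draft a well-structured prompt for the task.
drafted = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whichever model your organisation allows
    messages=[{"role": "user", "content": task}],
)
better_prompt = drafted.choices[0].message.content  # review/edit this before reusing it

# Step 2: run the prompt the model drafted for you.
result = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": better_prompt}],
)
print(result.choices[0].message.content)
```

The same two-step pattern works just as well typed straight into a chat window; the code simply makes the "meta-prompt" idea explicit.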
Mark Blackwell:On that point, Marco, can I ask you a really basic practical question that comes up over and over again? This is in the context of the MIT paper from January 2025, which said that in up to 90% of organizations people are using Claude, ChatGPT, and so on, because that's what they're used to at home. It's comfortable and a low barrier to entry for most people, but there's resistance to the bought-in, forced-down, top-down solutions, because people aren't comfortable with them. And I've been on calls where we're told, oh, we're not allowed to record the transcripts of this meeting because it's against policy, because it might be taking away company secrets. People have always had pen and paper, and you think, what's the difference between the two? So, simply: if your company is allowed to use Copilot, what simple things can it do to enable employees to use their Claude or their ChatGPT without leaking secrets into the world?
Marco Ryan:Yeah, I mean, there are settings, as an individual and at an enterprise level, where effectively you can turn off the ability to share your output back into the LLM. So it's like read-only with no write privileges. That would be the very obvious first thing. The second thing is you can set up barriers and, if you wish, almost run it locally as well, so it only trains on your own data. So there are a number of very practical, simple things that, bluntly, any technologist will be able to fix in a matter of minutes. But if you don't know how to do that, guess what you do? You type in: I want to run Claude or ChatGPT at work, my company is concerned about this; what are the counterarguments I can take in, and how can I turn things off in the settings? And it will tell you. As I say, people have this aha moment and think, yes, it really is that simple. You just ask it and it will tell you. And then there's this concern of, oh, well, it hallucinates; not everything it tells me is true. That is true: not everything it tells you is true, and it will hallucinate. But this is where human judgment comes in. In most cases, used sensibly, it will give you a very accurate answer, and it's getting better and better as more people use it. The principle behind this is partly the wisdom of crowds, an idea that came out probably in the 90s or early 2000s: if you ask one or two people where something is, you get a handful of answers; if you ask 10,000 people, you will get enough information to be statistically accurate about where it was. I think the original example was predicting where a submarine had been lost at sea. It's worth looking it up, actually. The point, of course, is that what you've got with these large language models is effectively wisdom of crowds on steroids. The second part is that we have largely been trained on the sort of Pareto analysis of the 80-20 rule: 80% of the value or the facts, and 20% of whatever you're applying it to. When I was doing research for the book, I was chatting to Lieutenant General Sir Tom Copinger-Symes, who was at the time effectively the chief digital officer for UK defence, and who then went on to be the deputy commander of, I don't know, Cyber Command or something. Basically, Mr Digital in the army, and someone you should chat to; he's definitely fascinating to talk to. He was talking about how mission command and things have changed in the army. Obviously, when you're deploying and doing things at war these days, you've got a lot of training and a lot of technology. But the old thing was that you had to understand the commander's intent, and then you would wait until you had about the 80 of the 80-20 and you would go. What's interesting, he was saying, is that in an AI-fuelled world it's almost a kind of reverse 20-80; it's actually probably about 30-70.
You can have only about 30% of the facts, and the pattern matching and the algorithms and the AI can fill in so much that you probably get to a higher level of fidelity and a higher level of understanding with a lower level of original fact than you did in the old days of having to wait for 80% before you made a decision. So what we're seeing is that AI is changing the dynamics around not just how much fact you need, but the breadth of facts you need, to get to a really reliable outcome. And that, ultimately, is what AI is really good at: pattern matching and predicting. All of that brings you into a world where you can fundamentally change how you think, how you deploy, what you use, and how you start. It's a mindset shift, back to our very first point. Provided that you are curious, and ask 'what if' and 'I wonder if', you might be surprised how quickly you embrace it and how quickly you start to become creative with new ideas.
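If the "run it locally so it only sees your own data" option Marco mentions is the route you want to explore, one common approach is a local model server such as Ollama. The sketch below assumes Ollama is installed, its service is running on the default local port, and a model has already been pulled; the model name and prompt are purely illustrative.

```python
# A minimal sketch of keeping data on your own machine by calling a locally
# hosted model through Ollama's default HTTP API (assumes `ollama pull llama3`
# has been run and the Ollama service is listening on localhost:11434).
import requests

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",  # illustrative; use whichever local model you have pulled
        "messages": [
            {
                "role": "user",
                "content": "Summarise these planning notes in five bullet points: ...",
            }
        ],
        "stream": False,  # return one complete JSON response rather than a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["message"]["content"])
```

Nothing in this call leaves the machine, which is the point Marco is making about enterprise concerns; enterprise opt-out settings on the hosted tools are the lighter-weight alternative.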
Mark Blackwell:Brilliant, so you are now segueing naturally into the final point of our LinkedIn commenter, who was very perceptive, I thought, and really homed in on how AI can enable augmented decision making rather than replacing decision making. I wonder if we can explore that a bit and get your feedback on how you might see the world working. We had a professor of possibility studies, Vlad Glăveanu, who gave a fascinating presentation, and I wanted him to talk more about what he called slow AI. In his view, we instinctively go for the dopamine fix of a quick solution: just give me the answer. He's working with a guy called Ronald Beghetto to think about giving more emphasis to AI generating questions that can inspire the curious mind, rather than giving the dopamine fix of a quick answer. I wonder what your immediate reflections are on that.
Marco Ryan:I love this idea of slow AI. If you think back to what I was saying about sitting in the car and using AI as a strategy buddy, it's asking me questions, right? Because I tell it to. I sit down and say: ask me questions to provoke the discussion, so that it becomes a conversation rather than, as you say, the dopamine fix of 'give me an answer'. For people who are unfamiliar with AI, the quality of your prompt makes an enormous difference to the quality of your answer. A prompt that just says 'rewrite this paragraph for me to make it better' is quite different from 'acting as an editor for The Spectator, writing an article that is 2,000 words long, on a deadline, in the style of Truman Capote, or embracing some of the findings of Jim Collins's Good to Great in the article, I want you to...'. You will get a very different outcome, obviously. So, to answer your question, there is this idea of lazy AI versus augmented AI and so on. Lazy AI is where you ask AI to do all the work, because, A, you need really good, clean data for it to be trained on, and B, you need to have trained it to give you the answer. What large language models have done is make us lazy: you're pretending that somebody else has done all the training and done it really well, that all the data in there is accurate and good, and you're being lazy with your prompt, and as a result AI is being lazy in what it returns to you. It returns something better than you could probably get to yourself, but it's still not perfect. When we go back to this idea of the human in the loop, where you're using it where human experience or human judgment sits, and you're starting to use augmented AI, that's really interesting, because then you're asking: what is AI better at than humans, and what are humans better at than AI? And if I merge those two together and put riding instructions around it, the instructions to the jockey, or I put barriers around how things can be used, then it's a walled garden; that's very different. So I don't think all jobs will get replaced by AI. I think AI is far scarier than people think in some ways, because when quantum computing comes along and the power of AI grows, with its ability to ingest, and if we're lazy using it, it will become, or seem to be, as good as if not better than humans, and therefore in theory could replace them. But I don't think that's here right now; we're years away from that. I'm not saying it won't happen in our lifetime, but we're years away. So there are roles it will replace, and frankly should replace, because those roles were vastly inefficient or a misuse of human intelligence or human capability. And there are some roles it cannot, some jobs it will not, replace. I can't imagine at the moment a really good AI electrician being able to come into my 18th-century cottage and fix things. I can imagine a skilled electrician coming in with a whole series of robotic probes or AI diagnostic tools that help him do the job faster, better, quicker, more efficiently. Again, augmented has its role, right?
There are things like AI kitchens, where the recipes are created on demand and all of the food prep and all of the cooking are done by, effectively, AI-enabled robots. So you can't say it won't ever replace manual labour, but I think we're at the exploration end rather than the pragmatic-at-scale end. So will it replace every job? Probably not, or not at the moment. Is augmented better? Definitely. To your third point around slow AI and asking questions, that's the intelligent use of AI. If you think of AI as this incredible resource, it's the internet on steroids; it's an intelligent internet that can interact and have human-like qualities, and therefore we're more comfortable talking to it, it feels more natural. Then I think that's good, because it will promote that creativity, that curiosity, which is a uniquely human characteristic, and which at the moment exceeds what pseudo-humans, robotic brains if you like, can do. You can create amazing digital art and digital music with AI. It's not that it can't be creative, but it requires the prompt. It requires you to tell it what to do, and then it probably gets it slightly wrong, so it requires, at the moment, that degree of oversight and interaction. So I love smart use of AI, slow AI, using it intelligently to augment how you work as a leader: use it as a strategy buddy, use it as a diagnostic. One example I'll give you, and I won't name the company or the individual, but I was pulled into a large multinational, let's say FTSE 250 so people can't trace it, and was asked to talk to the board about AI in their board meeting. I noticed that three of them had iPads and four of them had the usual Amazon forest of printed PowerPoint. I said to the chairman: so how many pages, and how long did it take you to read it? And he said, well, I got it ten days ago; if I'm honest, I haven't read every single page, but I've made a series of notes. And I said, great, and which sessions are you most comfortable with, and which are you least comfortable with? He said, well, I'm least comfortable with this session around digital, what's going to happen and transform. And we've also got an update and a brief on some technology, which we'll listen to and try to ask some intelligent questions about, but I'm not sure any of us, maybe one of our board members, really gets it; the rest of us are experienced, but we're not there. Okay, fine. So I said: right, I've got hold of your board pack, I've put it in a clean area and I've ring-fenced it, so it's not going outside this virtual room. I'm now going to write you a prompt, and I want to show you the answers and how quickly it does it. And of course I wrote a prompt: acting as a board chairman, I want you to summarize the key things from each of the papers; I want four or five deep insights; I want four or five tough questions, and what should I expect the answers to those tough questions to be, what might the options be, and what might the consequences be; and from that, I want some follow-up insights or thoughts that I can then give to the executives.
To me, that's how a board non-exec should work, right? Read the pack, find some questions, understand the content. I put it in, and of course, within seconds, out it all came. And he said: well, that's impossible, it can't have read all of that in that time. And I said: it has. He looked at it and said: but this is far better than what I've produced. And I said: yes, and it's not that I'm doubting your experience, but the process you were going through didn't apply your knowledge or insight. You were saddled with the boring work of effectively creating summaries and insights, which in most cases you'd ask one of your assistants to do, right? And there was this sort of Damascus Road conversion moment. Fantastic. But the point is that once he realized how you can use it to augment what you do, it didn't change the importance of the board meeting. It didn't change the human dynamics. What it made was a very different board meeting, really getting to the nitty-gritty of the facts, rather than the feeling that, and I've been there, you go and present to the board, they sit there with blank faces, you're not quite sure how it's been received, you don't really know whether you've hit the mark, you've spent months preparing for it; it's just inefficient, right?
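For anyone tempted to repeat Marco's board-pack experiment in their own ring-fenced environment, here is a hedged sketch of the idea. The file name, model name, and prompt wording are illustrative reconstructions based on his description, not the exact prompt he used, and the pack should only ever be loaded into a tool your organisation has approved for confidential material.

```python
# A sketch of the board-pack exercise Marco describes: feed the (ring-fenced)
# papers to a model and ask for summaries, insights, tough questions, expected
# answers, options and consequences. Paths, model and wording are illustrative.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes an approved, access-controlled account and API key

# Board papers pre-extracted to plain text and stored inside the "clean area".
board_pack = Path("board_pack.txt").read_text(encoding="utf-8")

prompt = (
    "Acting as a board chairman, review the board papers below.\n"
    "1. Summarise the key points from each paper.\n"
    "2. Give four or five deep insights.\n"
    "3. Give four or five tough questions I should ask, the answers I should "
    "expect, the options available, and their consequences.\n"
    "4. Suggest follow-up insights or thoughts I can give to the executives.\n\n"
    f"BOARD PAPERS:\n{board_pack}"
)

reply = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model your organisation permits
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```

The output is a starting point for the chairman's own judgment, not a replacement for it, which is exactly the human-in-the-loop point made above.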
Mark Blackwell:Thank you. That's been a fabulous session today, Marco. Thank you very much. I think the question you've brought us, rewire or retire, is one we have to address. But at the same time, I think you've convinced me that the rewiring isn't as hard as many people might think. So let's welcome it, and listen through the podcast again; I'm convinced by your message to do that. And there is a place for wisdom. Wisdom and judgment do exist in the world. So we don't need to be frightened. We just need to learn how to dance with AI, by understanding what it is good at and, I think more importantly, understanding what we're good at, and being comfortable with that.
Marco Ryan:And the wisdom and judgment you just mentioned, right? Doesn't always sit at the top of the table.
Mark Blackwell:Totally. The inverted pyramid was a great takeaway; thank you for that, and really worth reinforcing. So where can people find your book? Where can they learn more about you?
Marco Ryan:So the book is best found on Amazon. You can order it through bookstores, but the easiest, quickest way to get it is through Amazon. And if they want to find out a bit more about me, just go to marcoryan.com. You'll see there's a whole load of free PDFs and papers and things to download, and you can see what I look like as a speaker if you want me to come and speak at an event, whatever it might be. It's all there at MarcoRyan.com. Thank you.
Mark Blackwell:Thank you, Marco. And again, we'll try to get as much as we can into the show notes. Even better, if you've been inspired by any of the discussion, please come and find us on LinkedIn, share your thoughts, and ask the provocative questions for our next guest. Thank you very much, Marco. If you found yourself nodding along to Marco's rewire or retire philosophy, you'll definitely want to explore some earlier conversations that built the foundation for today's session. Marco's point about digital curiosity and the low DQ at the top is the perfect companion to my conversation with Ray Eitel-Porter on AI governance. If you're worried about where to start, remember Marco's advice: don't ban it, don't fear it, just find the AI whisperers in middle management and empower them to lead your first implementations. If you were struck by Marco's metaphor of driving a Formula One car while looking in the rear-view mirror, go and have a look at the podcast with Scott Anthony, where we talked about epic disruption. He digs deep into the three ghosts that keep leaders frozen in old habits, even when they know there is a tsunami coming, like the tsunami described by Stephen Wunker in AI and the Octopus Organisation. For those in supply chain who are intrigued by the level of detail we need, how we can trust the systems, the view on explainability, and perhaps the 30-70 flip we spoke about today, make sure you listen to the episode with Niels van Hove, a fascinating guide to augmented decision making. And finally, a quick test I might suggest: why don't you try Marco's Damascus Road moment for yourself? Next week you've got a meeting. If you've got meeting notes in advance, or maybe a PowerPoint deck, put that into a secure large language model like Claude or ChatGPT, and again check the settings to make sure it is secure, and just ask for the five toughest questions you should be asking at that meeting. All of these episodes are available on our YouTube channel, and we do ask you to subscribe; we've got some great guests booked for future episodes. Please look at the show notes, where we'll connect you with Marco's website, marcoryan.com, and where you can find more information on our own website, arkaro.com. And please do share your comments on LinkedIn when we post this video; I'm really looking forward to seeing what you thought about today. Thank you for listening. In a world of AI, your human judgment is not just a backup; it is the steward of the entire system. So stay adaptive. Thank you.