Arkaro Insights
Arkaro Insights provides B2B executives with tools and techniques to adapt, thrive, and navigate successfully in a VUCA world.
About Arkaro
Arkaro is a B2B consultancy specialising in Strategy, Innovation Process, Product Management, Commercial Excellence & Business Development, and Integrated Business Management. With industry expertise across Agriculture, Food, and Chemicals, Arkaro's team combines practical business experience with formal consultancy training to deliver impactful solutions.
You may have the ability to lead these transformations with your team, but time constraints can often be a challenge. Arkaro takes a collaborative 'do it with you' approach, working closely with clients to leave behind sustainable, value-generating solutions—not just a slide deck.
"We don't just coach - we get on the pitch with you"
Connect With Us
💬 We'd love to hear from you! What topics would you like us to explore in future podcast episodes? Drop us a message or connect with us to learn more about Arkaro's approach.
🔗 Visit us at www.arkaro.com
👥 Follow our updates: Arkaro on LinkedIn - https://www.linkedin.com/company/arkaro/
📧 Email us at: mark@arkaro.com
How to Use AI Without It Going Wrong | Ray Eitel-Porter, Author of Governing the Machine
How can your organisation use AI without it going wrong? With 95% of organisations failing to see a return on their AI investments, this question has never been more pressing for business leaders.
In this episode of Arkaro Insights, I'm joined by Ray Eitel-Porter, co-author of "Governing the Machine: How to Navigate the Risks of AI and Unlock Its True Potential". Ray has spent over eight years helping companies implement AI responsibly. He previously led Accenture's global responsible AI practice and currently advises multinational companies and the public sector. He is also a Senior Research Associate at the Intellectual Forum, Jesus College, Cambridge.
There's a common misconception that AI governance blocks innovation. Ray challenges this view head-on. For him, AI governance is precisely what allows organisations to innovate confidently – the framework that helps you scale AI whilst knowing the right questions have been asked and the right safeguards are in place.
We explore the striking gap between executive ambition and workforce reality: 80% of executives believe AI is core to their strategy, yet only 15% of employees share that belief. Ray shares practical examples of how organisations have closed this gap, including a UK public sector body that transformed workforce trust in AI from 25% to over 90% through effective training.
Ray brings the discussion to life with case studies from PepsiCo, Nestlé, and Shell, showing how AI governance can reinforce brand values and enable responsible scaling across global operations.
We also tackle the shadow AI challenge – up to 90% of employees using personal ChatGPT, Claude, or Gemini accounts for work – and why technical controls alone cannot solve this problem.
Looking ahead, Ray explains why AI agents represent the next frontier of governance risk, and why automation bias – our tendency to over-trust accurate AI – may be the most counterintuitive danger of all.
A key message: AI governance isn't just for large corporates. The principles scale down to SMEs. Where a multinational needs sophisticated platforms, a smaller business might achieve the same ends with clear ownership and an Excel spreadsheet.
About the guest
Ray Eitel-Porter is co-author of "Governing the Machine: How to Navigate the Risks of AI and Unlock Its True Potential", available from Amazon. Connect with Ray on LinkedIn. https://www.linkedin.com/in/rayeitelporter/
About Arkaro Insights
Arkaro Insights is the podcast for B2B executives seeking tools and techniques to thrive in a complex world. We cover change management, innovation, and commercial excellence – with particular expertise in the agriculture, food, and chemicals industries.
Visit www.arkaro.com or connect with us on LinkedIn.
Connect with Arkaro:
🔗 Follow us on LinkedIn:
Arkaro Company Page: https://www.linkedin.com/company/arkaro
Mark Blackwell: https://www.linkedin.com/in/markrblackwell/
Newsletter - Arkaro Insights: https://www.linkedin.com/newsletters/arkaro-insights-6924308904973631488/
🌐 Visit our website: www.arkaro.com
📺 Subscribe to our YouTube channel: www.youtube.com/@arkaro
Audio Podcast: https://arkaroinsights.buzzsprout.com
📧 For business enquiries: mark@arkaro.com
The idea that AI governance is somehow against innovation is completely wrong. For me, AI governance is actually the thing that allows you to innovate. And that's one of the reasons, frankly, that I set out to write the book was because I was seeing an increasing number of organizations wanting to use AI, but being nervous, frankly, because they read in the news and people heard about stories of AI going wrong in one or other way. And the idea behind AI governance is quite simply to make sure that AI delivers on the return on investment that you propose and that you hope for, and that it does so without going wrong along the way. And so it should be the thing that makes you comfortable scaling your use of AI and allows you to sleep at night because you know that the right questions have been asked, the right tests have been done, the right people are in charge, and that therefore you know that everything possible will have been done to make sure that the AI delivers as you intend.
Mark Blackwell:This is Mark Blackwell. Welcome to the Arkaro Insights Podcast. This is the show where we help business executives with tools and techniques to thrive in a complex world. Now, if you've been following this podcast for a while, you'll know that every now and then we touch on one of the more interesting issues facing us: the big disruptive challenge of AI. I put into one of my AI tools the other day, okay, I've done podcasts on the following subjects, what should I do next? And, in a great-minds-think-alike moment, it came up with governance. And today we have Ray Eitel-Porter as our guest on the show. Ray is a recognized authority and consultant on AI governance and responsible AI, currently advising multinational companies and the public sector. He is co-author of the influential book Governing the Machine: How to Navigate the Risks of AI and Unlock Its True Potential. Ray has worked in the AI governance space for over eight years, focusing on how companies can use AI without it going wrong. He previously led Accenture's Global Responsible AI practice and established the firm's internal AI governance program. Ray is also a Senior Research Associate at the Intellectual Forum, Jesus College, Cambridge. Ray, welcome to the show. How are you doing?
SPEAKER_01:Thank you very much, Mark. I'm delighted to be here to talk about my favorite topic.
Mark Blackwell:So maybe just to kick things off, how would you define AI governance? What is it that people who pick up your book will get that they might not otherwise have got?
SPEAKER_01:Well, let me start by giving you the way that we technically define AI governance, and then I'll elaborate a tiny bit, because in the book we do actually try to think through how we would differentiate between some of the different terms that you'll hear mentioned quite a bit. So people talk about trustworthy AI, they talk about responsible AI, they talk about AI governance. And in many ways those terms are used interchangeably, but we did try to differentiate a little bit between them. For us, trustworthiness is the objective. That's what we're trying to achieve here: that people trust AI for good reason. And the way that you achieve that is by implementing responsible AI practices. So responsible AI describes the range of practices that you need. And governance is, if you like, the people, process, and technology that delivers on the responsible use of AI. So for me, AI governance is very much about how you implement the people, process, and technology things that you need to do to make sure that your AI is going to behave as you intend, and in a responsible way.
Mark Blackwell:Got it. So when a lot of people think about governance, they also think regulation. They think eliminating downside risk: I know I've got to do it, but it's a bit tedious and boring, and really it's blocking innovation. There is a school of thought in the world that thinks like that. But the reality, as we've touched on in this podcast before, is that there are lots of papers showing that implementation of AI isn't going as well as one might have hoped. We've got the MIT paper showing 95% of organizations aren't seeing a return on investment, and other similar data points. So is there any way that we could reframe governance as not only mitigating downside risk, but also enabling the upside?
SPEAKER_01:So, yes, I think that the idea that AI governance is somehow against innovation is completely wrong. For me, AI governance is actually the thing that allows you to innovate. And that's one of the reasons, frankly, that I set out to write the book: I was seeing an increasing number of organizations wanting to use AI, but being nervous, frankly, because they'd read in the news and heard stories of AI going wrong in one way or another. And the idea behind AI governance is quite simply to make sure that AI delivers on the return on investment that you propose and that you hope for, and that it does so without going wrong along the way. And so it should be the thing that makes you comfortable scaling your use of AI and allows you to sleep at night, because you know that the right questions have been asked, the right tests have been done, the right people are in charge, and that therefore everything possible will have been done to make sure that the AI delivers as you intend.
Mark Blackwell:So maybe in this podcast we could try to think about both issues: minimizing the downside risk and enabling the upside. In other words, deliver a return on your investment and make money for the business. So, the first question is: who, in that context, should be in charge of the governance process?
SPEAKER_01:So, in one way, I don't think it matters too much who it is in terms of where they sit in the organization. Traditionally, if you go back a few years, it was almost always the CDO or someone from the technical side, the chief AI officer or chief data officer, whatever the organization called that role, who was generally the person leading it. Then over a period of time, we started to see general counsel or privacy and legal leadership getting involved and sometimes leading. And occasionally we see information security taking the lead, although not as often. Ideally, you would have a business function leading, as in someone with P&L responsibility who's actually tasked with delivering business value. I find it harder to get those people to step up and want to lead the program. But the good thing about one of those leaders being in charge is that they obviously have a positive, value-generating target ahead of them, and so they will almost automatically see governance as something that's going to help them. But I think the overarching criterion that matters is that whoever takes up the charge and leads has to be someone who's really committed. They have to be someone who has budget, and they have to be somebody who has enough political clout that they can actually get different parts of the business to collaborate. Because you can't do it just within a technical silo, or just within a legal silo, or just in information security. You've got to get all these different parts of the business working together. And, as with any kind of major change program, that requires someone at the top who can get people into a room and get them all working together.
Mark Blackwell:Totally. And one of the big themes that we have here is that change must involve the people who are impacted by the change.
SPEAKER_01:Absolutely.
Mark Blackwell:Change cannot be imposed on an organization. It must be done with the organization, engaging people in the process. So, in that context of engaging people in the process, let me bring up some of the statistics that we've come across in our work. Surveys show that 80% of executives, teams, or businesses believe that AI is core to their strategy and is what's going to make them successful, possibly pressurized by boards to think that. Yet if you survey the workforce, only about 15% of people say, yes, I believe it. There's a big disparity. Can you comment on that in the context of what you're trying to achieve with governance?
SPEAKER_01:Yeah, there is. And actually, let me give you perhaps a couple of stories, and I suspect this is going to be a theme we'll come back to during the course of our discussion. I was actually at a conference last week in New York, speaking at the AI Summit, and in one of the other sessions the chief information officer for Paramount was talking about their AI journey. And he said, you know, we thought we did everything right. We were giving Copilot licenses to everyone across the organization, we organized training, we'd put in place policies and guardrails and so on, and we thought we'd really done this thoughtfully. But we didn't see the uptake that we were expecting by any means, and we weren't seeing value coming back from it. And he said it wasn't until we actually sat down with the different business teams and really helped them to understand how they could use it in their context that people started to be comfortable using it. And then they identified, with us, where it could help them and how they could do it and so on. And I think that illustrates the fact that it's not so easy for many people to understand, or to take the time to experiment and think about, how they're going to use AI in their work processes. Plus, on the part of some people, there's a bit of reluctance because they think, well, if I get too good with AI, maybe they won't need me and maybe the AI will take over. So I think, for all of those reasons, we do actually have to work harder. And it's not just a case of, oh, here's a 30-minute training video you should watch, and then you're going to become a Copilot whiz and use it to do all sorts of wonderful things. There's a lot more to it than that.
Mark Blackwell:Totally aligned with that, right? That's a very good example, and it supports another podcast that we had with Rick Challon, who was talking about user needs mapping. He says, for any process, including an IT process, start by identifying the jobs to be done, start by identifying your customers, and have total clarity on where there is room for improvement before you think about anything else.
SPEAKER_01:Exactly.
Mark Blackwell:Yeah, your work echoes that. And another thing we've talked about, more from a theoretical place, though maybe you could find some case studies, builds on the people issue. One of our more popular podcasts was with Hilary Scarlett, who talked about the SPACES model. So, what's this? Basically, it's saying that even in the AI age, we are still two-million-year-old people wandering the savannah looking out for lions, right? And so we will respond negatively more than we'll respond positively if we see a threat. And most things are threats, and it's to do with things like: what is my status, what's my social position in the world, what's the purpose of this change, what's the autonomy I've been given to work on it, how certain am I of what's going on in a world full of change, is this equitable and fair, and, finally, social connection. I don't know if you've got any other case studies which really show how good governance can start tackling some of these fundamental people issues and responses.
SPEAKER_01:I mean, I think it does come back to the training topic that we have talked about, and I think that leadership is critical in that. There is, I think, an interest on the part of people to have the training. So I'll give you another example: I sit on the AI governance and ethics committee for the Local Government and Social Care Ombudsman. A bit of a mouthful, that, but this is essentially a medium-sized organization. It's an arm's-length body within the UK, and it's responsible for handling all of the complaints and concerns that might be raised about anything that happens in local government or in social care within the community, basically at the local government level. And as you can imagine, this is a very sensitive area, right? So you would want to be extremely cautious about any use of AI. But as you can probably also imagine, with UK government finances in the situation they are, there are budget pressures, and the number of complaints coming in is not going down, quite the opposite. So the organization is in a difficult position. So a group of people got together, led by the head of IT and somebody from the business side, if you like, an expert on the adjudication side. And these are not PhD data science people or what have you, but they could tell that AI would probably be able to help in some way. And they formed a small committee to look at this. They did get a little bit of external help, which was very valuable, but not massive amounts. And they established a set of principles. And the number one principle is that humans make all decisions. I think that's really important: all decisions, because these are obviously critical decisions for people. So humans make all decisions. But what they're looking at is how AI could help in people's workflow. And when you understand what their typical employees do, they are highly skilled people who go through dozens or hundreds of documents as part of a complaint to understand the history and what's gone on. These are typically stored in a number of different databases. They're sent in by a complainant, and then by the people on the other side of the case, and so on. They're not labeled very well, and they're not ordered in any way. So the first thing the poor investigator has to do is to open all these PDF files or Word documents or whatever, try to sort them into some kind of chronological order, and figure out which ones came from the complainant, which ones came from a doctor, and just get the whole thing straight. That's a perfect use case for AI. It can do that really well and very quickly, and then the expert can focus on actually reading the material and making a decision. So you're not taking away the human agency at all. You're just saving the expert a lot of annoying time spent sorting documents. And what I thought was particularly interesting, to your point earlier about involving people and taking them on the journey: they actually developed a four-hour, half-day in-person workshop on AI, and they made that mandatory for every single person in the organization. And when they did that, they were nervous that a lot of people would push back and say, look, I really don't have time for this, or it's not relevant, whatever. Quite the opposite.
They were overwhelmed with demand for the first course and the second course and so on. And they surveyed people before the training and after the training. Before the training, people's level of confidence and trust in AI was around 20 to 30%. After the training, it was over 90%. Wow. I think that just really shows that if you take the time, and I'll be honest, even in larger private organizations, corporates, etc., I don't see many, if any, taking four hours of in-person training with everyone across the firm, but you can see the value of it reflected in those scores.
Mark Blackwell:That's amazing. Your point about autonomy is really relevant to this discussion, because there was one interesting statistic in the MIT report which talked about the shadow AI economy. And forgive me if I get the numbers wrong, but I hope I've got the gist right: in the companies that they surveyed, up to 90% of employees had their own ChatGPT, Claude, or Gemini account, which they were using for business. But actually the usage of the corporate-mandated tools (and I use that word mandated carefully) was very low, something like 40% at most, if I'm right. What should you do if you find yourself in that position? Because it's understandable how it happened.
SPEAKER_01:Yes, it is. And I think one of the lessons from that is that of course you should put technical guardrails in place, but at the end of the day, with a technology like AI, and particularly in terms of chatbots and what have you, it is everywhere, and it's going to be even more everywhere than it is at the moment. So you're not going to be able to control it with pure technical guardrails. And again, therefore, it comes back to training. The only answer is going to be getting individuals to understand and recognize how they have to behave and why they have to behave in that way. And I think it's not dissimilar in some ways to information security and things like phishing attacks, let's say. There are clearly firewalls we put in place, and there are all kinds of checks that we have in our computer systems, but still, phishing emails will get through, and they're written very well these days. And the only chance you've got of stopping people clicking on the wrong link is by making them really aware of that and doing regular training. And, you know, I see in a number of companies that they send you false, made-up phishing attacks to keep you on your toes. And if you click on the link, the big red thing comes up and says, oh, caught you out on that one, and if you do that again, you'll have to do an extra training course or something like that. I think that really thinking of ways to engage with people and make them take personal ownership of what they should and shouldn't do with AI is good both from the point of view of helping them to understand the positives, because at the end of the day we want people to use the technology in the right way so we get the value from it, and from the point of view of making people understand and be aware of where they need to be careful.
Mark Blackwell:Again, you're right, it's very much like phishing, very much like safety in the workplace: it's about keeping the engagement with it very positive. Many of our listeners come from an agriculture, food, or chemicals background, so can I just touch on some of the case studies I read in the book? One that I liked, and it comes back to this avoidance of fear and the social esteem point, was PepsiCo and their policy in factories. Can you talk to me about that?
SPEAKER_01:Yeah, so, again, they have really engaged with the workforce to try and make them part of the solution. They know that they need and want to use AI, but they have a very widespread training program to really help people at all levels across the organization make the most of these tools. They want people to be improving their career opportunities. And, funnily enough, they also see it as a sort of outreach into the community, because if people are taught this within the workplace, they will take it out into their personal lives and potentially into social things that they do, charities they might work for, etc. So they see it as a way of spreading the right behaviours and the right skills with AI. And again, that's very much welcomed by the workforce.
Mark Blackwell:And another interesting food one that I picked up was Nestlé and their policy on promoting food, which jumped out at me as a non-obvious but quite interesting, quirky one.
SPEAKER_01:Absolutely, yes. I mean, they've gone on record as saying that they will not use AI to create pictures of food, because food is their essential ingredient, if you like, and they want their customers to be completely confident that any picture they're shown is authentic. It's real food. And I think that's really interesting. Obviously, as techniques get better and better, it will be really quite hard for a customer to differentiate between an AI-generated image of a grain of wheat or whatever and a real photo. So I think standing for something and making it a public commitment is quite important and will help brands to differentiate in certain ways.
Mark Blackwell:And again, that's an example of bringing value into the process, not just eliminating risks. Another big part of the SPACES framework that comes to mind is equity: making sure, and I think you alluded to this previously, that everyone gets trained fairly, so that nobody feels left behind by a gap just because they're not doing it. That's part of the process too. So, are there any other case studies that come to mind that you might want to share with the audience whilst we're here?
SPEAKER_01:On the sort of training and workforce enablement side?
Mark Blackwell:I mean more the people side of things.
SPEAKER_01:On the people side, those are the ones that particularly jump out. I guess the one other thing I might mention, which may be straying slightly into different territory, is the emphasis that some companies put on really aligning their AI principles with their corporate values. And I think this really plays to the point about making sure that people within the company, in the workforce, identify with the way that the company is going to use AI. So, what I mean here is: if you're a pharmaceutical company, a petrochemical company, or a bank, your core values will be slightly different. The fundamentals will be quite similar, but they will probably be expressed in slightly different ways, because one company is going to focus much more on patients and treatment outcomes and health and so on, while another, in the petrochemical industry, may focus much more on safety, and so forth. And if you've done a good job, and your workforce really understand and get your values as a company, you want to make sure that your AI principles reflect those core values so that they fit with the ethos of your company.
Mark Blackwell:It makes a lot of sense. It's getting closer to that sense of purpose. And remember that statistic: 80% say it's in our strategy, but only 15% of the employees believe it.
SPEAKER_01:Exactly. Exactly.
Mark Blackwell:So maybe moving a little bit more towards the legislation side of things, if we may. And try to guide us here, because I read a lot of stuff in the newspapers and I don't know how true it is all of the time, but what's going on right now? Whilst we have an expert here, we may as well ask. So, to start with, am I right that Shell chose to model their governance policies on the EU situation? And yet I'm hearing that maybe the EU has overcooked it. Does it make sense for a company to model itself on one set of legislation, or is that completely impossible?
SPEAKER_01:Yeah, so this is a very interesting topic. There is a feeling amongst a certain group of companies and politicians, etc., that the EU regulation, the so-called EU AI Act, is too strenuous, is too strict, and could result in Europe being at a disadvantage when it comes to AI. There was pushback from a number of European companies; a group of, I think it was 150, I can't remember the number exactly, European companies signed a sort of petition, as it were, to say, we think, if something needs to be done, this is too strict. And obviously the US has been putting extreme pressure on the EU to water down some of the provisions in the EU AI Act. And then you had the Draghi report, which also suggested that this could be an impediment to competitiveness. So, in response to all three of those influences, the EU has put forward a proposed set of changes to the Act that would water it down in some ways. Now, whether those go through or not is still very much up in the air. The EU has a very complex legislative process. It's been put forward by one of the triumvirate of legislative bodies, if you like, that make up the EU. It's anybody's guess, frankly, whether it will go through, and if it goes through, how much it would be modified in that legislative process. So there is that concern out there. If you had asked me pre-Trump administration, I was pretty confident that the EU AI Act would become a sort of de facto global standard, in much the same way as GDPR for data protection and data privacy has effectively become a global standard for multinationals. If you talk to most multinational firms, they will say they more or less follow GDPR, because it's really quite painful to have different processes and standards in different countries. It's just costly to do that and hard to enforce. So what often happens is that you look for the strictest set of rules and you say, well, if we follow those, we're okay in that regime, and we'll certainly be fine everywhere else, because everywhere else is less stringent than that. I was pretty confident that that would happen with the EU AI Act. Post-Trump, I'm less confident. But interestingly enough, when I was in the States last week, I was talking to a couple of US companies that were multinationals, and they were still saying, we're going to follow the EU AI Act; it's just too difficult to have different rules all over the place. And if you think about it from the US perspective, it's tough, because you have different state-level AI regulations now. There's quite a disparity in the US. So it's really hard for a company in the US: oh well, if we're in California, we've got to do this; if we're in Texas, we've got to do this; if we're in New York City, we have to abide by this extra rule for hiring using AI; and so on. So having one standard which is quite high and quite tough is probably not a bad way to go. And then what Shell have done, and indeed I know other companies that do the same, is they basically allow exceptions if you ask. So effectively, if there's a particular country or region that feels that some aspect of that stronger regime is not good for their business, they can put forward an exception request and that gets reviewed. And there can be modifications or relaxations at the local level. But you start from that stricter starting point.
Mark Blackwell:Got it. So there are still ways to make it work, despite the changes, from what you're saying. Yeah, I think so. So that's one complexity I feared that is maybe not such a problem. One that I am getting a bit concerned about, especially having read your book, is AI agents. Maybe you could explain to the listeners and viewers why this might be an issue.
SPEAKER_01:Yes.
Mark Blackwell:Or may not be an issue.
SPEAKER_01:Well, let's actually go back even one step further. AI kind of begins with what I call traditional AI. Traditional AI is machine learning, or data science as it was called for a period of time. This is the use of large amounts of data and machine learning models to essentially predict patterns out of the data. It uses different approaches, but it's a little bit like a regression analysis, only far more sophisticated in the way that it will spot the patterns within the data. Then we had the introduction of generative AI. Generative AI is totally different, because generative AI actually creates new content. So it still looks at historical data, and that's how it figures out what to do. But what it does is it actually creates brand new text or brand new images, inferred from what it's seen in the past. That element of creativity in making something new, and the fact that it is probabilistic, so it is working out the most probable text to output based on the instructions it's been given and the historical data it's seen, or the most probable image that would answer this request based on all the images it's seen in the past, makes generative AI much harder to control. It's creating something new, it's probabilistic, and it's not like computer code, where you can determine exactly which bit of code led to which piece of output. You can't do that with generative AI. So, trying to ascertain why it said something, or why it produced this element of a graphic, you can't trace it back to a particular line of code. That makes it much harder to understand and much harder to control. With agents, you go one step further, because generative AI will only make recommendations to you. It will put up on your screen some text or a picture or whatever, but that's where it stops, and it's up to you what you do with it. So you've got that immediate human breakwater; there's a human in the loop by default, because it can't do anything else. The thing about an agent is that an agent actually has the capability to execute things. It looks at historical data, it infers a pattern, it reaches a conclusion in much the same way as traditional and generative AI. But then it can use tools as well. So it can go out and use third-party tools to do things, and then it can actually go and execute, if you give it permission. So imagine that you ask ChatGPT to come up with a recipe for a dinner party you've got coming up on Saturday, and you say, I've got one person who only eats fish, what's a nice recipe? You give some parameters: give me a nice recipe for six people and a list of the ingredients. Great. And then you go off to the supermarket and you buy the ingredients. With agents, you could say, go and place my order with my online supermarket and have it delivered. And by the way, I'm not going to be at my normal place, I'm going to my friend's house, so have it delivered over there. And the agent will be able to go through all those steps and actually make those commitments on your behalf. Now, the degree of autonomy that you give to it is up to you. But in principle, you could give it full autonomy, give it your credit card, and it will go off and just do everything. So, as you can see, the risk is higher because the impact it can have is that much higher.
There is no automatic human breakwater in there. It can go off and do something quite bad.
Mark Blackwell:It reminds me of my O-level probability: the multiplication of risks down the chain, where you might have something which has got an 80% certainty, but 80% times 80% times 80% times 80% gets to quite a low number very quickly.
SPEAKER_01:Exactly. And that actually raises another important point about agents, because in most cases, agents will be part of what we call a multi-agent system. So rather than having one agent that does everything, in many cases it's a lot more effective and more accurate if you break tasks down into steps and you train an agent to do each thing quite well. So I'll give you an example that I use in the book. Imagine you're an employee and you want to book some vacation time, and you say, I'd like to take five days off in the first week of August, let's say. Well, you might have one agent that goes and checks the corporate policies: how much vacation is your allowance for the year? You might have another agent that is trained to go and look in a particular database and see how much vacation you've already taken, what your allowance is, and whether you can take any more. And then you might have another agent that goes and looks at your colleagues, because maybe there's a rule that says we can't have the entire department away in the same week, so who else has booked vacation for that time? And the three of them then come back and say, yes, you can take the vacation then. And then you've got another agent that actually books the vacation in the vacation booking system. Because each one is doing a discrete task, it's typically going to be more accurate. But, to your probabilities point, if you only have a small chance of error in each agent, once you multiply that across two, three, four agents, your rate of error goes up very rapidly. And so again, you see this multiplicative effect of an error propagating through the system.
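To make the arithmetic concrete, here is a minimal sketch of the compounding effect described above; the per-agent accuracy figures are illustrative assumptions, not numbers from the episode.

```python
# Probability that a sequential chain of agents completes with no error,
# assuming each agent's errors are independent. Figures are hypothetical.

def chain_success_rate(step_accuracies):
    """Multiply per-step accuracies to get the whole-chain success rate."""
    total = 1.0
    for accuracy in step_accuracies:
        total *= accuracy
    return total

# Four agents that are each 95% accurate: the chain is right only ~81% of the time.
print(chain_success_rate([0.95] * 4))  # ~0.815

# Mark's O-level example: 80% per step collapses to ~41% across four steps.
print(chain_success_rate([0.80] * 4))  # ~0.410
```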
Mark Blackwell:And so what can you do about it? Is there anything else apart from just being extra cautious, or is this one of the high-risk areas to look out for?
SPEAKER_01:It's definitely one of the high-risk areas to look out for. Companies are experimenting with agents, and a few companies are actually starting to use them. You also have to be a little bit careful with the terminology, because I've come across some companies that talk about using agents where effectively what they're doing is RPA, robotic process automation. So they're not really doing anything that we couldn't do before; they're automating steps in a process in a pretty deterministic way, which we've been able to do for a while. Real AI agents are much more open in the scope of what they can understand and what they can do, and that's where they become more challenging. My view is that the only way we will control these, in the short term and the medium term, is by having a lot of human-in-the-loop controls in place. So you will typically have a set of policies that govern the way that an agent behaves. Those would determine, for example, that up to a value of 10 pounds it can do something autonomously, but after that it has to come back and ask me before it spends my money. Or maybe you say there's no threshold at all: I want to be the one who approves every transaction. So you can include in the policy what the agent is allowed to do autonomously, what it isn't allowed to do autonomously, and so on. And then there are other things you can do to try and make it more accurate. Define the tools that it's allowed to use: you can only use these particular tools, which are ones that you've checked. Define the data sources that it's allowed to work with, a bit like RAG, retrieval-augmented generation, where you have a quality data source and it can only go to that. So maybe instead of just, give me a recipe, you might say, well, I only want recipes from this particular chef, because I know their recipes are good quality, or something like that. Whereas otherwise it could go off and just pick anything from anywhere.
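As a rough illustration of the kind of policy described here, a spend threshold, an allow-list of vetted tools and data sources, and escalation to a human for anything outside those limits, here is a minimal sketch; the names, limits, and structure are assumptions for illustration, not a prescribed implementation.

```python
# A minimal, hypothetical human-in-the-loop policy check for an agent's
# proposed action: vetted tools, approved data sources, and a spend limit
# above which the agent must ask a human first.

from dataclasses import dataclass

@dataclass
class AgentPolicy:
    autonomous_spend_limit: float = 10.0             # e.g. up to 10 pounds without asking
    allowed_tools: tuple = ("online_supermarket",)   # only tools you have checked
    allowed_sources: tuple = ("approved_recipes",)   # RAG-style restriction to quality data

@dataclass
class ProposedAction:
    tool: str
    data_source: str
    cost: float

def review(action: ProposedAction, policy: AgentPolicy) -> str:
    """Decide whether the agent may act autonomously, must escalate, or is blocked."""
    if action.tool not in policy.allowed_tools:
        return "BLOCK: tool is not on the allow-list"
    if action.data_source not in policy.allowed_sources:
        return "BLOCK: data source is not approved"
    if action.cost > policy.autonomous_spend_limit:
        return "ESCALATE: ask the human before spending"
    return "ALLOW: within autonomous limits"

# The grocery-order example: an allowed tool and source, but over the spend limit.
print(review(ProposedAction("online_supermarket", "approved_recipes", 42.50), AgentPolicy()))
```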
Mark Blackwell:Got it. So we've got to be careful with agentic AI. As we are coming towards the end, are there any other unexpected risks that might not be so obvious to people?
SPEAKER_01:Yeah, let me name two, actually. There's one which people do talk about a bit but don't do that much about at the moment, although that is starting to change, and that is the sustainability impact, the environmental impact of AI. As I'm sure you have seen, Mark, training these large AI models, particularly the generative AI models, and running them as well, consumes vast amounts of energy and water, and it's pretty bad for the environment. Now, some companies are starting to measure the environmental footprint of the AI that they use. And I think, as with many things, that's the first critical step: if you can't measure something, you don't know what you need to do, and you're not going to do anything about it. And that is starting to happen. There are tools out there that people can use, and it's starting to become more accepted. So that's one thing. The other one that I worry a lot about is what we call automation bias. And this is the tendency that people have to over-trust AI. There have been studies, for example, where they took a group of radiologists and presented them with some AI interpretations of images, and they seeded in there some mistakes that the AI had made. And what they found was that, over time, the less experienced radiologists' accuracy dropped from 80% to closer to 20%. And even the experienced radiologists' accuracy dropped from 80% to about 45%. In other words, they became over-trusting of the AI over time and said, oh well, the AI says that's a tumor, or that isn't a tumor; it's probably right, we'll let it go through. And the really worrying irony here is that the more accurate the AI becomes, the greater this risk becomes. Because if the AI is making lots of mistakes, you'll probably notice them; you'll be on your guard. Whereas as AI gets more accurate and only makes a mistake 5% of the time, it's much harder to spot. And that is something where, again, I think we have to do training. We have to do things like the phishing attack emails that I talked about earlier. We're going to have to find ways to really keep people on their toes.
Mark Blackwell:Now that reminds me of another story. I don't know if you've heard about the gorilla and the X-ray story. Well, this is pre-AI: they got some radiologists to look at X-rays and see how accurate they were, but in one of them they put an image of a gorilla on the chest. And something like 83% of the radiologists did not see the gorilla. The learning is that we don't recognise unexpected things, because we look for patterns. But the optimist in me says, and I'm learning a little bit this week about Kasparov's law, which followed the Deep Blue chess match where he lost, that it is possible to get synergies between the human and the machine, so that if you design the workflow properly, you can get higher accuracy rates, not lower accuracy rates.
SPEAKER_01:I completely agree. And just one statistic on that, which I read in an academic paper: apparently humans can only really remember about 250 visual patterns. That's kind of the extent of what we can really hold in our memory. If you're talking about a tumour in a particular area of a lung, or something like that, 250 variations is kind of the limit to what you tend to remember. In other words, if the AI has seen even one example where this particular shape led to a cancer outcome, it will remember it, whereas the doctor, unless it's a relatively common configuration, is not likely to spot it. And therefore I think that's a perfect example of your human plus machine. If the machine can say to the doctor, hey, there have actually been 10 cases in history where this little pattern here did result in cancer, you might want to look carefully at that, then that feels to me like the perfect outcome.
Mark Blackwell:So thank you, Ray. This has been very interesting. And I've got one final question for you, because we've spoken a lot about businesses, but the businesses we've spoken about, if I'm not wrong, are typically large corporate organizations. What if I'm running a hundred or two hundred million dollar business, or maybe even smaller than that? I don't want to miss out on AI against my big corporate competitors. What should I do?
SPEAKER_01:You know, it's an interesting question, and in writing the book we thought very hard about this and really tried to design a framework which can be right-sized to the organization. And I might just refer back to my earlier example of the Local Government and Social Care Ombudsman. That's an organization of 300 people. It's not a huge organization. And they're going through very similar steps to the big organizations, but they're just scaling it down. They're doing what they can afford to do within their budget, etc. So my view is that the AI opportunity and AI governance are still equally relevant for smaller and medium-sized companies. You just have to figure out what's the right level of sophistication. An example of that would be: if you're a large corporate, you're going to need some kind of workflow platform to log all of your uses of AI and check that people have done all the right things at the right steps, because you'll have hundreds of them coming through the system. If you're a small enterprise, maybe it's an Excel sheet that somebody is in charge of updating and keeping track of. So there are ways that you can achieve the same ends, but at an appropriate sort of size.
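For a smaller business, the spreadsheet someone owns could be as simple as a use-case register like the sketch below; the columns and the example entry are illustrative assumptions, not a template from the book.

```python
# A minimal, hypothetical AI use-case register for a small business:
# one row per use of AI, with a named owner, the data involved, a risk
# rating, and whether a human stays in the loop.

import csv

FIELDS = ["use_case", "owner", "data_used", "risk_level",
          "human_in_the_loop", "last_reviewed"]

rows = [
    {"use_case": "Draft customer emails with a chatbot",
     "owner": "Sales lead",
     "data_used": "No personal or confidential data",
     "risk_level": "Low",
     "human_in_the_loop": "Yes, every draft is reviewed before sending",
     "last_reviewed": "2025-01-15"},
]

# Write the register to a CSV file that the named owner keeps up to date.
with open("ai_use_case_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```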
Mark Blackwell:Brilliant. Thank you very much, Ray. That gives us some confidence, I hope. I hope so. And I hope this podcast has been interesting to our listeners and inspires them to learn more. So, where can they find out more about you and where can they find more about the book?
SPEAKER_01:Well, the book is called Governing the Machine: How to Navigate the Risks of AI and Unlock Its True Potential, and it's available from Amazon and lots of other booksellers, so you can Google that. If you put in my name, Ray Eitel-Porter, that will definitely help you to find the book. Probably the best way to find me is on LinkedIn. My family is the only Eitel-Porter family in the world, as far as we're aware.
Mark Blackwell:Thank you very much, Ray. May I wish you and your family a very happy Christmas and all the best for the season.
SPEAKER_01:Thank you. The same to you, Mark. It's been a pleasure and a really interesting conversation. Thank you.
Mark Blackwell:Thank you. Cheers. Bye bye. Bye bye.
SPEAKER_01:Bye bye.