Arkaro Insights: adapt and thrive in complexity
Arkaro Insights: adapt and thrive in complexity brings together practitioners and researchers for honest, practical conversations on leadership, change and innovation in a complex, adaptive world.
Each episode gives B2B executives the thinking and tools to lead transformation, not just manage it — whether in agriculture, food, chemicals or any industry where complexity is the daily reality.
We explore four interconnected themes:
The AI Implementation Blueprint — how leaders cut through the hype and embed AI as a genuine organisational capability
The Human Edge — the neuroscience and psychology of change, creativity and decision-making under uncertainty
Outside-In Innovation — customer needs, market signals and the disciplines that turn insight into growth
Strategy for Complex Adaptive Systems — emergent strategy, integrated business planning and leading organisations that learn and adapt
Hosted by Mark Blackwell, founder of Arkaro, a B2B consultancy that works alongside clients in a collaborative 'do it with you' approach, leaving behind sustainable solutions, not just a slide deck.
"We don't just coach — we get on the pitch with you."
Connect With Us
💬 We'd love to hear from you! What topics would you like us to explore in future podcast episodes? Drop us a message or connect with us to learn more about Arkaro's approach.
🔗 Visit us at www.arkaro.com
👥 Follow our updates: Arkaro on LinkedIn - https://www.linkedin.com/company/arkaro/
📧 Email us at: mark@arkaro.com
The Steam Engine Mistake Companies Are Repeating with AI — Harvard Professor Joseph Fuller Explains
"There are no executives alive on the planet today that have ever overseen the implementation of a general purpose technology to their organisations." — Joseph Fuller, Harvard Business School
Most companies are making the same mistake with AI that factory owners made when electricity arrived in the 1880s — bolting the new technology onto old processes and calling it transformation.
Around 60% of companies are treating AI as a technology problem and handing it to the CTO. It is a management problem. In this episode, Joseph Fuller of Harvard Business School explains what to do instead — and why the companies that get this right may not be the ones you expect.
About Joseph Fuller
Joseph Fuller is Professor of Management Practice at Harvard Business School and co-head of the Managing the Future of Work project, which he founded. A former CEO of global strategy firm Monitor Group, he advises leading organisations on AI adoption, workforce transformation, and organisational design.
Joseph Fuller
LinkedIn: https://www.linkedin.com/in/josephbfuller/
American Enterprise Institute: https://www.aei.org/profile/joseph-b-fuller/
HBS Managing the Future of Work:
Project: https://www.hbs.edu/managing-the-future-of-work/Pages/default.aspx
Newsletter: https://www.hbs.edu/managing-the-future-of-work/newsletter/Pages/default.aspx
Podcast: https://www.hbs.edu/managing-the-future-of-work/podcast
Apple Podcasts: https://podcasts.apple.com/us/podcast/hbs-managing-the-future-of-work/id1395603706
Spotify: https://open.spotify.com/show/3zUxYNebA2rrEuH0IJcrJ2
Amazon Podcasts: https://www.amazon.com/dp/B0CJJPGGPX
LinkedIn: https://www.linkedin.com/company/project-on-managing-the-future-of-work/
Related Arkaro Insights episodes:
Stephen Wunker on AI and the Octopus Organisation: https://arkaro.com/ai-octopus-organization-stephen-wunker/
Charlene Li on Why AI Transformation Fails: https://arkaro.com/why-ai-transformation-fails-leaders-90-days-charlene-li/
Niels van Hove on AI and Decision-Centric Planning: https://arkaro.com/niels-van-hove-ai-sop-ibp-decision-centric-planning/
Marco Ryan on Rewire or Retire: https://arkaro.com/rewire-retire-ai-leadership-marco-ryan/
Connect with Arkaro:
🔗 Follow us on LinkedIn:
Arkaro Company Page: https://www.linkedin.com/company/arkaro
Mark Blackwell: https://www.linkedin.com/in/markrblackwell/
Newsletter - Arkaro Insights: https://www.linkedin.com/newsletters/arkaro-insights-6924308904973631488/
🌐 Visit our website: www.arkaro.com
📺 Subscribe to our YouTube channel: www.youtube.com/@arkaro
Audio Podcast: https://arkaroinsights.buzzsprout.com
📧 For business enquiries: mark@arkaro.com
Why AI Adoption Is Different
Joseph Fuller: AI is a general purpose technology. There are no executives alive on the planet today, I would argue, that have ever overseen the implementation of a general purpose technology to their organizations.
Mark Blackwell: Welcome back to the Arkaro Insights podcast. This is the show for B2B executives navigating complexity and the limits of conventional management. I'm your host, Mark Blackwell, and today we are diving into our most requested topic, according to a recent LinkedIn poll: the AI implementation blueprint. And by the way, the second most popular topic was the human edge, and I think we're going to see some overlaps today. Because joining me is a guest who argues that most organizations are currently fundamentally mismanaging the AI transition. Joseph Fuller is a professor of management practice at the Harvard Business School and co-head of the Managing the Future of Work project. A former CEO of the global strategy firm Monitor Group, Joe's research is at the absolute forefront of what he calls the great inversion. Today we're going to get practical. Let's see what emerges from the conversation, but I think we might find out why he thinks 60% of companies are currently setting themselves up for failure, why navigating the J curve of adoption often kills momentum but is critical for success, and why the human edge, skills like grit and judgment, are becoming the primary determinants of competitive advantage. Joe, welcome to the show. Thank you. Well, I'm really looking forward to this conversation. There's so much I know that you can say about this because you're such an expert, but maybe we can just grab a few people's attention to begin with a provocation. You've stated that a staggering 60% of companies are delegating AI implementation in a way that is probably not going to give them a return on investment. Can you tell me more about this?
The 60% Mismanagement Problem
Joseph Fuller: Yes, Mark. Well, there's a real bifurcation in the way that companies are pursuing AI. A significant percentage, we'd estimate around 60%, are still viewing it as an issue of technological adoption, as if it were yet another glorified SaaS tool that can just be bolted on the side of an existing management process, and that's the end of it. AI is a general purpose technology. There are no executives alive on the planet today, I would argue, that have ever overseen the implementation of a general purpose technology to their organizations. Some would cavil with that, would say that the creation of wireless mobility was a general purpose technology. I'm not unsympathetic to that, but for the sake of argument, I'm going to suggest that we've never had as universally applicable a technology emerging at anything like this speed ever in history. And the last time we had something as momentous as this was when George Westinghouse pioneered alternating current and suddenly we had electrification of facilities in lieu of steam power. So what we're seeing is that companies are almost artificially constraining their definition of what AI can do. And as they seek to do some experiments or implementations, often, I would argue, so they can answer the question "are you doing anything in AI?" in the affirmative, they launch those experiments missing two absolutely necessary factors for the experiment to succeed. The first is clean, reliable, well-tagged, and organized data to train the model on. And the second is trained staff. So we see this faulty adoption model led by companies who are not thinking about the technology at the right level, implementing it prematurely without preparing the ground, getting disappointing results, and interpreting those as an expression of the technology's relevance to them, as opposed to a failed experiment, failed because it was not set up correctly.
Mark Blackwell: Interesting that you say that, and so honest of you, that you suspect it's people making announcements or fear of missing out. There's a statistic that at least two of our podcast guests have come up with that I think is very relevant to that: 80% of CEOs say that AI is core to their strategy, but only 15% of employees believe them. I think the employees know what's going on on the ground floor.
Joseph Fuller: That's true. And also, if you look at active utilization of AI, it trails off around the 80th percentile of income in organizations. So I talk to many senior executives who are very adamant that they're knowledgeable about AI, they have a strategy, they're familiar with it. But as you get deeper into the conversation, they don't have the texture in their descriptions or examples of what they're doing that really causes you to believe that that representation is correct. I'm not saying it's disingenuous. I think, even perhaps more dangerously, they actually think they know what they're talking about. And I'm afraid some will rue the day they fell into that complacency, because those companies that are being run by executives who are very well versed in this are moving faster, and they're not approaching it the way that that 60% is.
Mark Blackwell: Marco Ryan was on the podcast for his book, Rewire or Retire, which hit exactly this point. Many of the senior executives at FTSE 250 companies he's working with think that summarizing a board report is the limit of it. They're scratching the surface. But if I may, you mentioned this story about alternating current. Can I just ask you to expand on that? Because I think it's a very good analogy for what we might be going through, to help people understand the phases and the significance of where we are.
Joseph Fuller: Well, I think that AI, as a general purpose technology, has to be integral to the process design. So if you're really going to learn how to deploy AI and get facile with it, you have to embrace the fact that it is a general purpose technology. And in the companies that I'm advising, the first thing we do is isolate one, two, maybe three of what I'm going to describe as main sequence processes. These are very important processes in the industry that company competes in, and they are integral to its competitive advantage. We then design, in a skunkworks type way, a new process which is designed to optimize the deployment of AI. We are not trying to incrementally improve their existing approach to that process by adding AI to it. This is where a term you invoked earlier comes in, the so-called J curve, where we're seeing that some companies deploying AI actually suffer temporary margin erosion. And the reason for that is because this technology is so fundamental, it's unwise for companies to say, I'm going to stop my existing, functioning, non-generative-AI-based process and flash-cut over to this new, very exciting but still unproven process. So what wise companies end up doing is they run both processes in parallel. They begin to first test the results of the AI-driven process in a neutral, simulator-like fashion, and AI is of course wonderful at this, and then gradually start introducing elements of the AI-driven outcomes to the actual operations of the business. But in running both processes at the same time, they're incurring extra costs, driving the economics of the adoption of AI temporarily negative, creating a J shape. But once that new process is stable and proven, as you start shutting down your old process, you erupt out of that nadir and start adding significantly to your margin effectiveness.
Data And Skills Before Experiments
Mark Blackwell: Yes. I'm fascinated by the J curve. And if you'll just give me a moment, I'll explain why I want to double-click on it. I talk to colleagues who are working in large multinationals and who've been tasked with AI transformation. And they're given a top-down order: invest in AI, we can't afford not to, but I want to see an ROI, a six-month, maybe nine-month ROI plan for all the decisions that you make. And this is a pretty common middle management challenge that I know a number of people are up against at the moment. At the same time, we've got coming up in May a podcast that I've already recorded with Eric Ries of The Lean Startup, who's now coming out with a book, Incorruptible, where he's challenging some of the fundamental beliefs about capitalism and the short-term versus long-term view. And the argument is that the sort of companies he would be supporting, typically family-owned companies or foundation-owned companies who can see beyond the quarterly earnings report, are the ones who are going to have value. If AI is going to be as transformational as you're suggesting, then goodness me, this is a real argument for being in a company that can stand the J curve. And from what I know, there are not as many as we'd like there to be. Your reflections?
Joseph Fuller: I think that's a fair characterization. We do have, in many industries and many capital markets, the public company as the dominant structure. And your audience, being sophisticated, will immediately know all the common arguments about the pitfalls of a public company. Certainly you can see consistent patterns of investment in consistently successful private companies. I'd point to examples like Koch Industries in the United States, the very large Texas grocer H-E-B, companies like Cargill in the United States, where they're managing for positive operating cash, but they're not so worried about period-to-period, year-to-year income. Multi-year-to-multi-year income, yes. But they're a little bit more patient with capital. They're a little less worried about temporary reverses. And that allows them to think more deeply about what type of structure, in this instance processes, we want to build that are going to last. One thing I find very strange about a lot of companies implementing AI right now is they are following this logic that we just described: well, I'm charging you as a manager with implementing some AI in your process, and I want to make sure I get at least a 15% rate of return in year one for that. And I want the first instantiation to be launched within six months, maybe because the board strategy off-site is in seven months. Where are all these numbers coming from? How would you know how to measure such a return? You're just providing that manager with strong signals: go and get me an easy-to-implement, no-brainer return. Now you might say, well, Joe, why wouldn't I want that? And of course you do want that, but you shouldn't be telling yourself at the same time that solving obvious problems that are readily addressed is a transformational deployment of generative AI.
It actually is much more like that lazy man's AI strategy of bolting the AI onto an existing process. Now, companies are very intimidated when I say some of these things, because what you are doing when you really pursue a transformational AI strategy is fundamentally altering the way you manage work. The job descriptions of everyone in that process will be changed, the spans of control, the metrics and rewards and governance issues. You'll need to make substantial changes in associated processes. If you're an overworked executive, whether you're a technologist or in operations or human resources, some Harvard professor shows up and says, well, how hard can this be? All you have to do in your consumer packaged goods company is set up a completely parallel process for marketing and advertising and social media, staff it, run it in parallel, rewrite all the job descriptions involved, measure the results. Their eyes are already rolling up in their head as they proceed to faint. But the problem is twofold. This is coming, and it's coming faster than anyone expected in terms of capability. And the danger of being late to this game is profound. If you're late keeping up with your rivals, you could find yourself in a position that is, if not mortally compromised, severely compromised.
Redesign Main Processes For AI
Mark Blackwell: Okay, so I think our listeners are just getting the message. This is not an implementation of an ERP system, where we go and do an 18-month project and get a few people on the side who are the techies and experts. Can we just be really clear, and maybe echo back to the story of the steam engine and the electric motors and that history? What time span are we thinking about for an AI implementation at, I don't know, let's call it a generic $500 million business, just so it makes sense?
Joseph Fuller: Well, it would very much depend on the industry and even the strategy of the company in that industry. But with that caveat, let's go to the story of the adoption of electricity just for a moment. Alternating current really began to become widely available in the late 1870s and 1880s. And the vast majority of industrial users, when they decided to make the change, had lots of reasons: it was more effective, the power was more consistent, and, strangely, they didn't have to buy so much coal or wood to fuel the boilers, and the boilers had this unfortunate habit, if you didn't handle them right, of exploding. Putting all those things aside, some companies, as they deployed the new electrical infrastructure, pretty quickly realized something. And this is very visible in my home region of Boston, Massachusetts, which was in the 1840s, 50s, 60s, and 70s an industrial powerhouse. The Union Army in the Civil War of the United States in the 1860s wore uniforms made in Massachusetts, wore boots made in Massachusetts, and fired rifles designed and built at the Springfield, Massachusetts Armory. Most of those textile and shoe mills were multi-story buildings, three, four, five stories high. Why? Because they could operate the boilers at lower pressure, because steam liked to rise: thermodynamics. But in so doing, they created a lot of logistical and materials handling headaches, because they had to move stuff up and down all the time. So the typical floor in the factory was 20% occupied by manual elevators and work-in-process inventory, very inefficient. Well, in some companies, clever engineers suddenly realized that the current being generated by what they would then have called a dynamo, we would call it a generator, just followed the wires. It could take right angles. It could go up, down, sideways. It did not care. The steam cared a lot.
So suddenly they could say, with electricity, we don't really care about verticality in the energy flow. So then they started saying, well, maybe we should just build a single-story factory and address our materials handling problem. Now, just like the companies we were describing a few minutes ago, almost all these companies, of course, were private. So then you had an owner-operator saying, well, we just built this factory 10 years ago, and I've got my entire fortune tied up in it, and I can't afford that. And so, in the shoe business and in the textiles business in the United States, starting in the early 20th century, it all ultimately started migrating to the southern United States. And today, all those beautiful mills with hand-hewn beams are WeWorks and brew pubs, and we're not making many shoes in Massachusetts anymore, except New Balance, the athletic shoe, the trainers company.
Mark Blackwell: So I've learned from this podcast that as a CEO, I can't delegate this to my CIO. This is going to be a broader management team running it. Yes. And I'm probably going to need to think again about the time frame; that's my big learning. This is going to be a multi-year implementation, but I've got to get moving on it very quickly. But I've also really got to think about new jobs, perhaps. We've lived in the world of hierarchical management for years. Can we afford to keep thinking that way, or have we got to think about things very differently?
Surviving The J Curve Of Adoption
Joseph Fuller: I think we do have to think quite differently about it, and I will have some new research coming out, probably in June, which looks at models of different industries and their overall structures. I think of it as the geometry of those industries. We've built a model in collaboration with Accenture Research of six industries, large industries like healthcare delivery and banking, where we've mapped how the companies are structured by level. So we can see the ratios between, let's say, upper management, middle management, lower management, and individual contributors. Most executives assume their organization is pyramidally shaped. That's not correct. A few industries like transportation and logistics are, but most have rather uneven shapes. Some would say almost like a young child's block tower. Sometimes there's a very big layer, a long block, just stuck in the middle of the distribution of other blocks. That's true, for example, of healthcare delivery and software platform businesses: a very large number of individual contributors relative to the total. When we overlay a separate model of how exposed those roles are to either automation or augmentation through AI, we see that these shapes are going to undergo a massive shift. We almost think of it as a time series sequence with several very important implications. Let me just rattle off a few. Many white-collar entry-level positions are highly exposed to being automated away. Where is my management of the future going to come from? I can't have someone with 10 years of experience if I didn't hire them nine and a half years ago. Most companies rely very heavily on experiential on-the-job learning to learn the market, to learn how you do your job. Training budgets in most companies in the US are down by over 50% in 25 years.
And what's left is often things like foreign corrupt practices training, compliance training, harassment training, all important things, mind you, but they're more compliance-oriented than skills-oriented. Well, how am I going to rely on on-the-job training when the life cycle of some of the technologies I'm deploying is shortening dramatically, and the half-life of a lot of those technologies is shorter than the time it takes a worker relying on on-the-job experience to master that technology? We've never seen anything like that, Mark. Companies are not good at training. They're going to have to get good at short, bursty training, and not rely on their vendors to do it for them, which is what they do now.
Mark Blackwell: For me, it's not just the time frame. I mean, it sounds great to talk about AI augmentation that leaves people to do the more interesting strategic, relationship-type tasks, or to apply tacit knowledge that can't be programmed easily. Fine, it's beautiful to describe. But the journey to get there is even harder, because that tacit learning, knowing that your second supplier in Spain takes holidays in July but your other supplier in Denmark goes in June, is not formally documented in any system; it is human tacit knowledge, acquired through years of experience. So we're now compressing that learning journey into almost nothing, as well as the IT learning.
Workforces Reshaped By AI Exposure
Joseph Fuller: Well, you're also touching on another point, if I may, which is that in a world where what I'm going to describe as contextual intelligence is really what you're relying on the human being to have, retention becomes much more important. Most of your listeners will have at their fingertips a kind of standard number for the voluntary or involuntary turnover rate in their industry. If you go to most American retailers, the minimum annual number they'll cite for turnover of their in-store personnel is around 40%. And some will say 70%. Well, if you're going to be relying on experientially derived contextual intelligence to oversee these technologies, I cannot afford to have 70% of the people walk out the door. And if you actually look at job descriptions in retail, they are rapidly having elements introduced about technology and about the management of information flows out of the store, which are making the job more complex and more dependent on someone who's knowledgeable about the fundamentals of the way our store works, our product works, our systems work. But most people, and my research demonstrates this unsurprisingly, leave low-paying jobs particularly when they have a supervisor they don't like. And often that supervisor isn't a very good manager, because they haven't been trained and invested in. And those low wages are a function of the fact that the workers occupying those jobs are low productivity. And they're not low productivity because they're lazy or foolish. They're low productivity because they're really recent hires, because they were hired to fill a job that was abandoned by someone who was just getting productive. The logic of it seems Monty Python-esque if you weren't engaged with actual managers.
But the whole logic of a churn-and-burn personnel system, of a you-have-to-learn-it-on-your-own system, of we're going to grind down or pinch the top of our talent pipeline because those lower-end white-collar jobs are easily addressable through AI and we'll figure out the systems effects later: those are all the attributes of bad management. Those are all the attributes of a company that's not, even at the most fundamental level, thinking about the intermediate-term implications of what should be observable already in their data.
Mark Blackwell: So, wow, you're confirming, and it's no surprise, what I've heard in past podcasts from Charlene Li and Marco Ryan: that the people factor is going to be big. So there are two big gaps I see emerging in executives at the moment. The first is the realization that the intensity of the HR challenge is just going to grow; you've made that very clear in this podcast. The other thing I sense, given the speed of AI adoption, is that executives who aren't curious about it don't know what they don't know. And then they get to conscious incompetence: I'm beginning to realize I don't know enough. How would you suggest a busy executive start tackling the problem of realizing where we're going, and how to start catching up before it's too late?
Executive Playbook For Catching Up
Joseph Fuller: Well, I'm going to use an indelicate image here, Mark, for a moment. The first thing I want the executive to think of in reflecting on our conversation is: please consider me the equivalent of your cardiologist. And I'm telling you that smoking three cigars after dinner, enjoying a second brandy with those, your heavy diet of red meat and whatnot, is in the process of killing you. And if those pains in your chest are not because you've been hunched at your desk all day, it's your cardiovascular system telling you that if you do not start making some changes, you are going to regret it in a very serious way. The second thing I would say is, just like that executive says, I don't have time to exercise, and I don't have time to sleep properly, and I'm obliged to smoke cigars and drink cognac because I'm always at business dinners with clients who smoke cigars and drink cognac, there'll be many excuses why you can't modify what you're doing. The market's going to modify what you do eventually, by making you unemployed, unless you respond to this. So the first thing you have to do is start setting aside some time. I try to set aside about four hours a week, from what I will say is quite a busy schedule, not as busy as when I was a global CEO, but quite a busy schedule, just to learn and to experiment. The second thing, and this is quite important, I think, for executives: some will be familiar with a model from some time ago where a reverse mentoring system was put in place, in which younger workers who were digital natives worked with people often two, three, four levels up the organization to get them to understand things as straightforward as their mobile phone or their iPad. That logic is going to permeate what's going on in businesses, but in a different way.
I think that many important teams, we'll just call them task forces, that get set up in companies in the foreseeable future will need to be configured more around age and experience diversity than gender diversity or racial diversity or anything else. In most companies, you and I have a meeting and we say, we need someone to look at this AI thing. Let's pull together a team of directors. They're between the ages of 30 and 40. They've been with the company for five, 10, 15 years, knowledgeable but young, on the make. It'll give us a good basis for evaluating them for further promotion. So we appoint the team of directors. And what do the team of directors have in common? Title, salary, usually experience, usually promotability. In the future, I think you're going to be putting a 25-year-old new salesman together with a 62-year-old, 40-year-veteran district manager, and everything in between, because I desperately need to blend two things: technical competence with contextual knowledge. And that will work for senior executives as well. Start working on a project or two, but not with your direct reports. With a CEO I was with several days ago, they are preparing for a quarterly call with analysts. And I had suggested that we take all the questions historically asked by all the analysts that will be on the call and use AI to synthesize what the consistent themes were, within each analyst and then across analysts. The cross-analyst themes are pretty general stuff, but specific analysts have their own little bugbears that they regularly come back to. Then we took those data and created synthetic profiles of each analyst, and had the AI generate specific questions they would ask in light of the data that's going to be released to them prior to the call. Then we created written answers and had the AI evaluate the answers, and also contrast those answers with answers to similar questions previously put to leading executives in our most relevant competitive set.
Now, I don't think this CEO was in any position to do any of that. And I don't mean to suggest, I mean, I think they're a very clever person, because they've hired me to advise them. So if they're not fools, what does it say about their decision to rely on me for advice? But in this instance, I think we've created a model that allows this quite distinguished CEO, whose name would be known to some of your listeners, to think about how to use this in a way that's actually going to make them much more efficient and confident going in. And today, actually, there's a meeting, which I won't be participating in, with the investor relations people and the CFO to talk through some more of the findings. So find those projects, whether with an outsider or an insider, work with them, make some time, and also just include it in your personal life. I will often sit in an oral dialogue with an AI as I'm driving, trying to learn about things that I don't know. Sometimes I just say, explain something to me that you don't think I know anything about.
Mark Blackwell
Joe, that's really good. By the way, you are now the fourth podcast guest who's recommended talking to AI while driving.
Joseph Fuller
Well, you know, Boston drivers are notoriously terrible, Mark, so I'll never ask it to, you know, critique my handling of the vehicle.
Mark Blackwell
No, but seriously, I think this has been a really important podcast, and I hope it's touched a nerve with some people. The cardiologist analogy was very touching for me, and very pertinent, so thank you for that. Yes, it's a wake-up call, but just like going to see the doctor, with the right management and personal responsibility you can manage these situations. So it's not a panic situation; it's more of a wake-up situation: make the change. It's a choice. So, Joe, brilliant. Really honoured to have you, and thank you. Where can people find out more about you and your work? You mentioned some research.
Joseph Fuller
Well, for those few who, after this performance, are interested in following up: we have two large projects at Harvard, both of which I founded and co-lead. One is called, as you mentioned, the Managing the Future of Work Project at Harvard Business School. We have quite a number of papers there, and a newsletter you can sign up for, where we also highlight interesting work by others. We also have our Managing the Future of Work podcast, with about 300 episodes on the website; it's the largest future-of-work-oriented podcast by a considerable margin. I'm also a fellow at the American Enterprise Institute, so my research is often posted on my biography page there, and we keep a relatively brisk pace of publishing. Our research, Mark, usually starts with a decision-maker and works back from that position to what insight or data we can provide to expand and shape their thinking. My work is not deeply academic, peer-reviewed journal work, which most of your listeners would find impenetrable.
Mark Blackwell
Yeah, that sounds fascinating. We'll make sure the show notes have links to everything you've just described. Again, thank you so much. We've only touched on a short summary of the things I found about you in my research, but it's inspired me to find out more, and I hope it inspires others. Thank you very much.
Joseph Fuller
Well, thank you, Mark, for having me, and I look forward to staying in touch.
Where To Learn More And Subscribe
Mark Blackwell
Definitely. Thank you very much, Joe. Bye-bye. Joe talked today about establishing age-diverse groups to work on AI transformation opportunities. You might want to listen to a recent podcast with Keith Sawyer, where we talk about group genius and the science of high-performing teams. If you want to dive deeper into AI structures in organisations, particularly decentralisation and adaptive organisations, I highly recommend the episode with Stephen Wunker on his book AI and the Octopus Organization; I think it would be a great companion listen to this one. Don't forget to subscribe to our YouTube channel if you haven't done so already, or on your favourite podcast platform, such as Spotify or Amazon. We've got some great guests to look out for, including Dr. Mark Bloomfield of the Cambridge Judge Business School on AI and the future of innovation and, as mentioned in the show, Eric Ries on his new book, Incorruptible, with more pointers on what is needed to be successful in a future world. I'm Mark Blackwell. Thank you for listening to the Arkaro Insights podcast.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
Just Great People
The Sixsess Consultancy
The Science of Creativity
Keith Sawyer
HBS Managing the Future of Work
Harvard Business School