The Consulting Growth Podcast

The Future of Professional Services: How Agentic AI Will Reshape Consulting Forever | Rob Price & Prof. Joe O'Mahoney

Prof. Joe O'Mahoney

Agentic AI is rapidly transforming professional services, yet many consultancies remain cautious about its implementation. In this groundbreaking conversation, Professor Joe O'Mahoney and Rob Price explore what happens when multiple AI agents work together as a team to tackle complex business challenges.

Rob explains how agentic AI differs from traditional AI applications by creating virtual team members with specific expertise, memory capabilities, and tools. These aren't simple chatbots – they're sophisticated systems that can handle complex workflows while maintaining human oversight. The most fascinating examples include agent teams already working in regulated environments for apprenticeship coaching and due diligence processes, reducing tasks from hours to minutes.

The conversation takes a profound turn when examining how this technology threatens the traditional consulting pyramid. As AI increasingly handles work previously done by junior consultants, firms face difficult questions about talent development, pricing models, and the very nature of their business. "It's not AI that will steal your job—it's the person using AI who will," warns Rob, highlighting the competitive advantage for those who master these technologies.

For young consultants, this shift demands a hybrid skillset combining technical understanding with enhanced human capabilities. The most valuable consultants will be those who build trust, understand context, and create innovative solutions that AI alone cannot. Meanwhile, established professionals might explore what the hosts call "influencer consultancy" – using AI to amplify their personal brand and impact beyond traditional firm structures.

The conversation concludes with a balanced view of consulting's future: short-term opportunities for those who embrace AI capabilities, alongside longer-term questions about how value and profits will be distributed. Whether you're leading a consultancy or considering a career in professional services, this episode provides crucial insights into navigating the agentic AI revolution that's already underway.

Prof. Joe O'Mahoney helps boutique consultancies scale and exit. Joe's research, writing, speaking and insights can be found at www.joeomahoney.com



Speaker 1:

Welcome to the Consulting Growth Podcast. I'm Professor Joe O'Mahoney, CEO of Equity Sherpa. We help owners of consultancies quadruple the equity value of their firms over a two to four year period. If you'd like to know how we do this, visit equitysherpa.com. Okay, welcome to Futurized. You may not recognise my voice, but I am Joe O'Mahoney. I'm a professor of consulting, and I am both the guest of Rob Price and will also be interviewing him, in a rather unique approach to podcasts that we're going to explain now. So I'll hand over to Rob to do a little bit of an intro to his listeners, because we'll be hosting this on both my podcast and Rob's podcast, which is called Futurized.

Speaker 2:

And you're right, Joe, we are breaking new ground. I'm not sure how this will work, but I'm sure we will find a way. We will; we're bright, we can do this. In a sense, this came together because we had a common interest in the impact, or potential impact, of this mad, chaotic stuff that's going on, largely on LinkedIn at the moment, around agents and agentic AI, and what its impact is on the future of consulting and the future of professional services in the broadest sense. So we thought, well, why don't we wrap it all up into one conversation, given it was something we were both interested in from different perspectives. For people who don't know Futurized: I run Futurized, which we set up to talk to people who were passionate about AI, and what became agentic AI.

Speaker 2:

But from the perspective of the CEO or the founder who's running the business, or the investor who's invested in the business, or the early adopter who's gone, that's for me, I'm going to do that, I'm going to take that capability on board to, as indeed one of my guests said, make my team incredible. Which I always loved as a focus on the positive impact rather than maybe some of the more fearful framing. So that's me and Futurized. As for the rest of life, I'm a two-year-in founder of, well, an agentic AI business, though of course we didn't set it up that way, because the term didn't even exist then. We set it up in the context of: how do we make sense of this in highly secure, highly assured organisations, at the heart of very serious businesses that need to be clear about what's happening with their IP and their data. And that translates quite well, I think, to most professional services businesses.

Speaker 1:

Definitely, definitely. So, just for your listeners who may not have heard of me, I'm a specialist in the growth and sale of boutique consulting firms and, as your listeners and my listeners very well know, there's a huge amount of interest across professional services generally in AI. To be honest, so far, and there are exceptions, it's been relatively mundane, and "mundane" would be the wrong word given it's a groundbreaking, life-changing, epoch-changing technology, but if I look around most of my own clients and the research I've done, it's mostly been used for content generation and bits of analysis, rarely tied together and rarely used agentically. But there is this thing on the horizon.

Speaker 1:

So, just a bit of background: I'm writing a book on professional services and AI, which I wish I'd never told anyone, because it's taking me much, much longer than I expected, because it all bloody changes every six months. So I thought it would be useful, Rob, as you're knee-deep in things, to start off by giving us, I guess, a definition, not a theoretical one, but, you know, a definition of agentic AI versus normal AI, because I think people get confused. But also, I guess, more practically, what does that mean for people?

Speaker 2:

I think if I start from the point of view of where we began to build agents, and why and how we did that: if I go back three years, I completely recognise that point about just using it to create content, because that's where we started. It was, what's this gen AI thing? How can we use it to write a report in the context of a consulting business? Now, where we started from was, an agent was almost a, well, we actually called them AI team members to start with.

Speaker 2:

Yeah, well, I think that was part of the reason for it: this is somebody sat next to you, virtually, in that sense. And literally we started by trying to mitigate the horrible things that large language models didn't do very well. Large language models didn't have memory in those days, they hallucinated all over the place, and they still do, and they are probabilistic things. So we tried to create niche, expert AI team members that used large language models to communicate, shall we say, or to create the things they're there for, but used other constructs, other databases, for the facts, if you like, that we were trying to use within the organisations we were working with. So I almost describe them as like Top Trumps cards, if you can imagine.

Speaker 2:

I've got a large language model or a small language model, it doesn't really matter; it's a model that I use to communicate. I've got a persona: I am this type of agent and I need to do these types of things. I've got deep knowledge, or deep facts, that I've been provided, supplementary to the general know-how I've got in the model, but facts around the thing I need to do. I've been given all sorts of memory, so I can remember forever, about anything. And I've been given tools; of course, today we could talk about things like MCP, the Model Context Protocol, but in those days tools were widgets that we created, or API calls into systems. And it's this virtual wrap that brings all of those together. That meant we'd suddenly got a project manager agent, or whatever it might be.

Speaker 2:

Now, actually, I don't like the concept of a project manager agent, because it's not niche enough, not expert enough in the thing it's doing. But for us, we started over two years ago with that definition of a layered build-up of an agent, and that's pretty much what the market now sees as an agent. Some organisations build agents, they code them; we configure them. But in essence it's a construct that uses a large language model in conjunction with other data sources and other tools. I like to think of them as almost the glue between the applications and the data sources that I've got within an organisation, for example, a method of interaction between them. I can give them work, and then, of course, the magic is giving groups of them work.
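
As a rough illustration of the layered build-up Rob describes (model, persona, knowledge, memory, tools), here is a minimal sketch in Python. The class and field names are hypothetical, chosen for this example rather than taken from his firm's actual configuration.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentSpec:
    """One 'Top Trumps card': the layers that make up a niche AI team member."""
    model: str                                    # LLM or SLM used purely to communicate
    persona: str                                  # "I am this type of agent and do these things"
    knowledge: list[str] = field(default_factory=list)        # curated facts, supplementary to the model
    memory: list[dict] = field(default_factory=list)          # long-lived memory of past interactions
    tools: dict[str, Callable] = field(default_factory=dict)  # widgets or API calls into systems

    def remember(self, note: dict) -> None:
        """Persist something the agent should be able to recall indefinitely."""
        self.memory.append(note)

# Example: a niche 'progress reviewer' team member rather than a generic project manager.
reviewer = AgentSpec(
    model="any-llm-or-slm",
    persona="Apprenticeship progress reviewer: summarise coaching sessions against the curriculum.",
    knowledge=["curriculum v3", "assessment criteria"],
    tools={"fetch_previous_reviews": lambda apprentice_id: []},   # placeholder for a real API call
)
```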

Speaker 1:

Yeah, okay. So let's pause there, because there are lots of different ways of creating the agent you're describing. One of them would be using ChatGPT's GPTs, Gemini's Gems or Claude's Projects to create a persona to do very specific things. It might be, I don't know, writing LinkedIn posts in your voice, and there's a huge amount of research that shows that has a significant impact. But your last statement there, about connecting these up together: I said this to you at the time, I hadn't seen, up until very recently, examples of tying those types of individual agents together into systems that work. About a month ago I did see something for the first time, and it seemed to work, and I thought, oh okay, this is now of the moment. So could you explain a little bit about that next step?

Speaker 2:

Yeah, I mean, you're right. Most of the world has started with agents and then sometimes daisy-chained agents together, so an agent produces an output that becomes the input to the next stage, and so on. But there are some organisations, and we're not the only ones, who've used more of a team dynamic: goal-oriented outcomes from an agent team. So one of the agents we've built is a control agent. It's got the governance angle of knowing who's in its team, and then, equally, it's got a set of agents that have particular domain expertise or method or other functions. I mean, think about a human team: if you were building the best team to do a particular thing, who would be in it? We can replicate that concept with an agent team and give them a goal, and that controlling agent can then logically determine how it might engage with the other agents in its team to do the task it's been given.

Speaker 2:

That's not unlike, of course, for anyone listening who's thinking, well, that's like the team I've got doing whatever it is. And, of course, the logic extends further, in the sense that there's no reason why you can't have people as part of that team.

Speaker 2:

So what functions are done by agents, what functions are done by people, and how do you bring that together? Now, part of that is things like intent handling: how do I connect to the other agents, what information am I passing, and what am I giving them that gives them the context? And in a serious organisation, where the work is not a repeatable process, it's knowledge work, there is a big difference between being able to build an agent team and getting that agent team to work repeatedly, reliably, consistently, time and again in that complex organisation, whether it's a private enterprise or a government department. There's still a big gap to close there, and that's a maturing of the market that many of us are focused on.
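
To make the team dynamic concrete, here is a minimal, framework-free sketch of a controlling agent that knows who is in its team, routes parts of a goal to specialists, and lets a human sit in the loop. The routing logic and agent names are invented purely for illustration.

```python
def keyword_router(task: str, specialists: dict) -> str:
    """Toy intent handling: pick the specialist whose declared skills best match the task."""
    scores = {name: sum(kw in task.lower() for kw in spec["skills"])
              for name, spec in specialists.items()}
    return max(scores, key=scores.get)

def control_agent(goal: str, specialists: dict, human_review=None) -> list:
    """Break a goal into tasks, delegate each to a team member, and log every hand-off."""
    transcript = []                               # every inter-agent message, kept for audit
    for task in goal.split(";"):                  # naive decomposition, enough for the sketch
        member = keyword_router(task, specialists)
        result = specialists[member]["run"](task.strip())
        transcript.append({"task": task.strip(), "agent": member, "result": result})
    if human_review:                              # people can be part of the team too
        transcript = human_review(transcript)
    return transcript

specialists = {
    "researcher": {"skills": ["research", "market"], "run": lambda t: f"notes on: {t}"},
    "writer":     {"skills": ["draft", "report"],    "run": lambda t: f"draft for: {t}"},
}
print(control_agent("research the market; draft the report", specialists))
```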

Speaker 1:

But principally, I think the power of an agent team, even if it's only two or three agents working together, is a game changer compared with a single agent. Could you give us an example of an agent team that you have seen working successfully, anonymised for the sake of your clients, perhaps, but something that gives people a flavour, in professional services perhaps, of what that might look like?

Speaker 2:

Yeah. If I give a couple of examples, and I'll try to use different ones from the one I've just described: we were working with, well, are working with, an organisation that delivers apprenticeships for professional services firms, so the apprentices tend to be mid-career, as opposed to just out of school.

Speaker 2:

And they talked to us, well, nearly 18 months ago now, about how they could use this concept, these things, to help make my team incredible,

Speaker 2:

going back to my phrase from earlier on the call. So actually that's an agent team that takes as an input a transcript of a conversation between an apprenticeship coach and an apprentice, understands the curriculum of the apprenticeship, understands the progress, because it's got the previous progress reviews, and can therefore automate the production of a report in a highly regulated, highly sensitive personal-data environment. Now, it's not a complex team, but it is a multi-agent team fulfilling a particular role, taking something that takes hours down to something that takes minutes, on a repeatable basis. Why I mention that one is that it's been used in a live operational environment for the past year, and I would imagine there aren't many examples out there of live operational multi-agent teams doing work of that nature in a regulated space.
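
A hedged sketch of the workflow described here: transcript in, curriculum and prior reviews consulted, draft report out for human sign-off. The function names and data shapes are hypothetical, not a description of the actual product.

```python
def draft_progress_report(transcript: str,
                          curriculum: dict,
                          previous_reviews: list[str],
                          summarise) -> dict:
    """Each step below could be its own agent in the kind of team described above."""
    session_summary = summarise(f"Summarise this coaching session:\n{transcript}")
    coverage = {module: (module.lower() in session_summary.lower())
                for module in curriculum["modules"]}           # crude curriculum mapping
    history = summarise("Summarise progress to date:\n" + "\n".join(previous_reviews))
    return {
        "session_summary": session_summary,
        "curriculum_coverage": coverage,
        "progress_vs_history": history,
        "status": "DRAFT - awaiting coach sign-off",           # the human stays in control
    }

# `summarise` stands in for whatever model call the real system would make.
fake_llm = lambda prompt: prompt.splitlines()[-1][:80]
print(draft_progress_report("Discussed stakeholder mapping.",
                            {"modules": ["Stakeholder mapping", "Ethics"]},
                            ["Review 1: good progress"], fake_llm))
```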

Speaker 1:

Yep, okay. And just talk a little bit about, I don't want to get too technical because I don't understand the techniques behind it, but the feedback loop is quite important, isn't it? So that after the agents have done their stuff, there's some form of quality measurement and then feedback into the algorithm, if you like?

Speaker 2:

Yeah, completely. So there's a range of things we should probably mention there. The first is, when it's doing work of that nature, I think it's really important to have, and it comes back to my indefinite-memory point, the ability to interrogate or audit anything that may have been done as part of that work. The beauty of a multi-agent team is that every line of communication between every agent, at any point in that work, is auditable, because it's there and it's easy to access. I was having this conversation earlier on another call, actually: if you look at the explainability you can display of that thought process, the reasoning that was happening, we can provide full transparency of that. Then the debate, of course, is how often you would look at it. It's important to know it's there, and it's important to know it's right, so logically you'd want to check it on an occasional basis, surely. But you almost get to a point of trust that says, actually, when we've been in to check, it's been okay, and therefore we can trust the output it's providing. Of course, people are still able to edit it anyway.
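
The auditability point lends itself to a simple illustration: if every inter-agent message is appended to a log as it happens, the reasoning chain can be replayed and spot-checked later. A minimal sketch follows, with invented field names.

```python
import json, time

class MessageBus:
    """Routes messages between agents and keeps an append-only audit trail."""
    def __init__(self, log_path="agent_audit.jsonl"):
        self.log_path = log_path

    def send(self, sender: str, recipient: str, content: str) -> None:
        record = {"ts": time.time(), "from": sender, "to": recipient, "content": content}
        with open(self.log_path, "a") as f:       # every communication line is retained
            f.write(json.dumps(record) + "\n")

    def replay(self) -> list:
        """Occasional spot-check: read back the full reasoning chain for transparency."""
        with open(self.log_path) as f:
            return [json.loads(line) for line in f]

bus = MessageBus()
bus.send("control_agent", "reviewer", "Draft the progress report for apprentice A123.")
bus.send("reviewer", "control_agent", "Draft attached; two curriculum modules covered.")
print(bus.replay())
```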

Speaker 2:

Going back to the, well, I never really answered the agentic bit, but if we're talking about autonomous decision-making: we're very much focused on how we deliver services that can have some degree of autonomous decision-making but still put the decision in the hands of the human who's in control of that process or that final output. The other thing, of course, is this principle of teachability, which I probably should have mentioned at the agent point earlier. As they're doing that work, the agents are asking themselves, have I learned anything new in doing this particular piece of work?, and capturing that. But also, because it's an interaction with a human as well, they're asking, have you got any feedback for me?, and therefore growing their knowledge of doing that piece of work. Then, of course, there's another big piece, which is: how do I know that what's been learned is right?

Speaker 1:

Yeah, so...

Speaker 2:

So in a sense you've then got to think about the governance behind that, which says: no matter whether it's something the agent has determined it's learned or something it's been told, I mean, Rob in his most amusing moments might go, ah, Mr and Mrs Agents that I'm dealing with, can you just use Klingon now to communicate with me in my work?, somebody has surely got to be able to say whether Rob is acting on behalf of the organisation and whether that is a sensible thing to do, or not. So I think there are lots of angles to consider in getting it right, so to speak.
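
Teachability plus governance can be pictured as a two-step gate: the agent captures what it thinks it has learned, and nothing enters its working knowledge until somebody authorised approves it, so a "please reply in Klingon" instruction never becomes standing behaviour. The names and the rejection rule below are illustrative only.

```python
class TeachableAgent:
    """Captures candidate lessons, but only approved ones become part of its knowledge."""
    def __init__(self, approvers: set[str]):
        self.approvers = approvers
        self.pending = []        # lessons proposed by the agent or taught by a user
        self.knowledge = []      # lessons an authorised person has signed off

    def propose_lesson(self, lesson: str, source: str) -> None:
        self.pending.append({"lesson": lesson, "source": source})

    def review(self, approver: str) -> None:
        if approver not in self.approvers:
            raise PermissionError(f"{approver} cannot approve agent learning")
        for item in self.pending:
            if "klingon" in item["lesson"].lower():       # reject unsanctioned instructions
                continue
            self.knowledge.append(item)
        self.pending.clear()

agent = TeachableAgent(approvers={"governance_lead"})
agent.propose_lesson("Reports must cite the latest curriculum version.", source="coach feedback")
agent.propose_lesson("Please communicate with me in Klingon.", source="Rob, being amusing")
agent.review("governance_lead")
print(agent.knowledge)
```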

Speaker 1:

Yeah, good, okay. I think this step between agents and teams of agents is the one that I would say consultancies, in fact all firms, struggle with the most. It's relatively easy to use an LLM straight through the interface; you know, a child could set up their own agent, give or take, and then test it and improve it. For making that practical step to teams of agents, what type of platforms or tools are available to help firms understand that and execute on it?

Speaker 2:

Execute on it in a fairly limited way, yeah, I agree. Firstly, it is hard to build teams of agents at the moment for that nature of work. You're seeing the rise and availability of orchestration platforms, the CrewAIs of the world, the AutoGens, which enable developers, or people more generally, to either string agents together or create agent teams. But I think it comes back to the embryonic phase a lot of this work is in, so those platforms will no doubt continue to evolve.
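
For a flavour of what those orchestration platforms look like, here is a sketch roughly in the style of CrewAI's Python API. Exact parameter names and defaults vary between versions, and a model API key is assumed to be configured in the environment, so treat this as an approximation rather than canonical usage.

```python
# Rough shape of CrewAI-style orchestration (pip install crewai); signatures are approximate.
from crewai import Agent, Task, Crew

analyst = Agent(
    role="Market analyst",
    goal="Summarise the competitive landscape for a mid-market consultancy",
    backstory="A niche research specialist on the agent team.",
)
writer = Agent(
    role="Report writer",
    goal="Turn research notes into a short client-ready briefing",
    backstory="Writes in the firm's house style.",
)

research = Task(
    description="Gather and summarise key competitor moves from the last 12 months.",
    expected_output="Bullet-point research notes",
    agent=analyst,
)
briefing = Task(
    description="Draft a one-page briefing from the research notes.",
    expected_output="A one-page client briefing",
    agent=writer,
)

crew = Crew(agents=[analyst, writer], tasks=[research, briefing])
result = crew.kickoff()   # the orchestrator runs the tasks and returns the final output
print(result)
```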

Speaker 2:

But there is quite a difference between providing an agent team to operate on a useful piece of work for a startup or a scale-up business, for example, or a business doing something where the criticality of IP, or of data and personal data, is less of an issue.

Speaker 2:

So it's the difference between those spaces. I've talked to big organisations who've said, yeah, we build agents, we build agent teams, but not in those spaces that you're building agent teams in.

Speaker 2:

So now, in a sense, if you've got this emergent phase of people playing, people getting used to the new technologies, understanding how we can build things of this nature, then it's about combining that with some of the things we've known and loved for many years as well, because we can't do it in absolute chaos.

Speaker 2:

I mean, to your point, it's very easy to create Copilot agents, for example, or to use many other tools, as you illustrated. But if I'm the CIO of a large enterprise, I don't want everyone in the business randomly creating agents left, right and centre; it's my shadow IT fears gone mad, in that sense. So how do we get a controlled balance with governance? The things we used to think about are still important, but we have to recognise there's this, well, agentic sprawl that we're facing, which can do some phenomenal work but has to do so in a governed way.

Speaker 1:

Yes, yeah, I think that's a really important distinction. And, you know, I guess the UK government announced recently that they were going to be rolling out agentic AI into various departments, including the NHS and the probation service and various others, which makes sense. But I guess there's always a hesitation around all the things you mentioned: confidentiality, complexity, incorrect decision-making and the governance that manages all of that.

Speaker 2:

I think that comes back to the work I mentioned earlier in the apprenticeship space. People have looked at that and gone, well, God, that can do case handling, for example, so roll it out across the probation service or wherever. And undoubtedly, whoever provides solutions of that nature can do useful work in those types of departments. But let's just remember that they need to do it in a way that is governed, with explainability, the ability to interrogate, to prove, and so on. It's got to be done in a way that still protects data and still uses those models in an appropriate and ethical way. So I think there's a massive thing about let's not stop innovation because of all the concerns; but the concerns are real, and therefore we have to keep innovating while conscious of those things, so we can do it in the right way.

Speaker 1:

Right, yes. Now, I have a load of questions around the technical architecture and business architecture side of things that I'm not going to ask, because they might not be that interesting to either of our groups of listeners. But I'm interested in your view on professional services. I realise your current firm doesn't specialise in that space, but you've certainly got a strong background in professional services, at Atos and Canopy, and in tech as well. And I'm interested in the next few years: if I'm the CEO of a boutique consulting firm, what should I be thinking about? One can never predict the future, as Popper told us, but what trends should be either keeping me awake or exciting me, I guess?

Speaker 2:

I think, I mean, so much of what I'm about to say is personal opinion, and if you get a load of different people you'll get a load of different opinions. You're right, I've spent a large chunk of my life in the consulting space, and certainly in the professional services space, and some of the things I'm seeing at the moment are reminiscent of earlier changes. What do I mean by that? I started off as a developer a very long time ago and did most tech roles in my early career, which gave me a good, sound base in a lot of things; I was an architect, I was a project manager, and so on. And then we had offshoring, so a lot of the development shifted offshore and career pathways changed in terms of how you become that senior architect or that senior programme manager; it's a different pathway. So the first point is, I think this is significantly more of a change than offshoring ever was to the technical space. And yes, we can talk about hype curves and whether things will work quite as promised, but underlying it is a fundamental shift in the way we do work going forward. Yes, we'll go on a bit of a roller coaster, no doubt, but it does change things.

Speaker 2:

I think everyone has to play. If I was a partner or managing partner in a consulting firm, for example, I'd want to know that we were getting stuck into understanding how these things worked. I remember, 15 years ago, going back to my Atos Consulting days, a very good colleague and I were doing some interesting things in the early days of digital and social media and their impact on enterprise businesses. I was a hands-on, play type of person, trying to understand what it could do, and he was an evangelist and a conceptual type of person. We had different approaches, and neither is wrong or right, it's just different. I'm explaining that to say: that's why I say get hands-on and start playing and understanding, because I think if you've actually gone in and used some of the tools we've talked about at various points on this call across your team, you've probably got a better understanding of what you should be advising.

Speaker 2:

If somebody says to me, as they did the other day, Rob, what should we be doing around the use of agents or AI in outbound sales, or the use of agents in development of code, or in any other part of the business function, I need to be able to answer, because I've done some things, I've evaluated some tools, I know the weaknesses. Now, do I see everybody doing that? No. Some organisations are very visibly making strides in getting their arms around this, building AI capabilities into the way they deliver consulting and accelerating that forward; and all I could say to the other consultancies is, you'd better hope you're doing that soon, because surely they're going to be able to do it faster, better and potentially cheaper than you are if you've got a totally human-centric approach to it all.

Speaker 1:

Yeah, good, thank you. Really interesting. Is this a useful point to flip the podcast? Well, we can just to and fro.

Speaker 2:

I think so, because one of the things I'm really interested in, in the context of what I've just given as an answer, is how does that change the business of consulting? I grew up, once upon a time, a long time ago, in this kind of pyramid-shaped model: bring lots of people in, give them tools to do the job, let them gain experience across a range of things, and they move up that pyramid and gain expertise. But I don't know that that's the model anymore. So what do you see happening? Well, I think the model's evolved a bit over the last decade anyway, yes, but where's it going next?

Speaker 1:

Yeah, really good question. You know, I spend probably too much of my time replying to AI sceptics, and scepticism about AI is important, but it's also really important to recognise the impact it is having, and that's only going to increase. Part of that scepticism is people saying, you know, AI is having no impact on the job market for consultants, and it's simply not true. I know several senior decision-makers in large consultancies who are very much saying, we don't need hordes of consultants at the junior level anymore, because AI is enabling us. It's less that they're replacing whole consultants, as you can imagine; they're replacing sufficient tasks of those consultants that the consultants can be freed up to do other things, and therefore, en masse, you need fewer of them. So I guess that's what I'm seeing.

Speaker 1:

You know, you're completely spot on. David Maister's original piece of work, and for those who don't know him, David Maister is, or was, the guru on professional service firms, had leverage at the centre of its model, and the leverage ratio of the firm was something that shaped strategically what the firm was, how it recruited, how much it charged for its work, how much it trained, the promotion prospects and all the rest of it. So I always said to my students, make sure you ask what the leverage ratio of the firm is when you're interviewing for one of the big firms. As you say, there have been trends disrupting that over the last ten years, especially the last five. The collapse in the cost of building products, and the greater multiples that product-based businesses get when they sell, mean that a lot of firms have been shifting to a mixed model where you've got the advisory bit and you've got a few tools; and clients have been pushing back on, you know, hordes of juniors coming into their firms. So there is an increasing narrowing of the base of the pyramid at lots of firms. And then you've got firms like Accenture and Tata and IBM and Infosys and Atos, all of whom are developing, or have developed, good, profitable products alongside everything else. So this isn't a new trend, but it is certainly accelerating, because what we're seeing is AI increasingly capable of doing a lot of the work at the bottom of the pyramid.

Speaker 1:

That has several implications. One, from the client side, is that they're less likely to want to pay by the hour. Now, if you're a particularly dishonest partner, you might say, well, we're just going to pad out the hours for the same amount of work, and I've seen that in some of the larger firms especially. But as with all tech, your competitors will start to offer cheaper prices and higher quality, and clients will start to get irritated if they feel they're overpaying. So there is this pressure on the billable hour which, again, isn't a new trend, but it's going to accelerate because of AI,

Speaker 1:

in favour of fixed-price or value-based pricing, or outcome-based pricing. As that model shifts from a wide-based triangle to more of a narrow-based triangle, you're getting fewer people in at the bottom. And again, this isn't completely new. When you and I joined consultancy, it paid roughly on a par with all the other professions, with tech and with banking and all the rest of it, if not better if you were in the right consultancy. That's no longer the case, so consultancy salaries for the best people are actually not very competitive unless you're at McKinsey or Bain or Oliver Wyman. So there's been an increasing talent challenge, and that's going to be exacerbated in the middle of the organisation. In other words, as you go up the organisation, you need to be able to sell. So where the hell are you getting those people from?

Speaker 1:

Where are they getting their skills from? Now, this is all more future-orientated, because we haven't yet seen that declining group at the bottom equalling less choice at the top in who you can pick for your senior roles; we haven't had enough time for that to have an impact yet. But it is certainly something that the best consultancies are thinking about.

Speaker 2:

So there's a number of things that immediately jump to mind, and we can't do them all in one question, so I'm going to try and split them up. The first one is: do you think there's a new concept around agent leverage, or something of that nature? That is, the health of your organisation will be greater if you've got a whole host of performing capabilities your organisation has created, which encapsulate your ability to deliver strong advice and direction faster. And therefore, if you've got this army, I mean a McKinsey-style army, of agents doing the work, then that's still leverage; it's just that they're agents, not people.

Speaker 1:

Yes, definitely. I mean, you wouldn't want to count up the number of agents as the basis of your leverage. Traditionally the leverage ratio was the number of partners to the number of consultants. I would always just take it up to the top and look at profit per partner, or profit per share, or however the firm is legally structured, at the end of the day. And it's going to be quite nice for a while, I think, for partners, because partner profitability hasn't been what it used to be.
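
For anyone unfamiliar with Maister's leverage idea, a toy calculation with entirely made-up numbers shows how the ratio and profit per partner relate; none of these figures describe any real firm.

```python
# Entirely hypothetical numbers, purely to illustrate the arithmetic.
partners = 10
consultants = 80
leverage_ratio = consultants / partners                         # Maister-style leverage: 8 to 1

fee_income = consultants * 200_000 + partners * 400_000         # assumed annual billings
costs = consultants * 90_000 + partners * 250_000 + 4_000_000   # assumed salaries plus overhead
profit_per_partner = (fee_income - costs) / partners

print(leverage_ratio)        # 8.0
print(profit_per_partner)    # 630000.0 on these made-up numbers
```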

Speaker 1:

Generally speaking, there's been a big squeeze on day rates, procurement have come through, people are asking for more money, and, certainly in Western countries, the growth of consulting has tapered off a little. So those partners who bought in for a million, say, at the large firms have been really struggling to get that back. I do think this offers opportunities for them, at least in the short and medium term, because they're not having to pay large salaries to make larger profits. In the longer term, though, and hopefully we might agree on this, there's a fundamental challenge to the business model of consulting when a client can perhaps have direct access to somebody who is smarter, quicker and cheaper than a McKinsey partner.

Speaker 2:

Which brings me to another question, which might fall into the category of, Rob, that's slightly weird as a question, but hey. One of the things I'm seeing embryonically is the ability to create agents or agent teams that encapsulate deep knowledge and expertise in a particular area, supplemented with the know-how of a deep expert. So let's imagine we could create an agent team capability for Rob or Joe or any partner or expert within one of those firms, and they're motivated to continue to train it and get that expertise right. For me that's an interesting concept, and there isn't a time limit on it. It's not limited by my availability either, because I can only be in one place at a time; and indeed, if I chose to retire tomorrow, it could still operate. Am I in fantasy land, or are organisations out there thinking about that?

Speaker 1:

It's interesting, isn't it? I mean, I'm thinking about it. Let me give you a very good example. I went into a client recently, great client, nice growth, interesting company, and halfway through one of the meetings with the board the owner said, well, let's just see what AI Joe thinks. There was a long pause, and I said, AI Joe? And they said, oh yeah, we've got this bot that's trained up on your books and your writings and your blogs and all the rest of it.

Speaker 1:

And I sat there nervously, worried about what type of answer it would give; you know, would it be better than the answer I gave? But I guess we mustn't forget the human element of this. Even if the answer that bot gave was better than my answer, at the moment people still prefer to pay to get the answer from me, because this is high-risk.

Speaker 1:

Now, it may well be that we consider the answer from the Joe bot as well, and certainly my work is supplemented by, I wouldn't say agents exactly, but, you know, I've got an analysis bot, so if I've got a whole load of interview transcripts,

Speaker 1:

I can run it through them and compare. I spent a long time comparing it to my own analysis of interviews, which obviously, as a professor, I've done a huge number of times, and it will often find things that I haven't; I'm quite happy to admit that. But when it comes to actually giving strategic advice, I do think there is something about getting it from the human, and about having it, even if it's generated by the system or the bot or whatever you want to call it, sense-checked by a human. Now, I guess there is a question, which is: is that just a matter of trust? Over time, as people get more familiar with bots, will there be a point where they say, actually, I'd rather trust the answer from AI than from a doctor, or from AI than from a Joe, or from AI than from a Rob? And I don't know what happens then.

Speaker 2:

Does that bring us on to the driving reason for the engagement of a consultant in the first place? Because, in a sense, sometimes it's almost an insurance thing, isn't it?

Speaker 2:

It's, I need validation that the thing I wanted to do in my business is right, because, look, it's supported by Deloitte or McKinsey or whoever; it almost doesn't matter what they say, it's just, here's the report that said what I wanted it to say. Versus, I've got this insurmountable problem and I need to turn around my business. So do we need to think about the driving reasons why organisations engage consultants, to then determine which of those are about human trust versus which are about innovative, groundbreaking ideas?

Speaker 1:

Yes, yeah. You know, I used to think consultancy was very much a logic-driven phenomenon, and to some extent the analysis of a problem is. But everything around it is politics, personalities, psychology, human-to-human sales and all the rest of it, and I don't think we'll be getting rid of that anytime soon; I don't think clients or consultants want to get rid of that anytime soon. But there is this question. Like, you know, my dad has dementia, and I use AI routinely to feed in symptoms, get a diagnosis, get a prognosis, all the rest of it, simply because it's generally better than the advice we get from doctors. So with the more transactional work, the transactional questions, I think there is going to be a growing role for clients going straight to the AI. But with the more strategic, ambiguous and complex problems, the human will be in the loop, for the foreseeable future at least.

Speaker 1:

But your question prompts another train of thought, which is: do we need companies to satisfy that? If I'm Joe O'Mahoney and I've got a brand and a whole load of, you know, bots and assets and data that I've researched, what can McKinsey offer me? There's the brand of the company, but if I've got a great brand myself, are they getting more out of me joining them than I am? So that's an interesting question: whether AI enables an individual to be more profitable than they otherwise would be.

Speaker 2:

I think that's a really interesting question, and it's one we've been chewing over recently. Our latest agent team, probably one of the most exciting ones, is a due diligence agent team. In a sense, it does the due diligence, hooking into a data room for private equity or M&A, which of course is heartland for many consultancies; that's the type of thing they would do. But of course you could imagine, in that space, that personality or approach matters. I mean, that's the space where you've typically got a trusted expert who knows what they're looking for.
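
A hedged sketch of what a due diligence agent team hooked into a data room might look like in outline. The document names, checklist and red-flag logic below are all invented for illustration and are not a description of any real product.

```python
DD_CHECKLIST = {
    "financials": ["revenue recognition", "deferred revenue"],
    "legal": ["change of control", "pending litigation"],
    "people": ["key person dependency", "attrition"],
}

def due_diligence_team(data_room: dict[str, str], summarise) -> dict:
    """Each workstream acts like a specialist agent; a lead agent compiles the findings."""
    findings = {}
    for workstream, checks in DD_CHECKLIST.items():
        docs = [text for name, text in data_room.items() if workstream in name.lower()]
        red_flags = [c for c in checks if any(c in d.lower() for d in docs)]
        findings[workstream] = {
            "documents_reviewed": len(docs),
            "red_flags": red_flags,
            "summary": summarise(" ".join(docs)[:500]) if docs else "no documents provided",
        }
    findings["status"] = "DRAFT - for review by the deal lead"   # the human expert signs off
    return findings

fake_llm = lambda text: text[:60] + "..."   # stand-in for the real model call
data_room = {"Financials_FY24.txt": "Revenue recognition policy changed in Q3...",
             "Legal_contracts.txt": "Master agreement includes a change of control clause."}
print(due_diligence_team(data_room, fake_llm))
```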

Speaker 2:

I think it brings me back to my Top Trumps cards. If I've got this set of Top Trumps cards of agents that I can deploy, which have particular expertise from individuals or groups of individuals, does that enable me to create a new economy, one that wasn't there before, in the way things are approached? And I think it touches on a bigger question for me. When we started the conversation, probably before the call, I mentioned that my daughter had just graduated and was starting in a consulting business. What would you advise? I mean, there are many people leaving universities who would have had hopes of moving into that sector; they would have targeted it. What advice would you give to somebody just starting out? What kind of things might they focus on that are more likely to still be in demand, for example?

Speaker 1:

Yeah, this is really interesting, because for years I told my students to avoid the tech, to avoid learning how to code, because you'll get stuck in a basement at IBM somewhere churning out code and you won't talk to anyone; there's a nice job to be had there, but it's the human skills that will get you promoted. So I think there are two aspects to my answer. The first is the one I would always give: the human skills, building relationships, the ability to sell, developing trust, developing a reputation for credibility and honesty. That is still the basis for quality relationships and sales, but also for being a good person, I think. So invest in that, and you do that through practising, getting mentorship, getting strong feedback and all the rest of it. Increasingly, those are the things AI won't be able to do, in my view, because they require touch, face-to-face human interaction, reading someone's eyeballs very closely and them reading yours; it's something tech is going to find very, very difficult to replace. But the other bit of it means actually going back on the advice I used to give, which was don't touch... no, I wasn't that extreme, I didn't say don't touch tech, because obviously the CTO route was created for bright people who want to go up that route.

Speaker 1:

But you've got to learn this stuff. You've got to learn how to use AI, how to prompt effectively, how to use APIs, how to build agentic teams, not just because it will save you a huge amount of time and probably improve the quality of your work, which is what the evidence shows, but because the organisation will see you as an increasingly invaluable resource. And I think that's quite an important point, because the implementation of AI and agentic teams to its full potential, and I'd be interested in your view on this, is going to take decades. So this isn't a one-hit wonder; it's an ongoing process of task replacement, improvement and evolution.

Speaker 1:

You know, we don't live in static firms. When workers want to bring a company down, they work to rule, and working to rule is just doing what the organisation thinks needs doing, or what's in your contract; but there's all that fuzzy stuff around the formal processes that makes an organisation work. So really understanding this is going to put you in a position of greatness, and you and I probably both saw this in consulting, where you'd get someone who could do Power BI or could build a database and all of a sudden they became, you know, that great person to know. So yeah, those are the things I'd say, but I'd be interested in your take on this as well.

Speaker 2:

If I knew the answer then that'd be brilliant. But predicting anything at the moment is hard, I think. If I look at it in the context of building agent teams in terms of the work that we do, we had a conversation recently and said what does it take to do that? So what kind of skills that? So what kind of skills? And one of them was computer science and that wasn't to be a deep computer expert, but that conceptually to understand how these things kind of link together etc. Second was about prompt.

Speaker 2:

We are very focused on removing the need for prompt expertise from the end user in the business; the world does not want an army of prompt engineers in order to do useful work, so I think we have to hide that behind the scenes, but you need to understand it. Then org design, as in the psychology of teams and people: how do you build a great team? And there aren't lots of people in the market who come with that mix. Actually, I'd overlay something that you said: people. You've got to engage, because that's the thing at the core of this, understanding how a business operates, engaging with a business to understand where that value can be found. So I think those are maybe the four elements that we're seeing. Now, undoubtedly, to your point earlier around AI and agentic AI,

Speaker 2:

AI has many elements to it, so there are people who understand the maths and the machine learning or the compute of it, et cetera. And I can imagine that if organisations are going to have armies of agents, you probably need a load of people who can build those agents as well, and who understand the data structures and the way information is stored outside the models, to be able to optimise things like performance and quality of output and minimise degradation in the models. I was speaking to somebody earlier today whose son was doing psychology at university, and I think that's a good call as well, actually, because, again, we're probably in slightly weird territory if we start talking about the psychology of agents, but the whole interaction of agents and humans is, I think, a key thing to explore. So there absolutely has to be a role for people, for humans, in the context of those firms.

Speaker 2:

Consulting firms have always had to evolve and iterate around what the market is demanding in terms of services or capabilities or expertise, whether that was technology or otherwise. So I suppose there is an argument, or can we make the argument, that this is just another one of those, just on a bigger scale than perhaps we've seen before.

Speaker 1:

Yes, it's interesting, and I think a lot of it depends on the type of work being done. If it's more processual work, where there are fairly fixed forms of analysis that lead logically to a conclusion, fine. I remember one of the first pieces of work I did as a corporate consultant was investigating the 3G business case for a company. I went away and did my analysis, and I was quite proud of it; I went to the sources, did the modelling, all the rest of it, and in effect there were very few scenarios where a firm would come out of buying a 3G licence making money. I mean, they were costing $7 billion a time just for the licence, and another $7 billion to set up all the networks and the infrastructure and all the rest of it. I gave my report to the partner, and the partner invited me to the presentation, where he completely flipped the message on its head and said, you should definitely buy this, it's going to make you a huge amount of money in the long run.

Speaker 1:

These are the risks, but this is my recommendation. And afterwards, and he didn't tell me about this either, he took me for a beer and, I won't go into the details, but basically said, look, your analysis was right. But with this firm, with this owner, in this context, and the owner had an internationally powerful reputation, he's going to come out of this making money, because of his political context, because of the type of lending he can get from banks and all the rest of it. And that's something AI would miss, at least at its present stage.

Speaker 2:

So I think with those more open, complex, messy questions, where the answer doesn't always derive from the analysis that's done, things can be a little trickier, because it's a political environment and because, to your point, it's not always driven by the facts; it's driven by the situation and the context of why they engaged in the first place. Yes, I think that's fair. And maybe, in a sense, it comes back to, I don't think either of us is saying on this call, don't go into consulting, it's doomed.

Speaker 2:

We're just saying, look, don't go in with a book from 1995 that says this is how to run a consulting business, because it ain't going to work anymore. But the book on how to run a consulting business in 2028 or 2030 hasn't been written, well, if it has been written, let's hope it was only published yesterday. It's changing so dynamically.

Speaker 1:

You're speaking to my pain at the moment, Rob; this is my book, unfortunately. Yes, and there's a lot that's going to change. We shouldn't underestimate the extent to which consultancies create their own markets through thought leadership. McKinsey's perfect at this: they interview a whole load of CEOs, they summarise what the top-performing firms do, they put it into a ten-step process, brand it, do a Harvard Business Review article on it, and all of a sudden they've created a market. And that's another human thing, the creative human. The analysis of it, fine; perhaps even the writing of the report, fine, do it by AI. But the ability of that creativity to spot where the new trends are is perhaps one of the last things that will go.

Speaker 2:

I guess the only thing that I think about is over the weekend I was reading about ai had created a new antibiotic and kind of it had been now subject to years of testing but showed promise in early kind of tests etc.

Speaker 2:

So we know that AI can create things we would maybe have found one day but just haven't got to yet. And that must be true of business problems as well. It's all very well saying, oh, I've got my Lean or Six Sigma and I'll optimise that process. But when AI comes along and says, hey, I've got a different way of doing it, and look, the output's pretty much the same, I've just created a different way of getting there, that's going to happen. There's no guarantee that a process that's evolved over years is optimal in the context of, if I were doing it today, how might I go about it?

Speaker 1:

Yes, and that leads into a lot of the scenario modelling that you'll have seen, which AI is really good at: you know, if X, then what does Y become? The ability of AI to do that is fantastic.

Speaker 1:

But I guess I'm speaking more about the non-deterministic problems. In other words, McKinsey's war for talent. McKinsey invented the phrase "the war for talent". It wasn't true: the data behind the book was wrong, the analysis was done badly, and a lot of the case studies failed; you know, Enron was a big proponent of the war for talent. But people are still talking about it as if it's a real thing. So when I said that McKinsey create their own markets, that wasn't necessarily a good thing; it's a branding thing. And I think AI might perhaps struggle with that more creative side, and I don't mean extrapolating from a data set to find something new or to create new hypotheses.

Speaker 2:

Yeah. So I had a conversation on the Futurized podcast a few weeks ago, and my guest, who is the CEO of a business in Kenya, in Africa, said, it's not AI that's going to steal your job, it's the person who's using AI who's going to steal your job. And I guess that holds true of the consulting conversation, of professional services. Somebody who is comfortable using the power of AI in any of its various forms, including agents, including multi-agent teams, is logically going to be in a better position. They're still going to have to decide how they use it; they're still going to have to decide what they're going to do. So if we've got an agent team that does marketing campaign creation and ideas, I'm still going to look at its output and go, do I agree with that recommendation, or do I actually like that idea you've discarded, because actually that's the thing that's going to go viral? We don't know. But I suppose, is that true? If we were to give advice to any consultant or consultancy, is it as simple as, make sure you're using it?

Speaker 1:

Make sure you're using it in the right context, in the right way, for the right problems.

Speaker 1:

I think there is still quite a big judgement call that people struggle with, including myself, as to what AI is good at and what it's not good at, and where the risks are, and that can be a real challenge.

Speaker 1:

There was one thing in the back of my head I wanted to get out when you talked about advice for the juniors, but I think it's also advice for the seniors.

Speaker 1:

Whilst you were talking, I was realising the amount of AI I have used, not always consistently, but for sales and marketing and branding and analysis. And it's entirely feasible for consultants who can use not just AI but digital media generally to build their visibility, to drive up their day rates, not that we'll be using day rates, but to drive up their pricing, let's say, to extend their reach, to multiply their impact. And I think that might potentially attract certainly the middle group of consultants more than perhaps joining a firm. If you can really build your brand online and get AI to multiply the impact you can have, especially if you're doing fixed pricing or value-based pricing, then somebody like your daughter in a few years' time might find it more profitable and more interesting and more liberating to work by herself with a team of AI agents rather than for the big firms.

Speaker 2:

In a sense, the rise of the influencer consultancy? Yeah, because logically we can see some aspects of that playing out as that generation starts to do things differently, so why not? It comes back to the brand thing; it's just a different brand than maybe we'd have looked at in terms of the way we operated. Having said that, it's 15 years ago now since, in the consulting business I was in at the time, I gave everyone an objective that said, I'm going to measure your online score, and I had an algorithm for how I would determine it.

Speaker 2:

And the point there was to create more personal noise, because they needed a presence. What's the first thing I did if I was buying consulting? I went online to check who I was being given, so it's best to have a voice. So this is almost just an accelerated way forward. I think the key thing for me, and I guess it wraps all of the conversation up into one, is: how do you feel about the future of consulting? Are you excited? Are you fearful? Are you waiting for a bloodbath in terms of the collapse of organisations, or do you see some firms driving through it and taking massive opportunity? How do you feel?

Speaker 1:

That's a good question, and I'm going to ask you the same one afterwards. I like that. I, gosh, can I say this? I don't care about consulting as an industry. I care that business leaders are making the right choices on the basis of evidence, and that's both leaders in consulting and also their clients. If consulting as an industry disappeared tomorrow but people were making good decisions on the basis of reasoned evidence, ideally for the better of the company, but also the better of humanity and society and all the rest of it, I'd be happy with that.

Speaker 1:

So I don't have a vested interest in consultancy. Obviously I've got a vested interest in my own specific clients, and I want them to do well. If I think about my clients, I think there's a huge opportunity for them to grow, become more profitable and become more visible over the next five years. Longer term, I think the client-to-algorithm business model is going to be more prevalent. I think you're going to get far fewer people in consulting firms, and I think you're going to get the rise of AI-first consultancies that perhaps, as you posit, have one or two very experienced partners, but the rest is AI and automation. So that's the positive: I think there are opportunities for better advice, I guess, is my short-term view.

Speaker 1:

Longer term, not just for consultancy but for the economy as a whole, unfortunately, I see AI as returning capital and revenue to the owners of the AI. You know, Sam Altman recently announced that they were moving into consultancy. And as with Uber and Airbnb and all the other algorithm-based business models, the profits get returned to offshore companies in "tax-efficient", in inverted commas, islands, or Delaware. I think the same model is going to play out with consultancy, and I see that as a bad thing, because it centralises expertise, it offshores expertise, it takes revenue into offshore tax accounts where governments can't benefit from it, and it leads to unemployment, especially among the middle classes. I don't see those things as good generally. But I've been notoriously bad at predicting the future, Rob: I got Ukraine wrong, I got Brexit wrong, I got Trump wrong. So you shouldn't listen to me.

Speaker 2:

Perhaps. I'll be interested in your take on this as well, Rob. So, that sounded quite dystopian, but with a caveat that said you might get it wrong, so it might be okay after all.

Speaker 2:

My team once wrote an article which asked, what would we all do if the IT actually worked? And I always thought that was a pretty good observation, really, because, to your point, if everything worked and every organisation did the things it needed to do, you wouldn't need the consulting industry or the IT services industry; it would just operate. So I guess I always have a confidence that says the industry can evolve to spot the gaps and the opportunities it needs to spot. It's just that, I think, those who immerse themselves in this world will stand a better chance than those who don't. And it comes back to those organisations that have a stance of, well, this is the way we've done it for years, it'll be all right.

Speaker 2:

I don't agree with that. It won't be all right, and it will change, and therefore it's important to be part of the force driving that change rather than sitting back and waiting for it to just happen. I think you're spot on around the, well, if I go back to some of the work I did around digital responsibility nearly ten years ago, starting to think about the impact technology has on us as a society, or indeed on the planet, then one of the weird conclusions I got to was that it was less about the technology and more about the economic model, and starting to question what economic model we need for the future, one that would work more effectively in the world we were becoming.

Speaker 2:

And linked to that, because people might say, Rob, why are you talking about this AI stuff and immersing yourself in it when there are so many concerns around it? Well, my stance there is always: because I want to use it to create positive impact and try to make a positive difference with it. I recognise the potential issues, I recognise the potential harms, but if we do nothing, I don't see how the world can continue to operate, on changing demographics alone; certainly, if you look across some countries, ageing populations alone say we need a different way of operating going forward. So against the backdrop of, we have to change and we have an ability to change, it comes back to: those organisations that can adapt quickest will survive. Well, that's how it's always been, isn't it?

Speaker 1:

Yes, I think that's really good, and I think that's a great point to draw to a close. I mean, I think both you and I could probably talk about this for a couple of days without stopping, but I'm not sure...

Speaker 2:

Go on, you were going to say something. Well, I've got to ask one question that I didn't ask in the middle of the call. When you were faced with the Joe bot, did you

Speaker 1:

go, oh, that's cool, or, who gave you permission to use my data in that way? Certainly not the latter. I'm certainly not famous, but I think I've built a reputation on the basis of giving away free information and free insight, and I encourage all my consultancies to do the same. Even if I think about my published books, which have been published with formal publishers, I probably get about 70p per book, given they sell for 25 to 30 pounds apiece. So I'm not protective of my intellectual property at all, and indeed I encourage consultancies not to gate their content, so that the AI algorithms can scrape it, because when people ask questions, well, people won't be using Google in a year's time.

Speaker 1:

People are increasingly using AI, and if it provides the reference, which is a massive if, they'll be pointed towards your site. And if someone says, who should I talk to about X, about consultancy in the UK, one of the answers is Joe, and that wouldn't have happened otherwise. So, going back to your influencer consultancy, which I think should be the title of a joint blog that we do: you can't have your cake and eat it. You can't get the fame or the visibility without giving stuff away.

Speaker 2:

I think that's a really important point, and an interesting one. And I would say exactly the same about my digital responsibility work: I've always made it open, I've always made it thinking out loud, working out loud. It's available, because what's the goal with it? To make us all improve, to give us all a better chance of achieving the things we want to do. And you're right, we've got to follow up the influencer consultancy piece because, yeah, if somebody else hasn't already grabbed it, then we need to.

Speaker 2:

Yeah, definitely, this could be another company, a joint company, a new joint podcast, talking for hours. So, we set out saying this was a bit of a new venture in terms of a double podcast, going out both ways. Hopefully it worked for those who were listening, and hopefully it gave everybody something to focus in on as an idea to take away. But it's really been great to have the conversation.

Speaker 1:

Definitely, thanks so much, Rob. Thanks for your time. Cheers.