
The Consulting Growth Podcast
Joe O'Mahoney is Professor of Consulting at Cardiff University and a growth & exit advisor to boutique consultancies. Joe researches, teaches, publishes and consults about the consulting industry.
In the CONSULTING GROWTH PODCAST he interviews founders that have successfully grown or sold their firms, acquirers who have bought firms, and a host of growth experts to help you avoid the mistakes, and learn the insights of others who have been there and done that.
Find out more at www.joeomahoney.com
Episode 39: Chris Tomlinson on AI-Driven Transformation and Boutique Firm Advantages
Chris Tomlinson, the founder and CEO of Muuto, is reshaping his firm by embracing AI. Learn how Chris and his team are leveraging the power of generative technologies to shift their consulting practices.
Chris shares his insights on how boutiques can outpace larger organizations by harnessing the agility to innovate without the usual constraints. We explore the potential of AI agents in modeling and refining work processes, transforming traditional consulting models to be more adaptive and efficient.
Chris and I discuss the critical role of leadership in this transformation and share strategies for fostering a culture of experimentation. Whether you're in the camp of early adopters or cautious observers, this episode offers valuable perspectives on the organizational change needed to harness the exciting opportunities presented by AI and digital transformation.
Welcome to the Consulting Growth Podcast. I'm Professor Joe O'Mahoney, a Professor of Consulting at Cardiff University and an advisor to consultancies that want to grow. If you'd like to find out more about me and access some free resources to help your consultancy grow, do please visit joeomahoney.com. That's J-O-E-O-M-A-H-O-N-E-Y dot com. Okay, welcome back to the Consulting Growth Podcast. We have the pleasure of Chris Tomlinson here, who is the founder and CEO of Muuto, a rather splendid consultancy that's been a member of the Boutique Leaders Club since its inception. Tomo, could you please tell us a little bit about yourself and a bit about Muuto as well, please?
Speaker 2:Yeah, happy to, Joe. So I'm a bit of a geek for the big, complex transformation programs, which is where I've spent most of my career over the years. I've been a proper consultant, if there's such a thing, since I started working post-university for various different firms. I've worked in-house as a transformation director and an internal consultant, and then went back out into consulting to establish Muuto. Most of the work that I do, and therefore how we've built the firm, has a red thread of organizational change and restructuring. So we've done big relocations, operating model shifts, and big digital changes as well.
Speaker 1:Interesting times. Tell us a little bit more. I obviously know Muuto relatively well, and I'd like to talk a little bit about how it's unique, how it's different to what you would normally get. You've worked for some of the big brand names, so how is your proposition different, or perhaps even a different focus, to some of the more traditional approaches that you might get in the larger firms?
Speaker 2:It's different, the way we talk about it, from larger firms but also from other boutiques. And I think this is maybe relevant for the kind of people who will listen to your podcast as well: there's a lot of attention on niching down into specialist areas when you're at boutique size, to be truly deeply professional in a certain area. And then the big firms say, well, we cover multiple bases, we've got breadth and we've got scale. We haven't got scale, and if you say niche, our niche is a very broad one: transformation programs. So even hearing me now, it's not crystal clear what our value proposition is, and that's one of the things we talk about. We don't talk about us, we don't talk about what we do; we talk about what the client problem is. What are you trying to achieve? Where are your blockers? Because there will be blockers somewhere in the end-to-end of a transformation, but they'll be specific to the context of that organization. It's intervening at that point in time, and in that specific area, that's the difference in how we go about it.
Speaker 2:So I don't know if that even sounds different, but it's a complicated thing to articulate because, frankly, we just try to base it on where the client need is.
Speaker 1:Okay. So listen, you guys are doing some interesting stuff in AI, and I think we're both equally excited and potentially sometimes slightly terrified by the capabilities. I know we were talking about it just before the call, so could you tell us a little bit about when you first started getting interested in AI and how that's been introduced to Muuto?
Speaker 2:Yeah, I mean, I'm not a technologist, which I think is probably different from a lot of consultants who are leaning in on this space. A bit of background context that maybe adds something here: I started professionally consulting at Accenture, and when I joined Accenture it was right, everybody sits in a basement in Hatton Garden and learns how to do Visual Basic and UML and Java, et cetera. So I didn't have a particular bias towards that, but I got some of the disciplines and I got some of the language. I worked on IT organizations, IT operating models, et cetera, so I've got a bit of a feel for the technology space. But I'm absolutely not a technologist. I understand digital operating models a bit, but no more than I do other areas. So I think that's maybe worth highlighting up front.
Speaker 2:Well, I got interested in it about the same time that everybody else did, I think, which is when the generative stuff became accessible. I'd tried things previously that helped with drafting documentation, some of those early versions. But unless it helps us do our client work more efficiently, more effectively, at higher quality, day to day, it probably wasn't something I'd spend a lot of time around, even if we thought it was interesting. But yeah, we got into it at that stage, and I thought, well, this is going to be fundamentally different for the way consulting gets done. So let's lean in hard on it, and we have been doing ever since. That's, what, two years now, I think, since ChatGPT was launched.
Speaker 1:Great, okay. So I met some of your team, because I believe I did a small talk at one of your away days, and it struck me that they were generally more enthusiastic than your average consultant when it comes to AI, and certainly more knowledgeable. So what have you done in terms of introducing the concept to the team and, I guess, selling them on the point of exploration?
Speaker 2:I hope they are genuinely interested and excited by it and not just toeing the line, and actually I think that would be unfair; lots of people are really engaging quite well with it. But we've got a normal distribution in Muuto, like you've got a normal distribution everywhere else. You've got people who will always be late to the party, you've got people at the front end who are all over it, and then you've got the group in the middle who could go either way, or can be encouraged either way. So since I got my head around a little bit of what I think the implications might be, I've been really pro and positive with the team about getting into it and experimenting.
Speaker 2:I think the bit that has been an interesting learning point is seeing how different people use it to tackle different things, and we can only do that through sharing, posting little videos. We're a virtual team as well, so it's about how we do the sharing and the collaboration, encouraging that and removing blockers around it. It's much easier in a smaller firm like ours than in a big company with more controls on firewalls and more controls on what you can and can't use from different products and tools. We've actually given everybody an allowance to go and pick what they think is helpful and share it with others, rather than saying we're rolling out, say, Copilot to everybody, after which all they'll do is use Copilot.
Speaker 2:So, yeah, we've encouraged a lot of experimentation.
Speaker 1:Okay, and can you give some practical examples of how AI has enriched outcomes, either for clients or for your internal teams?
Speaker 2:I mean, the deployment with and for clients is the bit we're still trying to work out. Have we deployed anything for revenue and income with clients around AI? Not yet. We're starting to do some experiments and co-invest on that, and I don't think that's any different from the other firms out there.
Speaker 2:I think there's a lot of froth around this in the consulting IP space. Everything that used to be a topic is now that topic plus AI, and I'm not seeing much that's particularly new or applied in the white papers and the marketing coming out from other consulting firms. I think it's because, in most cases, they are no further ahead than the organizations they're consulting with. They're all using some tools that are available, or experimenting, and they're probably all lying about how much they're using it, which I think is interesting when you look at some of the research that comes out. So we haven't done anything with clients directly that's been revolutionary, but we've got a couple of really interesting conversations and experiments on the go at the moment.
Speaker 1:Okay, and are you allowed to share anything about the experiments that are happening?
Speaker 2:Yeah. Well, I won't put client names to it, but the use cases, I think, are probably the most interesting part. As I say, we do a lot of organization design, and we have our own organization design tool. As part of that we've got a core module that does data ingestion, analysis, data cleanup, org chart visualization, et cetera: what you would usually expect from a detailed org design project. What we've done, in experimentation with a couple of clients now, is bring a generative AI component into that, which feels like you're undermining your own business model. But at the front end of that work, operating model design, option generation, high-level analysis of the data, we've done that with some generative AI tools, and it has been fascinating to see how it prompts ideas and conversations with clients.
Speaker 2:That's the interventionist side, but there's also just the speed with which you can get iterations and analysis, and compare two, three, five, twenty options where previously, at that stage of an org design, you're probably looking at a couple. Do we go left, do we go right? And then, when you've got a critical mass behind you, you go, this is where we're going, and you just build it. Whereas at this point you can take a step back and say, actually, let's be really specific about what we want to achieve here, and grade these options against a live organizational data set, which is really interesting. So it's not hypothetical; it's linked to current spans, layers, locations, costs, et cetera, even if you're just doing the analysis at a high level.
Speaker 1:That's really interesting. Okay, so with that example, I get the data ingestion, I get the analysis. I'm presuming that to generate models you need something different to ChatGPT-4?
Speaker 2:Yeah, we're not generating the data outputs with AI at the moment; that's still our core tool. What we're doing is at the as-is data stage. You can say: based on this and other artifacts, transcripts of conversations with the C-suite is the way we've done it, where's the misalignment? Where's the alignment? Give me a précis of the specific objectives you've heard consistently across it. So it's a way of working very quickly, and this isn't really different from any other generative AI use case: give me a summary of that. But what's most powerful is when you say, at the order-of-magnitude level, show me the pros and cons, ups and downs, puts and takes around organizational options. That's before you then go into the detailed design of where position X goes and where person Y goes in the to-be design, which is still something you would need to work through.
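The option-grading workflow Chris describes, comparing candidate operating-model options against a live data set of spans, layers and costs, can be sketched very roughly in code. Everything below is invented for illustration: the option names, weights, targets and scoring formula are assumptions, not Muuto's actual tool or data.

```python
# Illustrative sketch only: scoring hypothetical org-design options
# against live organizational data (layers, spans, cost). All names,
# weights and targets are invented; this is not a real design tool.

from dataclasses import dataclass

@dataclass
class OrgOption:
    name: str
    layers: int           # management layers in the proposed design
    avg_span: float       # average span of control
    annual_cost_m: float  # modelled annual cost, in millions

def score(option: OrgOption, target_layers: int = 5,
          target_span: float = 7.0, budget_m: float = 40.0) -> float:
    """Higher is better: penalise distance from design targets and budget."""
    layer_penalty = abs(option.layers - target_layers)
    span_penalty = abs(option.avg_span - target_span)
    cost_penalty = max(0.0, option.annual_cost_m - budget_m)
    return round(100 - 10 * layer_penalty - 5 * span_penalty - 2 * cost_penalty, 1)

options = [
    OrgOption("Functional", layers=7, avg_span=5.5, annual_cost_m=42.0),
    OrgOption("Regional", layers=5, avg_span=8.0, annual_cost_m=39.0),
    OrgOption("Product-led", layers=6, avg_span=7.0, annual_cost_m=38.5),
]

# Rank the candidate designs against the live data set.
for opt in sorted(options, key=score, reverse=True):
    print(f"{opt.name}: {score(opt)}")
```

In Chris's description the options and critiques come from generative AI rather than a fixed formula; the structural point is just that many candidate designs become quickly comparable against current organizational data instead of a hypothetical couple.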
Speaker 1:Yeah, okay, so that is quite exciting, isn't it? I think your brain's a bit like mine, because you can see where this is going. I guess this raises a wider question, which is achieving the balance between human expertise and AI expertise, both of which are fallible, and sometimes the human is more fallible. Something I've struggled with is balancing what I do best with what the AI does best. Have you found a secret formula for this, or is it a matter of experimentation?
Speaker 2:Yeah, I mean, the experimentation is more important than the destination for us at the moment, I think, because of the use cases. Like I say, different people approach things differently; everybody's different, so they think about problems in a different way. But for me it's about keeping our learning as close to the development curve of the technology as we can, rather than saying, okay, I'm using this for a while, whilst the development moves on and you've got a bigger gap to what's happening. So I'm trying to keep us in that game and, frankly, keep a lot of options open, because nobody knows where this goes. Rather than make a call that we're going to stay on track A and be wrong, I'd rather spread us a little more thinly over multiple things and just see where this is super-additive.
Speaker 1:Yes, I think this is so important, and if there were one message I'd love listeners who are senior in professional services to take from this, it would be this focus on experimentation. Things are moving quickly and they're going to continue to move quickly, but if you're already thinking, experimenting and trialing, you're going to be so far ahead of people who aren't when an opportunity does come up to do things faster or better or cheaper with AI. There are obviously model limitations at the moment; that thorny issue of hallucination hasn't gone away. They've made a lot of progress on security and preventing data leakage, but it does strike me that the firms experimenting here are going to have a head start when, say, ChatGPT-5 is released, or AGI arrives in three years' time. Maybe we all go home then, but at least we're going to see what's coming down the tunnel at us.
Speaker 2:Who knows where it goes. Going back to the boutique point, though, I think we're really well positioned to react quickly, us and firms like us. I think that's the main thing.
Speaker 2:I spoke to a friend the other day, a software engineer who used to build tools for a big consulting house, one of the bigger strategy houses. The example he gave was analysis of data for double payments or overpayments, something that would usually be a couple of consultants for three weeks combing through spreadsheets. This was a few years ago now. They built the product and it did the work in six hours, with one FTE doing a bit of oversight for a few days. And it immediately got pushed aside by the partners in charge of the work, because they were billing by the hour, and the more they were in the building, the more exposure they got. It was basically this dilemma of efficiency and client value versus the other incentives that exist in bigger consulting companies. So it was a non-starter at that point. And I think that's what we're going to see in the larger firms: they're going to stick to the pyramid model as long as they possibly can, for the partners at that stage of their careers, whereas smaller firms can embrace this stuff and pivot much faster. Yes, it's going to undermine our billing model, but so be it. We either embrace it or it's going to run us over.
Speaker 1:Yeah, I agree. I think boutiques are at an advantage in lots of different ways, and I think you've mentioned them all. One is their flexibility, the lack of process and regulation that constrains innovation. And very often the clients that use boutiques tend to be a bit more open-minded, and perhaps more open to experimentation, than if they'd gone with one of the big firms.
Speaker 1:It's something I was thinking about a lot at the weekend, and it overlaps with our conversation prior to the podcast about agents. What I find particularly interesting is the depth of expertise that boutiques have, and obviously the larger firms have it as well, but that depth of expertise isn't something currently captured by the models, because it can't be. The models have been trained on vast amounts of text and video and all the rest of it, but they haven't been trained on Muuto's depth of experience with transformation or change management or whatever. I think there's a real opportunity for boutiques to model out their work in terms of AI bots, agents talking to each other, and the data flows. And I think this is a good thing for boutiques to do anyway, because it helps them understand what they fundamentally do and what all the different roles should be, even if there's no automation or AI involved.
Speaker 1:But the feedback loop is the crucial bit. If you've got a feedback loop that says, here's the work we're doing, let's look at the output, what's right, what's wrong, what needs to be improved, then you're slowly improving the model, and this works offline with humans as well as online with automation and AI. If you could create that system, even just for one service line or one process within a service line, so that it learns, then what you're building is a fantastic set of capabilities you can offer clients. You're getting some form of output, but it's not just that.
Speaker 2:It's continuously improving and learning from itself, yeah. I think about it a little bit like how you would develop capability in any young consultant coming through their career; it's kind of what you're trying to do with an agent as well. Now, there's a whole dilemma there about how you develop the consultants of the future, but that's TBC, I think, for all of us, in lots of different walks of life. The multi-agent, well, the agentic thing anyway, and the way it adds value for clients, is very much about: how do you have a JoeAI or a TomoAI, and how do you multiply the value you can add to your clients, rather than one conversation at a time, per hour, et cetera? I think that's where it will go for deep expertise in certain areas. How that gets deployed, curated, et cetera, who knows. One of the partner organizations we're working with on the technical side at the moment is developing multi-agentic models.
Speaker 2:So it's not just a wrapper on top of ChatGPT, which is what a lot of the .ai stuff you see around at the moment is. The way they're doing it is that in some instances they'll have multiple agents playing multiple different roles, trained on proprietary IP where it's available and on public training data from the big models, to run through a scenario, provide an output, critique it from different angles, et cetera. That, I think, is an interesting way of looking at it. But there's also the question of what the leave-behind is for a client after we've done a piece of work.
Speaker 2:Say we've done an 18-month health and safety and environment project with an organization: we've run training sessions, we've developed collateral, we've tailored stuff, we've translated it into multiple languages. That is an incredibly high-value leave-behind for an organization, rather than just the hard drive or the inbox of the clients you worked with as you did your handover. And it's value for the consultancy as well as for the client organization, because there's IP you can take away with you, and it keeps the contact over the longer term, which is a value-add for client and consultant.
Speaker 1:Yeah. So for those of you listening who aren't particularly into AI, well, firstly, you shouldn't be listening. Secondly, the agentic thing works like this: if you pile instructions and different steps onto one instance of an AI, it tends to merge things, get confused, and not play roles very well if you've given it multiple roles. So the idea behind agents is that you have different instances of an AI playing specific roles. If you've got a standard change management project, you might have a change manager, a project manager, a requirements analyst and so on, and then they talk to each other in order to, in effect, replicate what a consultancy team might do.
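Joe's description of agents, separate AI instances each holding one role and passing work between them, can be sketched as a toy pipeline. The `call_model` function below is a placeholder standing in for whatever LLM API a firm actually uses, and the role prompts are invented; only the roles themselves come from Joe's example.

```python
# Toy sketch of a multi-agent pipeline: each "agent" is a separate model
# call with its own role prompt, and each agent's output is fed forward
# to the next, mimicking a consulting team passing work between roles.

def call_model(system_prompt: str, user_message: str) -> str:
    # Placeholder for a real LLM API call; here it just echoes the role
    # and a snippet of what it was asked, so the sketch runs standalone.
    role = system_prompt.split(",")[0]
    return f"[{role}] response to: {user_message[:60]}"

ROLES = {
    "change_manager": "You are a change manager, focused on adoption and comms.",
    "project_manager": "You are a project manager, focused on plan and risks.",
    "requirements_analyst": "You are a requirements analyst, focused on scope.",
}

def run_team(brief: str) -> dict:
    """Run each role-agent in turn, passing the previous output forward."""
    outputs, context = {}, brief
    for role, prompt in ROLES.items():
        outputs[role] = call_model(prompt, context)
        context = brief + "\n" + outputs[role]  # feed output to next agent
    return outputs

results = run_team("Relocate the finance function to a shared service centre.")
for role, out in results.items():
    print(role, "->", out)
```

A real implementation would replace `call_model` with actual API calls and likely add an orchestrator or critique loop; the structural point is simply one role per instance, with outputs exchanged between them.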
Speaker 1:And I know you said earlier, Tomo, that we don't know where this goes. But it does raise an interesting point. I think this is easier in some professional services than others: law and accounting, for example, are particularly susceptible, and they've been shown to be very susceptible to AI. I think it's a bit more complicated with consultancy, because our subject matter, organizations, is complex and open, and you almost can't predict what's going to happen. But having said that, where do you see this going? If the AI capability continues to improve, and you can train your own agents up to be generally as good as your consultants in some respects, do you eventually see this as something where AI is, in effect, just another part of the automation? Or do you see it going a step further, where we might not need consultants at all eventually? How much of an optimist are you, is what I'm asking.
Speaker 2:No, I think my hypothesis is that relationship, rapport, problem solving and trust are going to become more and more valuable and important between a customer and a service provider, for any kind of service.
Speaker 2:You will differentiate on that element of quality, because a lot of the content will actually be commoditized, I feel. I don't know, but I don't think it's the end of humanity from that point of view. I think it just means different skills will become more important, and there'll be new problems to deal with, inevitably. And that problem-solving bit, the novel problem that no bot has been trained on, is going to be where the value-add is. With different clients, I have ostensibly the same conversation at the start of an engagement around certain topics, certain ways of working, et cetera, but it's the context and the specifics that are the nuance on top of it. That's the human judgment bit, which I think we're a long way off replacing at the moment.
Speaker 1:Yeah, I think you're right. And there's also something about, as you alluded to earlier, the person who needs to check the quality: where the Tomo of the future comes from, or the partner or director who says, well, actually, this was a good first attempt by this set of agents, but it's missed something quite important.
Speaker 2:Yeah, and I think that will iterate and improve, and iterate and improve. So even that challenge will change a lot, probably in the short term rather than the medium term, in terms of how quality assurance and learning work. And then you get to AGI next, which I know we're probably just kicking around, but I think models that learn are going to be really interesting.
Speaker 2:The bit that, as a professional in a certain domain, I find really interesting and motivating around this, and it's a bit of a hedge for us as a consultancy at Muuto, but also, I think, for other firms, is that there are two tracks to this for me. There is: what the hell is the technology going to do, and how is it going to be applied? And that's no different from how we apply it internally, or how a big PLC applies it in their own way. There'll be leaders, there'll be laggards, et cetera, but in the same way that people started using email, or machine learning, or ERP systems, it'll be fairly level and available.
Speaker 2:The bit that I think is most exciting, as an organizational guy, is: how the hell do you change your organization in order to adopt this at the scale, and at the profound level of change, that you're going to have to re-engineer around in the way you run your company? Because it's a different mindset, a different skill set, a different operating model, a different way of interacting with partners, third parties, data, regulators. That internal re-engineering is the bit that is novel, and the bit that's going to be, I think, for my career, the fascinating stuff to work through. Once that's stabilized, the next horizon appears. So I'm hoping I'm young enough to get through this really fascinating stage of organizational change while this stuff is happening and getting legs, but it'll be something that obviously evolves into the future.
Speaker 1:So yeah, I think that's what's going to keep us busy.
Speaker 2:I hope so.
Speaker 1:Yes, I mean, I think you and I both agree this is more than the next total quality management or business process re-engineering or lean. This is something quite fundamental that's going to fundamentally change organizations. What are you seeing from your own clients in this space? I guess there are, as you said earlier, very few models out there to imitate. So are they asking you to come and help with experimentation? Are they coming to you with specific projects and objectives when it comes to AI and transformation? Or are clients not interested in this, still focused on, I don't know, moving to the cloud or whatever?
Speaker 2:I think there are probably a couple of camps. There's the one where we're involved in an intensive piece of delivery work for a period of time and we need to get through it, and nobody's interested in doing it particularly differently, because you've got your eyes on the finish line. So rarely would we be deploying new stuff while we're working at that level of intensity with a client. And then there are other areas where we're having regular problem-solving conversations, and rather than just doing the usual, well, this is how we would tackle it, we ask: would you be up for tackling it in this way? Or adding an experiment to it, or taking out a little bit of scope and doing something different with it?
Speaker 2:Something that's not necessarily on the critical path, and probably has a business case that says it would self-fund, because if I can do a four-week project in four days with a bit of development, well, there's a win-win there. Most people seem to be up for it, at least in principle. Then we get into the details, data privacy and all that sort of stuff, and it's a bit more complicated, but no more than any other piece of work.
Speaker 2:And then there are others, and maybe this is just general take-up of AI, where it's the Ford question: if you ask somebody what they want to travel in, they'll say a faster horse rather than a completely different way of working.
Speaker 1:So they're asking for a faster horse rather than thinking about re-engineering. Yeah, okay, and I'm guessing that still makes sense in some businesses. I always think we've done the task-level stuff quite well, so you can get it to write an email or summarize a conversation or do a bit of analysis. And I think we're moving to the process level now, where we're seeing more end-to-end processes run through a combination of AI, automation and humans, processes that have been re-engineered, to use an old term, around it. But I haven't seen much out there that's approaching anything like the enterprise level yet.
Speaker 2:Maybe it's a reflection of the fact that a lot of the appeal of buying a digital product, or bringing in an external service provider, is that it lays off some of the risk and the accountability onto another party. If you're experimenting internally with something to re-engineer, and it's maybe the first time it's been trialed, and it's your specific process, it's quite exposing, and there's quite a lot of professional and personal risk in there for the buyer. So I think there's maybe a bit of reticence as well: who wants to be the first mover? Because it's bloody hard work, because it's never been done before. Even if it makes things easier and more efficient on the back end, the actual development and thinking and problem solving on it is highly complicated from a consultant's point of view.
Speaker 1:Okay. So, I think you're being overly modest about where Muuto is and where your people are. But if you were going to give advice to a boutique leader, let's say they're completely new to the whole AI thing, they think there are some opportunities there but have no idea what, and neither they nor their senior team are technical, how would you encourage them to start experimenting? What sorts of things would you suggest they keep an eye out for?
Speaker 2:The first thing would be behavioral. This is just Consulting 101: "what my boss is interested in, I'm fascinated by". So if you're the MD of a consulting company and you're not into this, everybody will see that and the demand won't be there. If you're going to positively reinforce it, you've got to be visible, active, challenging and encouraging, all at the same time. That role-modelling piece is critical, and then people have permission to experiment. And to reinforce that, I've now got a policy.
Speaker 2:If anybody brings me anything, I ask: have you used AI on this to rehearse it, to check it, to challenge it, to improve it? If the answer is no, I say, well, have a run at that first, because it's a more efficient use of my time and our time when we talk about the problem. So behaviorally, I'd put that in that space. And, as I mentioned briefly, there's no silver bullet, no Microsoft deployment that's going to fix everything for you.
Speaker 2:So enabling people to experiment, trial and fail, and celebrating what you've learned by failing, is really important as well. It's probably how you operate anyway, but you have to give people the chance to do that: give them the tools, fund it, and have some sufficiently safe but light-touch policy and guidance around it, because people immediately want guardrails to hang on to. We've put that in place pretty quickly too. I think those are the obvious bits.
Speaker 1:All right, thank you. And I'll add to that while I'm here, because, as you know, I speak to lots of founders, CEOs and MDs, and I always ask the AI question, partly because I'm learning from them. Very often they'll say, "well, we have Copilot". Copilot can do some interesting stuff, but go and play with the models. If you've got someone in your team who can, then build in a few APIs or use some Zaps to connect with other systems. You don't need to be a coding expert to do any of that.
Speaker 2:I can't code. I don't know if you can, but I've certainly had some great experiences, and actually AI can code for you if you need it.

Speaker 1:No, I've got a Java certificate somewhere, but I definitely can't.

Speaker 2:Well, the way we've encouraged this across the whole team is to ask: what takes time, what's redundant time in a task or an activity, that you think you could make more efficient?
Speaker 2:What do you not enjoy doing that you think you could automate in some way, or leverage this stuff for? We've actually started pairing people up on these use cases, and we have a two-weekly sprint cycle: people pick something, declare what they're doing, go off and experiment, and post a video. The other day my PA and I were paired up on something, and Claude started generating some scripts in Python. I don't think she'd ever seen computer code before, and it was literally scaring her as we looked at it.
Speaker 2:But then we looked at the output, and what had felt like a complex, outsourced job had become a simple, automated task. The more exposed you are to it, the less scary it is after a while. You've got technologists, and then people who say "I don't do data, I don't do programming, I don't do code", and I'd probably put myself in that camp. But I've seen enough other people do it effectively to know it's helpful. It's about breaking down that fear. For some people maybe it's too exposing, and maybe these models are too, because they show you the working in some of the canvases, and people would rather not see how the sausage gets made. But it's all part of the learning cycle.
Speaker 1:Yeah, and increasingly the skill set, unless you're deep in the tech, is actually around what happens around the box. It's not so much the code as: what do I do with it afterwards?
Speaker 1:And if it doesn't work, what do I do then? I've been experimenting with Zapier, and it's quite remarkable what you can connect now with virtually no cost and virtually no technical ability. Good, okay. So is there anything else you think is particularly interesting, either in terms of where we're going, what you've seen elsewhere, or anything else you'd say to another leader when it comes to AI?
Speaker 2:At the risk of getting too detailed about the technology, I think if you turn it around on the boutiques and ask what it means for us: as and when other firms like us start to crack it, they'll necessarily have to get their heads around how they're going to build and price this kind of support in a different model from time and materials, or fixed price, or resource and an outcome. That's going to be really interesting, because as soon as you start to bring this in, you're basically competing against yourself on price and resource deployment. So we've got to work out how to put a figure and a mechanism around the value to the client. I think that's going to be interesting to battle with.
Speaker 2:Beyond that, the bit I find fascinating is that there are so many different problems to be solved and so many different ways to solve them, and it's a matter of marrying them up. That will be very different on a firm-by-firm basis, because of different domains, different client bases, and so on. So other firms will see some of the same challenges we're facing and trying to tackle, probably at the same rate as everybody else.

Speaker 1:Yeah, interesting times, Tomo.
Speaker 2:It's fun to be working with. It still isn't letting me start later in the day or finish earlier in the evening, though. In fact, I probably think I'm less efficient while I'm experimenting with some of these tasks, but at least it's more interesting.
Speaker 1:I'm getting a lot more done, faster, but I'm just filling my time with other things. And still work stuff; it's not like I'm off riding horses or learning the banjo. So I think with entrepreneurial types there's a mindset challenge there as well.
Speaker 2:Yeah, I'm not sure, but this is the thing: I don't think we're going to be out of jobs. Things are going to shift, and there will be certain sectors, certain jobs, certain types of work that are hugely impacted. We know this. But there will be new things that need addressing. So again, that's the epoch we're in, or heading into.
Speaker 1:Interesting times, as I say. Okay, Tomo, thank you so much for your time. I'll put the links to your bio, LinkedIn and Muuto in the show notes, but in the meantime, thanks for your insights and expertise. Really appreciate it.
Speaker 2:Thanks, Joe. If anybody wants to connect, they'll find me through those channels, so feel free to reach out. Happy to chat.
Speaker 1:Thank you, take care. Bye-bye.