Dr. Markus Bernhardt, Principal, Endeavor Intelligence
Dr. Markus Bernhardt is the founder of Endeavor Intelligence and author of ‘The Endeavor Report | State of Applied Workforce Solutions.’ A global thought leader in AI, learning, and workforce transformation, Markus combines a background in theoretical physics with deep experience guiding organizations through technological change. Known for his pragmatic “no silver bullet” approach, he helps businesses cut through AI hype to implement solutions that deliver measurable results. Through his research and advisory work, Markus empowers leaders to balance innovation with evidence-based strategy and drive sustainable performance improvement across the modern workforce.
Nolan Hout, Senior Vice President, Growth, Infopro Learning
Nolan Hout is the growth leader and host of this podcast. He has over a decade of experience in the Learning & Development (L&D) industry, helping global organizations unlock the potential of their workforce. Nolan is results-driven, dedicating most of his time to identifying and improving the performance of learning programs through the lens of return on investment. He is passionate about networking with people in the learning and training community. He is also an avid outdoorsman and fly fisherman, spending most of his free time on rivers across the Pacific Northwest.
Artificial intelligence is transforming the way organizations learn, perform, and adapt, but separating meaningful innovation from market noise is harder than ever. In this episode, Dr. Markus Bernhardt and Nolan discuss insights from ‘The Endeavor Report.’ From demystifying common misconceptions to identifying practical frameworks for innovation, this conversation uncovers how leaders can harness technology to create smarter, faster, and more adaptive learning ecosystems.
Listen to the episode to find out:
- How the Endeavor Report helps leaders identify practical, client-led use cases for AI in L&D.
- The four quadrants of the Applied Workforce Solutions Navigator.
- Why having “experts in the loop” is essential for effective AI adoption.
- How organizations can avoid “shiny object syndrome” and approach innovation with balance.
- The difference between efficiency accelerators and capability creators in workforce technology.
- Why the best AI projects combine experimentation, transformation, and partnership.
- How to strike the “golden middle path” between reckless AI adoption and risk-averse hesitation.
There’s no silver bullet when it comes to AI or workforce transformation. The best organizations do their homework, understand their opportunities, accept their constraints, and move forward thoughtfully. Innovation isn’t about leaning all the way in or holding back; it’s about finding that golden middle where learning truly happens.
Dr. Markus Bernhardt, Principal, Endeavor Intelligence
Introduction
Nolan: Welcome to the Learning and Development podcast sponsored by Infopro Learning. I’m your host, Nolan Hout. Today, we’re joined by Dr. Markus Bernhardt, founder of Endeavor Intelligence and author of ‘The Endeavor Report’ on the ‘State of Applied Workforce Solutions.’ Markus is a recognized thought leader who specializes in helping organizations navigate the intersection of AI, emerging technologies, and the future of work. He brings a unique perspective to the field, combining deep academic rigor with pragmatic real-world experience from guiding numerous global organizations through complex technological transformations.
Today, we’re going to be talking a lot with Markus about the Endeavor Report that he put out earlier this year. And we’re going to go through everything, starting with a brief overview of the report. A lot of this is focused on AI, so you’ll learn what AI can do, what it can’t do. We’re going to talk about real-world case studies and then end up with a couple more hands-on learnings that we can take away from it. Without further ado, I want to introduce you to our guest, Markus. Welcome to the podcast.
Markus: Thanks for having me, Nolan. Pleasure to be here, and I’m really looking forward to this chat.
Markus’s Background and Early Career Journey
Nolan: Absolutely. Before we begin, we always start by learning a little bit more about our guests. Obviously, you’re very acclaimed in the learning development space. You’re an author. You have a lot of speaking events and keynotes, but that’s not where you started. Well, maybe you were. You may have gotten lucky and come right out of the gate, just speaking at conferences. But we would love to learn how you got into this field. What kept you down this path, and how did you land where you are today?
Markus: Yeah, it’s a relatively long story, but I’ll keep it short; I’ll cut some corners. My academic background is in theoretical physics, so I used to do a lot of programming, coding, and math. Equipped with that, and after a decade in education, mainly running education institutions in the UK, I thought to myself, let’s have a look at this AI stuff, these neural networks.
This was all bubbling up. It looked up and coming, but I wanted to make up my own mind. So, I started coding some neural networks, used some of the libraries available at the time, optimized my own backward propagation, and looked at how many nodes and layers you need, what difference that makes, what gets your computer to crash, and what gets it to do something in a relatively short period of time. I just played around with these tools.
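For readers curious what that kind of tinkering looks like, here is a minimal, purely illustrative sketch (not Markus's actual code) of a tiny neural network with hand-rolled backward propagation learning XOR; changing `HIDDEN` shows the node-count effect he describes:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: the classic problem a network with no hidden layer cannot solve.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

HIDDEN = 3  # vary this to see how the number of hidden nodes matters
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b2 = 0.0
lr = 0.5  # learning rate

def forward(x):
    # one hidden layer, one sigmoid output
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b) for ws, b in zip(w1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

initial = loss()
for _ in range(5000):
    for x, t in DATA:
        h, y = forward(x)
        # backward propagation: chain rule applied layer by layer
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(HIDDEN):
            dh = dy * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

print("loss before:", initial, "after:", loss())
```

Scaling the hidden layer up or the learning rate wildly is exactly the "what gets your computer to crash" experimentation he mentions; the sketch just makes the moving parts visible.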
And at the time, I also got to know a startup for adaptive learning based out of Cambridge in the UK. We got chatting, and I joined them as chief evangelist, which got me onto the stage and speaking at events. At the time, conferences still had the rule that if you came from a vendor, you weren’t allowed to speak. Still, a lot of them made a huge exception for the weirdo that was me, who wanted to talk about AI, because they probably all thought, well, we can have one guy talking about it. No one else is doing that. We’ll bring him in, and he can chat.
I believe I honored their trust by providing good information about what I thought was coming, rather than selling the product directly in the session. So, they kept inviting me back. For a couple of years, I was telling people about a natural language model called GPT-2. It changed how we tagged content and worked with words, as we moved away from labels or keyword searches and could interpret text more intelligently. And I had half-empty rooms, but exciting half-empty rooms; everyone else at the conference had decided to go to a proper talk. Then I did the same thing for another year with GPT-3.
And then GPT-3.5 came, and many conferences said, “Markus, you’ve been doing this. We’ll give you a bigger room. We won’t give you the graveyard shift on Thursday. Let’s see where we go from here.” It became a topic of interest, and I happened to be there. So that was how I ended up here: being in the right place at the right time, and I enjoyed it.
Nolan: So, help me understand: when you mentioned you connected with this firm in Cambridge, was that really your first foray into learning, like adult learning principles and corporate learning, or had some of your past work focused on the development and training arm?
Markus: The one thing I’ve done throughout my entire career is learning and training. I started my early career in the armed forces, and there I became an instructor. In the late nineties, the armed forces were among the places where you had to have proper learning objectives that were measurable and precise. The training was quite formal, but it was thought-provoking at the time regarding how learning objectives should be formulated and how one measures the impact of learning and training.
So, I started with that, and then I spent most of my career in physics, lecturing and tutoring—things like statistics for biologists or physics. After that, I went into education and covered all age groups, from two and a half years old to 19 years old. So, not just university. So, I’ve explored various training, learning, and human development topics, and I’ve always been interested in them. And so that is the ongoing thread that goes through my entire career. And then, when I saw that you could apply AI quite intelligently to that, I was impressed.
That’s when I got excited. One of my first conversations at the time went along the lines of: can you use AI to make GDPR training more interesting? So yeah, I’ve always been relatively direct, and I’ve voiced my concerns about whether something works or not. And that’s stood me in good stead, because I’ve more than once been called the not-so-silver-bullet guy, which is a weird and very long title, but I’m very happy with it as well.
Overview of the Endeavor Report
Nolan: Well, thanks for that background. It’s really fascinating. You can see that you are the right person for exactly what you’re doing. You have a history of making education the central focus, yet you clearly understood this deep technology before it became what it is today, with its commercial use, building up from the technical layer. It reminds me of the difference between that and, say, giving a brand-new marketing hire a tool like Claude and saying, “Go start writing blog posts on SEO.”
They’re going to be able to do it, but they have no idea why they’re doing it. They don’t know whether it’s good or not. They can’t evaluate whether it’s going to work. So, you have that root and origin, which brings us to the Endeavor Report. As you mentioned, it’s very well known. But for those who haven’t heard of the Endeavor Report before, could you talk more about it, when it came out, and where people can get it? Then can we dive into some of the details?
Markus: Yeah, the inaugural Endeavor Report came out in June this year. After several years of speaking, people often come to me and say, “Markus, we really like the use cases. Those are the best insights. Tell us where people have found challenges, how they’ve overcome them, and what’s worked really well.” There’s also that hint of: no, these areas are still too far out and a bit hyped, and here’s why. People love that kind of balance, especially when it’s connected to a use case, ideally one that came from a client rather than directly from a vendor.
Vendor-hero stories are a struggle to take fully seriously, and we know why. I’m not telling anyone anything new here. People want a real story and to learn from a real case. They don’t want to be told how amazing the vendor is. So that’s the story. I’ve always had good relationships with vendors and clients across the board, and in my advisory work I’ve been a bridge between the two. So, I decided that in addition to mentioning use cases in my sessions and talks, I was going to put a report together. I went out and started telling people that I was putting together a report that is client-led only, and that not a single vendor would ever be mentioned in it. Mixed reactions, as you can imagine.
But amongst my vendor friends, a few also said, “We get it. If no one’s listening, then it doesn’t matter how loud we’re singing. If we want people to listen to us, we can’t just put the vendor in; we need a client-led story, and then they’ll get excited. At some point, they might ask who the vendor is, and then we have an in. If we have a sponsored piece, or several sponsored pieces that no one reads, then all the sponsorship is wasted.” So, we got good reactions there, and from some clients too. Some of my use cases did not come through the vendor; the client said to the vendor, “I’d like to do this with Markus.”
So yeah, that’s how I got my first eight use cases together, and they made a nice little variation. As I brought them together and told each story across two pages in the report, I started noticing some similarities and differences, explored them further, and developed a framework to illustrate how I perceive the use cases, where they differ, and where they are more similar.
And so that’s how it came about. The reception was amazing, even better than I had hoped for, so I’m very pleased with that. Following the reception, people have also asked me to dive deeper. We’ll talk about this in a moment: the Applied Workforce Solutions Navigator, the research piece I did about those use cases, where I began categorizing them and developing a framework. I’ve published that as well. And now I’d like to put together the next report, a Q4 report, publishing around mid-Q4 this year. So, I’m looking forward to another good set of use cases, some completely new and others continuing from the last report.
So, a section will be called Endeavor Continues, where we review some of the use cases we already know and find out what’s happened. Did they pivot? Did they scale? Did they do both? What happened? What came out of it? So, it’s basically storytelling. People want to see what others are doing. People struggle to get information from vendors at conferences. People can’t speak to all vendors, and so people are looking for other ways to get on the front foot. In such a fast-moving field, you want to hear from your peers, and you want to learn with and from your peers. So, this is a vehicle that promotes that.
The Importance of Trust in AI Solutions
Nolan: The concept of who you trust and where you get your information from is becoming increasingly important these days. That’s actually how I started this podcast: I no longer had conferences to attend to hear stories. And I loved hearing customer stories, or not even customers, just real-life stories about the pains people have. Like the problems you hear when you sit down at a Brandon Hall event in the morning, chatting about the world and what’s going on. Hearing those similarities was valuable for me as a marketer. But then when COVID came, I didn’t get those stories anymore, like what you said. You can talk to your sales team, and they’ll give you a flavor of it. And then you talk to the, you know, delivery team, and they’ll give you a flavor.
You can sometimes talk to a customer, but salespeople want to keep that information close to their chest. So, I listened to all these people who start podcasts, and the biggest benefit is that they can have open conversations they normally wouldn’t have. So let me do that. And I think the value to the audience is really what you said. It’s putting, literally in this case, a microphone in front of the people in the case study, the people you want to hear from. Now, obviously, you can still go to the events and talk, but even an extrovert like me has a hard time walking up and saying, “Hey, I’m Nolan Hout, nice to meet you. Tell me about your AI challenges.” It’s not real; you can’t really dig deep into it.
I can understand why this report has gained so much attention. And for those who haven’t heard of it, the majority of it is about AI. I read it, and correct me if I’m wrong, Markus, but these solutions and case studies are discussing different ways organizations have been leveraging AI within their workforce, correct?
Case Study: Orthopedic Surgeon Training Device
Markus: 99% correct, yes. I frame it a bit more broadly as technology, because other technologies play a role as well. We have one use case, a tabletop practice device for orthopedic surgeons, where aspects of AI will come in the next iteration, but in this first iteration there was no AI in the solution. So, I’m looking for a variety of interesting technological solutions where people can learn, practice, and get real-time support, ideally all of these, while also measuring data along the way to refine what happens in the next loop or step.
And so that, for me, is a workforce solution: if you’re learning and training, ideally with some real-time support, and there’s a data piece that combines those two, it becomes a continuous loop. That, for me, represents a modern workforce solution, where we can now bring these elements together, while we also realize that not everything has to be tech-solved; there are still workshops, mentoring, coaching, and other good tools out there for people to improve and learn. Yeah, the tabletop orthopedic device is used in a workshop where surgeons can practice.
And because they can practice on this tabletop device, they can cut down how many X-rays the patient needs. Previously, they needed to check for the right alignment, and to determine that, you must take an X-ray. If the X-ray machine is in the right place and you’re fully aligned, then that picture is enough, and you know you can move ahead. But if the X-ray machine is slightly misaligned, then it doesn’t even matter whether you’re aligned or not: you can’t see it.
So now, you must readjust it and take a second X-ray. If you then notice you’re slightly misaligned, you must adjust the tool and take a third. And so this is one of those fabulous stories where a simple desktop solution with some coding interprets the angles and what the surgeon is doing, and you can practice with real-time feedback. I mean, these are already fully qualified, well-performing surgeons, and they even start competing against one another in the workshop in a bit of one-upmanship, to see who can take the fewest X-rays. Even after the official workshop is done, there’ll be quite a few tables still saying, “We’ll do one more round, because maybe we can crack the high score.”
What AI Can (and Can’t) Do Today
Nolan: I can’t wait for the Top Gun movie equivalent of this to come out. I’m sure it will be everything. Well, as I got into the report a bit, I thought what was great is, I don’t know if it’s on the first page, but in a couple of the pages, you talk about what AI can and cannot do yet. And I love that opening part of the report because it gives such good context for where we are today and where we are not yet. Even though multiple reports will come out, it really helps you understand our position at a specific moment in time. For those who haven’t read it, what’s the high-level overview? Where is AI doing things well today, and where is it not quite there yet?
Markus: Yeah, when I put the report together, I thought I should also include some other useful information. I was very honored to work with Dr. Michael Allen and Steve Lee from Allen Interactions on an article in which our conversation evolved into exploring the misconceptions between buyers and vendors, because there’s a big mismatch in expectations. For example, one of the things we talk about is that on the buyers’ side, the assumption is that everything is now plug-and-play. Everything is automatic. The AI does all the work.
So, the thinking goes: I’m definitely going to have the solution within 36 hours, and it’s going to cost me almost nothing, because all you did was throw AI at it and now it’s ready. The plug-and-play element is unbelievably overhyped. One must really say: no, if you’re doing curriculum mapping, that’s not just telling an AI to do curriculum mapping. It goes far beyond that. And you also need your experts in the loop.
And I’m always very keen to say it’s experts in the loop, not just humans in the loop. We don’t need a human there for the human elements; in that moment, we don’t need compassion or feelings. What we need is factual knowledge and evidence, someone questioning why one thing might be better than another. So, we need an expert in the loop to look at the outputs and guide the process. Then the question becomes: with the plug-and-play promise replaced by an expert in the loop, how much time does it take to get it right?
And how much time does it take to continue to get that right? Because in a fast-evolving world, the tool will change, the data will change, and it’s not just a matter of throwing more data in and having it update itself. Anyone who’s tried this knows that when you have your GPT-type chatbot or Gemini plugin chatbot and you upload your HR policies, the first instance works unbelievably well.
But then, if new versions come out and you start uploading them, it can’t really distinguish between a new version, an old version, a timeline, and what has replaced what. It becomes a myriad of mixed policies, making it less useful.
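One pattern that helps here, shown as a hedged sketch with invented policy names and dates, is to track version metadata yourself and hand the chatbot only the current version of each document, rather than expecting the model to infer which file supersedes which:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PolicyDoc:
    name: str        # policy identifier, e.g. "remote-work"
    version: int
    effective: date  # date this version took effect
    text: str

def latest_versions(docs):
    """Keep only the newest version of each policy.

    The model never sees superseded versions, so it cannot
    mix an old clause into an answer about current policy.
    """
    newest = {}
    for doc in docs:
        cur = newest.get(doc.name)
        if cur is None or (doc.effective, doc.version) > (cur.effective, cur.version):
            newest[doc.name] = doc
    return list(newest.values())

# Hypothetical corpus: two versions of one policy, one of another.
docs = [
    PolicyDoc("remote-work", 7, date(2023, 1, 1), "3 days in office"),
    PolicyDoc("remote-work", 8, date(2024, 6, 1), "2 days in office"),
    PolicyDoc("expenses", 2, date(2022, 3, 1), "receipts over $25"),
]
current = latest_versions(docs)
for d in current:
    print(d.name, "v" + str(d.version))
```

The design point is that the "version 8 replaces version 7" logic lives in deterministic code, which is exactly the kind of task Markus notes is not a strength of the language model itself.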
Whereas when we see the benchmarks, we think, man, these things can really think. I mean, this is the easiest task ever, right? Version 8 is more recent than version 7. And if you can solve crazy math problems, then of course you know the difference between version 7 and version 8. It turns out that if we investigate the strengths and weaknesses of large language models, we find that certain aspects are not strengths, and this is where it falls down immediately.
And so, it’s about which aspects are plug-and-play and fast, and which require more thought and an understanding of why the model behaves as it does, with certain strengths and weaknesses. The other question is how we bring the expert into the loop and ensure this stays effective. Another thing people often get wrong is assuming models simply improve. Well, let me remind you of earlier this year, when GPT-4 started throwing emojis at us all, and we went, huh? They had upgraded the system.
This was apparently a better GPT-4 than the GPT-4 of the week before. But for many of us users, it was not a better GPT-4. Now, imagine you have an outward-facing plugin built on that model for your company, and suddenly emojis appear. Some sort of expert in the loop, who checks factualness but also generally checks the output, might have thought: we need to pause this for a moment; something here has changed. The upgrade was not automatically a better version.
And so those are the common misconceptions we look at, to educate people and help them on their journey, because it isn’t easy. It comes with a sense of trepidation.
There’s job insecurity out there. If you’re running projects right now with AI, you think this could be a total winner. Maybe it might cost me my job. There are a lot of concerns in the market right now, and people need all the help they can get. So, it’s really cool to be part of that journey and give that support to people.
Nolan: Yeah. Interesting that you bring up the updating of data, because it really matters. It’s one of the things that, so far, I haven’t been able to get my models to handle well. I mentioned something, and I think you even commented on it on LinkedIn. I was talking to Claude, and I gave it an Excel file, saying, you know, ignore the headers. I then said, “These are the headers I want you to use.” And it said, okay. And then it gave me an answer. I’m like, you didn’t ignore the headers.
It’s like, “You’re right.” I’m like, “Try again.” It’s like, “Okay, we fixed it now.” And I’m like, you didn’t fix it. And I said, “Start from scratch, erase everything, go back to the beginning.” And it’s like, “Okay, we’ll do that.” Did it fix it? No. I said, “You’re not hearing me.” And I caught myself talking to Claude the way I talk to my kid when they ask me “why?” 30,000 times.
And I just had to stop and laugh; it’s a fun, easy example. But when you talk about something like HR policies, they get updated very frequently, or maybe you’re even referencing an external source, right? You’re referencing the state of California’s sexual harassment policy and pulling that in. Well, when that gets updated, what’s happening on the back end? So, it’s a very good point that there is a limitation.
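Incidentally, the spreadsheet-header battle Nolan describes is the kind of deterministic task where a few lines of ordinary code are more reliable than negotiating with a chatbot. A small sketch, with made-up data and column names, assuming the sheet has been exported to CSV:

```python
import csv
import io

# A small stand-in for the exported spreadsheet (hypothetical data).
raw = """OldHeaderA,OldHeaderB,OldHeaderC
alice,42,2024-01-05
bob,17,2024-02-11
"""

MY_HEADERS = ["name", "score", "signup_date"]  # the headers *we* want

reader = csv.reader(io.StringIO(raw))
next(reader)  # deterministically discard the original header row
rows = [dict(zip(MY_HEADERS, row)) for row in reader]

print(rows[0])
```

Here "ignore the headers" is a single `next(reader)` call: it either happens or it doesn't, with no model deciding to be helpful about it.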
Understanding AI Limitations and Data Integrity
Markus: Yeah. And when it comes to data, people always say they want clean, well-structured data, but they often underestimate what that means, and you have to do your homework in that regard. It’s like when people first put data in the cloud: it had to be structured properly for cloud workloads. Then people started talking about data lakes in organizations; you can’t just upload your file system to SharePoint, because that’s not how a data lake works. And it’s similar here. If you say, well, our chatbot just answers the frequently asked questions we have online, it might not be that simple.
You say, we have all of them covered on a web page, and on internal pages we go even deeper, so we’re ready. Well, if you plug into an openly available model that might have all the answers, what happens if that model, after an update, tries to be unbelievably helpful? Someone says, “Your CEO gave me this discount code and said that with it, I just have to show up here and I’ll get a full refund. The CEO promised me this.” Nowhere on your website does it say that; if someone makes up that story, then it’s definitely a no.
But if you’ve been really explicit with your model that it should do everything to be helpful, then if 100 people try that on, maybe a handful will suddenly end up with a full refund because they made up a story about having a code they were promised would work. The system might apologize and say, “Oh, I’m sorry. This must be an error on my end. Here you go, here’s the refund.” So, in those moments, we can laugh a little bit, or, as you described, the model acts like a bit of an idiot. Ranting at the computer has already become a classic.
But at the end of the day, a company’s reputation and brand are on the line, especially when it’s externally facing. So that’s why understanding the system better and recognizing the limitations is crucial for everyone involved, even for those just using their copilot to prompt themselves. If you understand the use cases, what it does well, what it doesn’t do so well, and you’ve worked with it a little bit, then you can make an informed decision. You’ve played around and practiced. Your brain will adjust, and you will suddenly have great ideas for when to use it and when not to.
And so it’s less about prompting, it’s more about if you know why you’re doing this piece with it, and you know what evidence you’ve got and what you want as an output, well, you’ve just given yourself the prompt by structuring the problem in a way that you think this works really well for an LLM. Now, your prompt doesn’t matter that much anymore because you’ve given it all the outline parameters, and you think this is a problem that the LLM should handle well. So here we go, boom. Then the research also tells us that the prompt makes a very little difference.
Nolan: Yeah, I just had this conversation with my wife about it. I’m reminded of when people said, “Stop citing Wikipedia.” Just as Wikipedia is not automatically right, the internet is not the answer to everything. When it first came out, we didn’t know that. We just thought, hey, it’s published, it must be right. It’s online, therefore it is a fact. Because everything else we had seen in print was, for the most part, vetted: if we see it written down, it’s got to be some type of fact. And then, over time, we realized what we could trust, what we couldn’t, and what the gray areas were. So maybe we’re not going to do that anymore.
And, you know, I think as you use it more, like you said, you kind of bed in; you find the areas where you’ll use it with high confidence that it’s right, and the areas where you’ll use it with very low confidence, where you just need something to get you part of the way there. And the more you know, the more you can say things like, don’t lie to me, don’t make up numbers. So, those are a lot of don’ts and some trepidation around the models. But you published eight stories, and I’m sure, aside from those eight, there were several other conversations and things that didn’t make the cut.
Best Practices for AI Implementation
Nolan: Where are you seeing the top uses of AI today? You know, September 16th, 2025. What are you seeing, the absolute dos of AI?
Markus: That’s a good question. It’s also a bit of a tricky question, because I don’t want to promote a certain sector or a certain application, so I’m going to sidestep it a little. I see the best use cases where people have thought their use case through properly from the start. And thinking something through isn’t reinventing the wheel; these are just the basics. What happens with innovation is that people often see the shiny tool and act on their initial reaction, which is more like that of a six-year-old in a toy store.
Wow, imagine what we could do here. And then there are also the doubters, who say to start with the problem in mind. They look at the others with their Toys R Us approach and say, see how much better we are. I step back from both, and I say you need to do both. When it comes to innovation, you need to approach a tool with the Toys R Us mindset and go, my God, imagine what’s possible. Then you also sit down and jot down all your problems, the frictions you have, or where you’re underperforming, and you start to think about what kind of solution works. To be innovative, you must think a bit beyond the blue sky; you must think a bit crazy.
But that’s not the hat with which you make your final decision. So, if you have done both, you bring them together and do your homework. You look at what’s most reasonable, what’s within budget, which timelines are good, and where you can measure ROI relatively quickly to see if it’s doing what you want it to do. Then you’ve done your homework. Then you’re ready to run a use case.
The second part of your homework involves examining the transformation needs of those who are involved and who might use or interact with the tool. That’s the key thing to get right. The best tool that no one uses is not the best tool. The best tool is the one that most people use to achieve the highest effect. That might only be a mediocre tool, but if your transformation has run well, you’ll get much better ROI than if you bought the best tool and no one’s using it.
So that’s part two of the homework: the human transformation piece. If you do your research and find the right vendors to partner with, you’ll often find they are partners in the true sense. These aren’t off-the-shelf, SaaS-style promises. These are co-productions where you have a vision, and no, the features won’t come off the shelf. And you see that in a lot of the use cases in the report.
And when you partner in that way with an external partner, or with an internal team developing an in-house solution, depending on what kind of organization you’re in, then you’re at the forefront on September 16th, 2025. Then you’ve done your homework and approached it the right way. You haven’t been distracted by shiny object syndrome, and you haven’t been held back by the fear that an experiment might not work out. That’s another one. In sales and marketing, it’s been normal for many years to run experiments and celebrate the winners.
When marketing has a smash-hit, home-run campaign, no one asks, “Why didn’t the five before it work?” The whole point was to find the one that works and lean on it. It could have been any of those six, right? And that’s normal in marketing. But especially when it comes to HR, learning, and talent development, we are accustomed to every project having to succeed.
We’re not used to testing to see how good this one is compared to that one. With an entrepreneurial mindset, we run five and accept that maybe one or two are real winners, the others are learning or normal progress, and maybe one or two didn’t quite work out the way we thought. That’s completely normal in other parts of the business, but in our part, it’s still new thinking. So do your homework, consider the people transformation, and think about how it enters an experimental phase, where it’s not about winning every use case. It’s about having a small portfolio and moving at a pace that allows you to pull the brakes, pivot, readjust, or acknowledge that it just didn’t work the way we had hoped, but we tried.
Nolan: Yeah, I can echo that. I helped launch our AI offering. It’s not a product; it’s consulting services and the like. So, I get brought into most conversations with learning leaders. And after the call, usually my sales rep at the company will say something like, “This is great. You can tell they’re excited. They want a proposal tomorrow and to enact it next week.” And I tell them every time, every single time: listen, the person who says, “This is great, show me a demo next week for my boss, we want to get moving next month...”
A hundred percent of the time, they sign up zero times. It’s absolute: that person will never buy from us at all. Contrast that to somebody I met a year ago in an exploratory conversation. They came back to us a month ago and said, “Hey, we’re ready.” And I said, okay, yeah, it will take a month. But those are the people who have let the idea marinate: I saw the shiny thing; now let me dig a couple of layers deeper. Let me see everything in the light of day. Let me look at the human element. Let me look at who will use this. Now I’m ready to act. Now I’m ready to come in.
Okay, partner, come back in and work with me on this. I absolutely think you’re spot on. Those engagements have gone really successfully. The other ones usually sputter; they start a little bit awkwardly. That’s always the case. And the other thing I think you really nailed is that partnership mentality. It reminds me of when AWS became a big deal in the IT world; I don’t know if you did much work in cloud computing then. I remember clients would say, “We just really need an expert on cloud computing. I need somebody who has 10 years of experience with Amazon Web Services.” And I would say, you realize Amazon Web Services is two years old? And I think with AI, if I understand you right, Markus, when you mentioned partnerships, you’re kind of saying you must realize that these partners are building it with you. It’s not going to be: cool, Infopro is an expert, we’ll have no problems, they’ll come in with AI, and boom, we’re done, now we’re an AI-first company. No, it’s working alongside you to help you reach that promised land, rather than: just give me their strategy, I’m done, I don’t need to invest myself.
Markus: Yeah. And that’s also something that should, or could, come up in a proper conversation, where a vendor might say, “We’ve had two similar use cases. Let me tell you about the mistakes we made. Let me tell you where the easy wins were. Let me tell you which parts of the project will be more straightforward and why, and which parts will need a bit more handholding, trial and error, and fiddling to get right.” And then you go: now I’m listening to someone who’s consulting me and helping me. That’s the kind of partnership and vendor I want to work with.
So, there’s a huge opportunity here. At conferences, we see a lot of potential buyers walking around just hoping they’ll meet someone who, instead of just chasing pipeline, scanning the badge, and getting the meeting in the diary, will use the first five minutes to be a source of really good information. Of course selling is their job, but they could also share a bit about other projects. And I’m not talking about hero slides. I’m just talking about: yeah, we’re going through something similar, and it’s not always as easy as you think it is. Have you heard from others what they do well? I can tell you a little bit about what we’re doing with our clients.
Have those five minutes, and trust me, you’ll have a far better conversation booked in. I still see a lot of vendors get that wrong. And the poor sales teams: the expectations placed on them have gone up, right? They are not AI experts, and they are still figuring out their own company’s use cases and how they work. I always smiled when, in the first 18 months, I went to vendors and asked, “So what plugin are you using in the background?” And they’d say, “We’re building our own.”
You’re not Salesforce; you’re not building your own LLM in the background. No, you’re not. This is an LLM plugin. Go and ask someone who can tell you which one you’re plugging in; it’s probably OpenAI. But go and ask someone, because I’d like to know how this is structured. And there were even one or two who insisted, “No, we’re building our own.” Uploading your information into someone else’s tool is not building your own. That’s not quite the same thing. But yeah, we must appreciate that the challenge goes both ways.
I’m talking very flippantly here, but if you’re in a sales role at one of these organizations, there’s a huge expectation that you can suddenly explain AI, explain large language models, and build that trust with the customer. Wow, what a job. But the opportunity is there, because customers and potential clients are looking for help. They’re actively looking for help. And the best way to get rid of them is to just ask them the three standard sales questions, where you think you’re taking them down the funnel and you get them booked in the diary, but you haven’t had any conversation that has fostered any trust, right? That’s my view.
Navigating AI Use Cases With the ‘Workforce Solution Navigator’
Nolan: Yeah, I absolutely agree. It’s the conversations where you’ve been able to teach them something that have really helped. So, one of the things we said we’d do is present some case studies. There are eight wonderful case studies in the Endeavor Report. If you haven’t downloaded it, we’ll have some links available so you can get it, but go through and read them. And at the bottom, after the case studies, the report includes the Applied Workforce Solutions Navigator. I want to make sure we cover this. Can you talk a little bit about what that Navigator is and how people are leveraging it?
Markus: Brilliant, yes. So, the Navigator is just my interpretation of the use cases I was encountering and how I categorized them in my head. That’s it. Like any good framework, it’s not the be-all and end-all; it hasn’t reinvented the wheel. It just helps you think things through and ensures you don’t forget aspects. So that’s the starting point. I saw two pathways that people follow. One question is: Are we enhancing operational excellence? Are we doing exactly what we were doing previously, but better, faster, or both? Or are we doing something completely new that wasn’t there before the technology? That gives you one dimension to look at yourself: is this faster, better, more revenue, or is this something completely new?
Nolan: Yeah. Did I make a better mousetrap, or did I make a new trap altogether?
Markus: Correct. So that was one dimension. And the other is: are we putting a completely new solution together ourselves? Some tech companies do that, or they co-create with a vendor. Or is it more about going out, finding a vendor, and integrating what’s already there? Those two are very different, because the timelines and the kind of roadmap you’d have to go through are very different. And once you map those two dimensions against one another, you end up with four quadrants where your use case might land.
And the trick is that no quadrant is better than another. They’re all equally good; it’s just a categorization to help you think. If you’re doing something you’ve done before but can now do more efficiently and better with technology, and you’re working with an outside vendor who has the solution, that is the quadrant I call efficiency accelerators.
And when you’re putting an efficiency accelerator in place, that just means you must ask yourself certain questions. You need to approach it a certain way; you have to do your homework in a slightly different way than if you’re building something completely new that was never there before, where you’re building it in-house and now also must convince your people that this new process, with a new tool, is here to stay.
That second case would be a capability creator. Each of the quadrants just gives you a different mindset for thinking about your use case, a few tick boxes to go through, and a way to do your homework so that you can have a successful one. That’s the four quadrants in a nutshell, where none is better than another. It’s not that you want to be in the top right or need to avoid being in the bottom left.
That is not what this is about at all. In fact, good organizations will have a portfolio of use cases with one or two represented in every quadrant, because you would be missing out if you didn’t have some efficiency accelerators, things you’re already doing that you’re accelerating with technology. But you would also be at a disadvantage in most sectors if you weren’t building something completely new that wasn’t there before, which might give you a real edge over your competitors.
So that’s it in a nutshell. If you look at the report, the eight use cases are spread across those quadrants. And we even have a couple of use cases that can’t be pinned to exactly one quadrant, which again shows their complexity. It’s not an exact science where you’re either exactly here or exactly there. You can have an overlap of maybe two quadrants, and that’s the story I try to tell in the report. We explain why we think that way and why it helps to consider the use cases in this fashion.
Nolan: Yeah, and it’s great that that’s in there. Because, I think... what’d you call yourself? What was the title? Was it the “No Silver Bullet” Guy?
Markus: Not the silver bullet guy.
Nolan: So, when you think about it, so many people ask: well, what is the AI approach, or the tech transformation approach, or whatever it is? But being able to say, more than likely it’s going to fit into one of these four buckets, and when it does, here are some things you need to consider; I think it’s helpful to break that down for people, especially in something like this with so many interconnected parts. To be able to ask: what is the goal of this program? Where does it fit? And then, what questions do I need to ask? That’s so helpful.
Because especially in the field of AI, whatever path you’re going down, it’s more than likely the first time you’re working on a solution like this. It’s not like you have 20 years of experience enabling your organization to use AI or accelerate with AI. It’s your first go-round. So, having a roadmap is really helpful. And for those who haven’t looked at the Endeavor Report, I really encourage you to check it out. It’s free. You can download it. We’ll put a link to it as well.
Finding the Balance: Leaning In vs. Holding Back in AI Adoption
Nolan: Markus, before we head out, I’ll give you a couple of takeaway options. You can either leave people with an “if you’re not using AI for this, you should be; it’s where I’m seeing the most value,” or with “here’s the one thing I see most people get wrong with AI.” So, you can choose either a cautionary tale or maybe a silver bullet.
Markus: I’ll position myself a little bit in the middle. So, here’s the thought process. When people begin these projects and consider these use cases, they often think you need a certain personality and must be in a specific type of organization to do this. And they often have the heavy lean-in types in mind, because that’s what social media shows us: those leaning in heavily, like cowboys shooting into the air like crazy, not caring if they’ll hit anyone by accident.
And then on the other hand, we have the hesitators. They want everyone else to make the mistakes, and they’re not going to do anything themselves. And we’re told we must now choose between these two personalities. That’s complete nonsense. If I work at a tech company in a totally unregulated space, so not in pharma or healthcare, then experimenting and leaning in is going to be the way forward, because we’re trying to gain an edge over other organizations doing something similar, regardless of which department we’re in. But leaning in doesn’t mean going wild when it comes to use cases.
Leaning in means finding out as much as possible about what a tool can and can’t do, and then considering how it might work for us, both with the tools and with the problem. And guess what? The leaning-in crowd also says no to four out of five use cases, because they gathered all the information, put it on the table, and said, “This isn’t right.” Even in an organization without many regulatory constraints, that’s how it ends up. Of course, if you’re in a highly regulated environment, you might have to look through seven or eight options to find one that’s okay.
But guess how much you’re learning along the way: about compliance, about SOC 2, about where the data is stored, about what kind of terms and conditions you’re signing and how they’ve changed in the last six months. All of that is a learning journey. None of that looks like cowboys going, “Yes, let’s go, we’re going for every single use case out there.” And the other crowd, the hesitant crowd, might be hesitant because they’re highly regulated.
They might be hesitant because the company or the sector is risk-averse, perhaps due to clients, and necessarily so. So they don’t just choose a cowboy or hesitant personality; they might simply be in that kind of role. And there too, the better companies are, to a large degree, looking at possible use cases and exploring the limitations. What are the constraints?
How can we address data privacy, bias, and other concerns in a way that allows us to justify our actions to clients and stakeholders? They might then say no to many things and come up with alternatives. So, in any sector, if you’re looking and exploring proactively, no one will force you to hit go, and you’re going to learn a lot along the way. Even in highly regulated areas such as pharma and healthcare, to take two typical examples, I work with people who are leaning in heavily to find out what might work and when the tools, regulations, and market are ready.
And they’re also running use cases, just slightly different ones than their less regulated peers. Don’t let social media tell you that you must be one type of person. As a professional looking to advance your team, function, or company, depending on your perspective, you should be asking: what opportunities do we have? Which of those opportunities satisfy the rules and guidelines we’ve given ourselves, or that others have given us? And within that playing field, where can we operate, run a use case or two, and learn even more?
That is what the golden middle gets right. You don’t have to be on either end of the spectrum. You should be someone who thinks these things through diligently with their team, has a good idea, and also expects there to be some learning along the way, both in the theoretical research about a potential use case and, like we said earlier, when you run it: there’s going to be friction. No solution that never existed anywhere in the world before has ever come off the shelf and run smoothly from day one. So why would that change with AI? Our sector, amongst others, loves a good silver bullet story: this thing is going to solve all our problems; we just have to sign on the dotted line. That has never worked.
So, position yourself in the middle. There is no single thing you should be pursuing, because it depends on where you sit, whether it’s right for you, and whether it satisfies the regulations. And there’s no specific personality you have to approach this with, innovative or otherwise.
No, just do your homework. Where are the opportunities? Which ones do you have to say no to, with a good reason for saying no? And which ones will you say yes to, with a good reason for saying yes, not just because you’re excited about the upside, but because you did your homework and you think the risks are within the boundaries that you or others have prescribed?
Nolan: Yeah. So, what I’m hearing is that the cowboy was on one end of the spectrum. What was on the other end?
Markus: Those holding back; the hesitant ones.
Nolan: Holding back. I feel like it’s a new Likert scale: from cowboy to holding back. The advice is to answer two or four, you know, slightly holding back or a bit of a cowboy, or be right in the middle. So, either be a bit of a cowboy or hold back slightly, or be neutral, but avoid being a full cowboy or fully holding back. Very sound advice.
Closing Thoughts
Nolan: Markus, thank you so much for joining us. I appreciate you spending some time with us, and I look forward to the opportunity to do it again soon.
Markus: Yeah, thank you very much for having me. I really enjoyed the conversation, and who knows, maybe there’ll be a next one; I’m looking forward to it as well, Nolan. Awesome.
Nolan: Absolutely. Thank you. Bye.