Dr. Ashwin Mehta, Founder and Director, Mehtadology

Dr. Ashwin Mehta is an AI strategist, learning technology expert, and founder of Mehtadology. With over 14 years of experience across industries, he advises organizations on integrating AI to drive learning, performance, and workforce transformation. Holding a PhD in learning technology adoption and an MBA, Dr. Mehta is a sought-after keynote speaker, consultant, and thought leader. He leads AI masterclasses, contributes to global discussions on ethical AI, and produces original research as well as podcast content. Through Mehtadology, he helps businesses cut through the AI noise and implement strategies that deliver measurable, sustainable impact — the kind that matters in real-world transformation.

Nolan Hout, Senior Vice President, Growth, Infopro Learning

Nolan Hout is the Growth leader and host of this podcast. He has over a decade of experience in the Learning & Development (L&D) industry, helping global organizations unlock the potential of their workforce. Nolan is results-driven, investing most of his time in finding ways to identify and improve the performance of learning programs through the lens of return on investment. He is passionate about networking with people in the learning and training community. He is also an avid outdoorsman and fly fisherman, spending most of his free time on rivers across the Pacific Northwest.

AI has moved beyond buzzwords—today, it’s a boardroom priority for learning leaders worldwide. Yet, moving from curiosity to practical application remains a challenge. In this insightful episode, Dr. Ashwin and Nolan unpack how organizations can begin their AI journey in L&D, starting small, experimenting with purpose, and aligning efforts with business outcomes.

Listen to the episode to find out:

  • Why defining a clear purpose is the first step before adopting AI.
  • How existing tools you already use may have AI built in—and how to unlock them.
  • The role of governance and risk appetite when choosing AI solutions.
  • How to approach the best-of-breed vs. all-in-one tool debate.
  • Ways to experiment safely with AI through pilots and free trials.
  • How to foster AI literacy and innovation culture within L&D teams.
  • Why measuring business outcomes, not just L&D metrics, is critical for success.
  • The future shift from content-driven learning to AI-supported copilots in the flow of work.


Most likely, many of the tools you’re already using incorporate AI. The easiest first step is figuring out which ones do—and ensuring you’re using them to their fullest.

Dr. Ashwin Mehta,

Founder and Director, Mehtadology

Introduction

Nolan: Hello, Dr. Ashwin Mehta, welcome to the podcast.

Dr. Ashwin: Thank you for having me.

Nolan: It’s a pleasure to see you again virtually. We’ve connected in person at Learn Tech events in London and Brandon Hall. Always good to reconnect with you and have you on, especially talking about a topic where you’ve definitely made a name for yourself in the L&D, training, and HR industries: AI.

What we want to discuss today is how to get started with AI. Many conversations I have with other CLOs and learning leaders reveal a great deal of excitement and buzz about what we can achieve. People have seen the tools, strategies, and methodologies, but actually getting from zero to one—getting it started—seems to be a big stumbling block for many. So, that’s what we’ll really talk about today.

Where Are We in the AI Adoption Cycle?

Nolan: But let us start, Ashwin, with a “state of the state”: where does L&D stand with AI today, and where are we on the adoption cycle?

Dr. Ashwin: It’s an interesting question, and we have to set the context first. What do we actually mean by AI in L&D? Most of the L&D departments and companies I’ve worked with in the past have had some AI already working for them. If we consider the platform space specifically, many companies have LMSs or LXPs. They might have recommender engines, some search capability, or pathway creation.

These are AI functionalities that are part of the bread-and-butter cycle of L&D, typically involving content platforms and some form of data, without anyone really calling them out. So, it’s easy to be scared off by the terminology, but it depends on what we mean. Traditional AI use cases in technology platforms and Software as a Service (SaaS) products have been prevalent for many years, and Learning and Development (L&D) has been utilizing them to make content accessible to learners.

It’s probably not the case that we’re saying nobody’s ever used AI before, and now we have to start using it. So, it’s important to ground our conversation in that. What we’re probably discussing is generative AI, a topic that has been debated for the last two years, following the launch of ChatGPT, Microsoft Copilot, Gemini, and similar tools.

The question is how to effectively start using some of these tools. If we take what I just said—content, infrastructure, data—then a very simplistic way to look at L&D is this: L&D departments either use suppliers or do it themselves. They create content, post it on a platform, and learners access it through that platform. When learners access content on a platform, it produces some data. This is the simplest way of looking at it.

Now, if we look at it in that very simplistic way, we need to be thinking about purpose first. We want to use generative AI. Why do we want to do that? Is there a reason? Will we enrich the learning experience? Are we going to make more content? Are we going to reduce the cost base of L&D? Do we want to shift headcount into other areas, such as coaching, and utilize AI to handle the digital content production side? Therefore, there needs to be a tangible reason for doing it, as that guides you into what you are going to do. If you jump into doing this thing without a purpose, then inevitably, you’re going to get it wrong.

Nolan: The foundational question of everything. The first five episodes of the podcast focused on performance-led questioning in L&D: why are we doing this? Let us not just do this to say that we did it. What is the actual performance metric that we’re trying to improve? Let’s see if AI can help us get there.

Understanding Business Metrics vs. L&D Metrics

Dr. Ashwin: Absolutely. It adds a couple of interesting nuances to the discussion because L&D, generally speaking, has been incentivized to focus on L&D metrics. Batch throughput or continuous throughput, what are we looking at as a factory, as a content factory? “I produced 10 items last year; now I want to produce 100 items.” That’s one way of thinking about metrics. However, we’ve long had this ambition in L&D: to tie learning strategy to business strategy.

That raises the question: what's the business doing with AI? What's the business doing with activities, tasks, and roles, and which of these can be partly automated or better served by AI? And where do we need learning interventions, where AI helps us make the digital media? Those are two very different conversations. That's why this purpose is so vitally important: we need to determine where to deploy the best technology available to us.

Choosing the Right Path: Content vs. Operations

Nolan: When we start down these two routes, we have a starting point: whatever route we choose, we need to have a purpose. It’s going to seed our investments, seed our funding. We’ve created two clear paths. We’ve said that we have these business metrics that we can work with and help the business improve. Perhaps I’ll implement a GenAI bot, or perhaps I’ll integrate a sales agent into my sales workforce, allowing me to ask questions and receive real-time answers, and ultimately create a customized sales pitch for my team.

Whatever it might be that would improve metrics such as average order value, time to sale, pipeline velocity, and pipeline value. Then we have the complete other side, which is, okay, how quickly can we produce an hour of content, or an asset? Of those two areas, Ashwin, which one have you seen to be an easier path?

Dr. Ashwin: It depends on whom we are talking to. When discussing with L&D departments, the content production piece is significantly easier. If we're talking with operations, it's the other way around. So, the locus of the discussion changes what the discussion is about. The reason I mention this is that we should be integrated with what the operational side of the business is doing, because ideally we want to support the skills, the throughput, and all the other things operations does. Therefore, it's essential to be involved in that discussion, or at least be aware of it.

Nolan: One thing to mention, and this is a series, so for those listening: if you like this, there's more to come. One of the other things we'll discuss is the future of AI and how it influences your path. And I think the point absolutely holds: before you even start, even though you can't boil the ocean or plan 30 years in advance, there is a lot of value, when we're talking about getting from zero to one with AI, in spending even a couple of hours with other executives to understand where the business is headed and where the technology is headed, even if you have no intention of getting there in the next year or two.

That gives you a good foundation for structuring which of these paths you're going to go down and where you should seed investment today, versus the things that are nice to know about when you hear of them but that you're not going to tackle right now. Because in many organizations, choosing what not to do can sometimes be a harder conversation than deciding what to do.

Dr. Ashwin: Absolutely. And I think we'll touch on this a bit more when we discuss data in one of our other discussions. However, first of all, we have a purpose. What are we actually trying to do?

Starting with the Tools You Already Use

Dr. Ashwin: If we are trying to go down the content route, and I know you guys have significant content studio capability, then we probably start asking which content or media development tools we are going to use, and there is quite a landscape of those tools. I'll discuss that in two separate parts. Part one is: what tools will we use? This tends to be the bit that everybody gets really excited about. Will my tool be mentioned? Will I be able to create videos more effectively? The reality is that there are many tools available. There are benchmarks for various tools, some of which are specific to the video production space, others to the image generation space, and still others to the avatar space.

However, if we consider the basic use case of an e-learning developer, they typically use a tool like Articulate, Captivate, or Elucidat, or some other authoring tool. I think it was last month that Articulate entered the AI race. They've said, "We've got an integration with OpenAI," or whatever it is that they do. I haven't really looked into it in much detail, but they now have an AI capability. A lot of the other tools we tend to use in the video production or avatar space—Synthesia, Colossyan, our own—have had this kind of capability for quite a while, and beyond that in the animation space. L&D departments were probably already using these tools, and AI snuck in. You can create a course for relatively little.

Say, "make me a course on X," and it will make the scene structure for you. That's why I say: what's the easiest step to getting AI into your workflows? The tools you're using probably already have some AI capability. Figure out which tools do and which don't, and make sure you're using them as fully (or as sparingly) as your purpose requires. So, relate it back to that purpose. That's probably the easiest step.

However, the reason I suggested splitting the discussion into two parts is that with great power comes great responsibility. With tools comes a little bit of governance. Most enterprises have some governance in place for how technology tools are implemented within their environment. With that in mind, it's helpful to understand that the tools you already have now carry new features and functionalities. Do they still meet your governance requirements? Are they still within what was assessed in terms of risk? Are you releasing any of your data into the wild? Is everything still secure? These questions need to be asked. That's the simplest step: use what you're already using; it's likely to have AI in it.

Now, keeping governance in mind, if we start to think about evolving to a slightly different tool set, moving outside of the e-learning development world, you have a lot of tools, for example the Midjourneys of this world, or things like Runway and Flux, where you can manipulate images, create images, and then create videos from images, so you create a sense of movement. Adobe made an announcement, I think last week, where they were exploring the rotation of 2D images, and it just does it. They had a little guy fighting a dragon, and the sprite of the guy was facing the camera; they said, "We want him to look at the dragon," so they just turned him around, and it did it.

That’s quite impressive. So, you start to get some of these other capabilities. If we look at the Photoshop side of things, specifically the Adobe suite, for editing video or images, there’s a lot of generative capability in those tools now. So, we start to build the landscape. I’m not particularly calling out tools for any reason other than to say that we now have a landscape of media generation tools, all of which have amazing capabilities compared to what they had two or three years ago. And they all require a little additional governance around them.

The reason to keep mentioning governance is that some enterprises I’ve worked with have been fairly blasé about, “you can just go and use tools, and we’ll deal with it later. Use it for a pilot, and it’s fine.” Some organizations, particularly regulated organizations, are very strict around “you cannot bring anything into the environment unless you’ve done lengthy and detailed governance around where data centers are, and how data is processed, and all these kinds of things.” Therefore, it depends on your business’s risk appetite to determine which of these routes is applicable in your context.

Nolan: This reminds me of a conversation I had with someone, maybe a year ago. I was showing them our generative AI tool, like a copilot but with a lot of learning elements added in, to help with prompt engineering and with creating storyboards, video scripts, and things like that. We were demonstrating the flow of first using this generative AI content tool to tell us what to do. We load in product manuals and other related materials.

Essentially, we're trying to democratize the SME. Everybody has this big problem: all this knowledge is locked in my SME. With AI, you can take a 10,000-word document, find the answer within a second, and then go to your SME and say, "Hey, just validate: is this right or wrong?" versus "Tell me everything in the 10,000-word document." But I digress.

So, I was showing them our tool for that, and then I went to Colossyan. I said, "Now you can take this, and look, you can just create this avatar, then take that video and that avatar and convert it into any language, export it into SCORM, and then create different e-learning from it on this tool." And the question was, "What is your advice on using a separate best-of-breed tool for each of these jobs, versus one all-in-one platform? Isn't Articulate just going to come out with something that has all of it in-house?"

As those of us who follow the tech space know, incumbents like that are traditionally the laggards, because they can't afford to get it wrong. They can't afford to launch AI and have it blow up in their face, or suffer a security breach and lose everything. So, with them launching it this past month, I think a lot of people are wondering, "Can I get away with just one tool?" Have you advised people one way or the other, or seen an approach that works better for one organization than the next?

Best-of-Breed vs. All-in-One Tools Debate

Dr. Ashwin: There’s not really a one-size-fits-all, and the reason is that we have to go back to the purpose, think about the learning strategy. Some learning departments will have a very small group of learners, and they want very high-quality experiences. Some learning departments will be serving hundreds of thousands of learners and want to focus on scale. For these reasons, we need to focus on the business strategy and what works for that business.

Generally, I would say that we’re still primarily discussing development; we also have several other factors to consider. In the development space, I think all of the learning designers I know would say, “What about design? We’ve got to think about design!” So, we’ll talk about that in a second. But in the development space, you could, and this is generally what I think, say, “what I need is one good image generator, I need one good video creator, I need one shell that’s capable of giving me a first draft with a bunch of scenes, and maybe an avatar tool and maybe a couple of other things.” But you say, “these are my typical pedagogies, this is usually what I do in terms of design, I’m looking at scenario-based or experiential learning or problem-based learning, whatever it is that suits the appetite and suits the mode of my business.”

These are the things that reach my learners because we’re effectively in a competitive space. We’re competing for attention, competing with work priorities, and other things that happen in people’s lives. And generally, what we’re trying to do is make a stimulus that reaches a person such that when they receive that stimulus, they somehow create knowledge in their minds, and they go off and do different things than when they did before. We can borrow a lot from the field of marketing in this respect, because marketing involves reaching out to people and trying to influence behavior through the use of media. It’s basically what we’re doing.

In the marketing space, you think about adverts; they are competing with other products. We are competing for time. So, if we’re competing for time, we need to leverage the best of our creative design capabilities with a typical range of tools that those folks want to use. So, it depends on your business, but it also depends on your designers. And that’s why we then move into the design phase.

Nolan: My advice to people with tools in general, as somebody who's led our marketing and tech stacks and built an LMS and an LXP, is to find software that solves a big pain point and does that really well. Don't worry about edge cases or things you rarely use. Go back to the purpose. If you have a large number of avatars in your training mix and library, and you have different languages, find a tool that reduces that production effort by 30%.

A gain like that will gladly pay for the tool. Don't worry about "well, can Articulate also do it?" or "can this tool also do it?" If your in-house tools can, so be it. But find a tool that works for you today and has some future potential. I think what happened to a lot of organizations I've talked to this past year is that, as you said, we're on roughly a two-year cycle now, and we're in the second year.

When I started talking to people, year one was a lot of skepticism. Now it's more "how do I get started?" skepticism, rather than "what is this?" skepticism. And I think a lot of organizations punted because they were afraid to choose the wrong tool. But if they had just started, even the inefficiencies of starting, even the cost of choosing the wrong tool or the wrong platform at this point, would have been greatly outweighed by the results. If they had just gone with one and said, "even if we only use it for a year, or six months," it would have paid for itself.

User-Friendly Tools and Streamlining Design with AI

Nolan: So, we’ve covered that step. Now, what’s next? We’ve discussed development, and you mentioned, “Hey, we have this whole other one” – what’s the next one?

Dr. Ashwin: Just one more point on tools, building on what you said. I recall the days when we used to have debates over Cubase versus Pro Tools. Some people were using Pro Tools, while others used Cubase, their digital audio workstation of choice. They worked quite differently. The interfaces were different: where are the buttons? How do I get it to do this, that, or the other? How do I add channels?

So, all of the functionality was very different, and it was a different skill set to use one software versus another. Nowadays, that’s no longer entirely true, as user interfaces and mechanisms have converged. Most of the software looks very, very similar. Therefore, the skill set required to pick up a new piece of software and then utilize it is much lower than it used to be.

Nolan: Absolutely. Very good point. I use this very terrible graphic design tool called Paint.NET. I use it because it's free and I've been using it since college; it's not that my company won't pay for Adobe. That software is around 20 years old at this point. Like you said, tools have converged. It used to be that you knew one tool or the other, and having to learn a new one had a huge impact on adoption. We just became partners with Colossyan, and it required zero training. I went into the tool, started poking around, and was able to do what I needed within 30 minutes. So yes, I absolutely agree; it's very user-friendly.

Dr. Ashwin: So, let's pivot slightly to design, because we're still in the tech space at the moment. You conduct your analysis, design, and development—the usual waterfall ADDIE approach. The analysis and design stages are the thinking part, and they involve experts: the individuals who collaborate with you to provide expertise for a learning intervention. Now, those folks' time is precious. They have lots of other competing demands. So, in a business, do we want them to be creating a first draft of something? As we mentioned in the previous section, many tools and technologies can create a shell, generate a first draft of a script, and ultimately minimize that burden.

But what if your tools don't do that? How do you get your designers working back and forth with some of the foundational models, to think about pedagogy, about that first draft, about objectives and how to satisfy them, and about all of the things we hold near and dear as key points in the journey that starts with analysis and runs through design? These are content pieces, and I'm not saying that content is the way forward. But for those who are still creating content, thinking not only about media development but about the entirety of that journey is potentially a way to get started with AI.

Has anybody tried using these tools? Have you used ChatGPT to generate a design outline or a first draft of a script for yourself? This kind of thing is quite easy. You mentioned prompt engineering earlier, and with a little experimentation and, dare I say it, a little bit of online discovery—look on YouTube or Coursera or anything for prompt engineering courses, there are thousands of them out there—you can figure out pretty quickly how to streamline the process that goes from an expert or a business demand through to something that is almost a first draft. You can take that process and use a lot of AI to achieve it.
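To make that back-and-forth concrete, here is a minimal sketch of prompting a foundation model for a first-draft design outline. It assumes the OpenAI Python SDK with an API key in the environment; the model name, prompt wording, and course inputs are illustrative stand-ins, not a recommended recipe.

```python
# A minimal sketch: asking a foundation model for a first-draft course outline.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in
# the environment; model name, prompt, and inputs are illustrative only.
from openai import OpenAI

client = OpenAI()

PROMPT = """You are an instructional designer.
Draft a scenario-based course outline on: {topic}
Audience: {audience}
Business objective: {objective}
Return 4-6 modules, each with one learning objective and one practice activity."""

response = client.chat.completions.create(
    model="gpt-4o",  # swap in whatever model your governance allows
    messages=[{
        "role": "user",
        "content": PROMPT.format(
            topic="handling customer escalations",
            audience="first-line support agents",
            objective="reduce average escalation resolution time",
        ),
    }],
)

# The output is a first draft for a designer and SME to critique, not a course.
print(response.choices[0].message.content)
```

The point is not this particular prompt; it is that the expert's scarce time shifts from drafting to critiquing.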

Nolan: I'm implementing this internally at Infopro right now in a couple of our back-office cycles. I realized I needed to take my own advice to heart. At first, I thought, "Maybe I can just wait for our corporate org to roll out Copilot, then start loading in our assets and make it our own." But then I realized that building my own world in Copilot, or within any GenAI tool I'm allowed to use, is not very complicated.

Most of the options available to get me 80% or 90% of the way there, even if they only get me 50% there, take little effort and require no code. You don't have to understand code or development; a basic understanding of how the GenAI process works is sufficient. And if you do understand it, you can build a working model that takes you really far, really fast.

Overcoming Fear and Building AI Literacy

Nolan: But for those who might be scared to start, what are those areas that are scary for people? What’s on the minds of people who aren’t doing it that might be a roadblock for them to start?

Dr. Ashwin: It’s a whole variety of things, to be honest. Some organizations, early on, in November 2022, when we were all able to play with ChatGPT a little bit, some organizations decided, “nobody can use this. We don’t know enough about the risks, we don’t know enough about the security, so that no one can use it.” For those organizations, I’m sure that was the right decision. However, this also introduces a certain level of fear among the worker population, as messages don’t always disseminate uniformly throughout the organization. So we have this very strong mandate: “nobody touch this stuff.” And then we have perhaps the trickle-through effect of, “Actually, we can use it for a little bit of this or a little bit of that.” Those messages have trickled through much less strongly than the mandate to say no, because mandates to say no tend to be widely disseminated.

So you have a little bit of that fear of, “well, you said no, you said don’t touch the button, and now you’re saying it’s okay to touch the button. I don’t know if the button’s hot or not. Why would I touch this?” This natural human inclination to ask, “Should I really do this?” is potentially healthy because, when it comes to AI skills, if your organization as a whole is AI literate, you can have a robust and sometimes thorough discussion with those around you. You can use AI in these specific use cases; however, we don’t recommend using it in other use cases.

Perhaps it’s useful for X, and perhaps it’s not useful for something else, or maybe we want everyone to be experimenting. So, whatever the organization is saying that it wants people to do, that needs to be accompanied by a baseline of AI literacy, such that people understand those messages and can then act on them for a competitive advantage in the business. That’s one reason why some of the adoption has been a bit low.

Dr. Ashwin: Other businesses, of course, have been very quick to adopt and say, "everybody go and play," because it's going to affect our position in the market, and we want to be the best. So, we want everyone to play, go, invent, and experiment, and that's also fine. However, the skills gap, I think, comes not only from this idea of "let's say no and make sure that people don't play with it," but also from media scaremongering and similar factors. If people aren't working with AI at work, they're often not really using it at home either.

As a result, you get a lot of noise and chatter, and then it may become something you don’t want to do. I believe the last point is related to concepts such as self-confidence and self-efficacy. AI seems like this big, scary thing. “I don’t know how to use it. Should I try to use it? Am I going to need to learn coding to use it?” You’ve got all of these things as well. The reality of the situation, as I mentioned at the start of this conversation, is that most of the things you’re used to using already incorporate some AI.

Nolan: One of the things that caught my attention was the intelligent search within our LMS and LXP, one of our selling points. Even four or five years ago, we were using intelligent search to provide better answers and implementing LLMs and similar features within the tool to create more effective responses. That was a significant selling point, and I don't think people really thought of it as "well, this is AI," but it really was. It's not a script where you enter X and it outputs Y; it learns from the strings of things entered and gradually improves over time. So, yes, I think a lot of that stigma is there.

Encouraging Experimentation and Free Trials

Nolan: Maybe speaking of that, what have you seen as good advice for L&D organizations in particular that are in that space, whether it’s at the CLO level, or maybe you’re a CLO yourself and you sense that this is happening in those layers below? What have you seen as a good tool to break down those barriers a little bit?

Dr. Ashwin: Well, the best tool is education. We're in the learning space, and the cobbler's children have the worst shoes. Therefore, we must ensure that the L&D organization is equipped with the necessary skills and an awareness of current developments. Let's face it, it is a luxury to be able to innovate and experiment with the various products available on the market. There are plenty of courses available; you can take one of them.

But if you go and experiment, play, and figure out what is useful and what doesn’t work for you, then you’ve probably learned quite a lot just by doing that research for yourself. So, I would generally encourage people to go and play. And that’s not exclusive to the foundational models, as many market tools offer free trials and similar incentives that encourage experimentation with them. All of those things are helpful.

Nolan: That's how I built my first e-learning course with AI. We had this tool; I'm not sure if you're familiar with Slice, a knowledge-base product. Someone had touted it in a meeting of mine: essentially, tell it the course you want to create, load in some assets, and it will generate an e-learning course. I thought, "No way it's actually going to do that." I didn't believe it. So I went and tried it, and I loaded it up. What it did for me, as you mentioned, was show me how these tools work. And I think this is a really important point, and I hope a lot of people take it: as Ashwin said, a lot of the technologies are similar now.

That lowers the barrier to adoption, and it makes experimentation transferable. You can experiment with Colossyan, Slice, or any other tool, and the likelihood is that even if you don't land on that tool, or you go with another one, it's going to be similar. You come to understand the fundamental layer of how these vendors structure the automation and the AI within this space. So there is a ton of learning.

I remember building an understanding: "Okay, what is the step-by-step process they're going to take me through to build this course? How much of it will I build before I start removing things? Will I build it first and then, is it going to add the images?" So much of that you learn just by playing. The act of playing is actually how you learn.

So it's all connected. I think that's really good advice, especially for those in the development, production, delivery, or design space: take advantage of free trials. Just about every SaaS product out there offers one; use it, and report back on the pros and cons. That's what we did internally at Infopro Learning when this started back in November, a couple of years ago. We said, "Go play and come back," and every week somebody reported on a new tool: "here's what I like, here's what I don't like." Tremendous lessons there.

Dr. Ashwin: Absolutely. And, of course, if it’s a free trial and you don’t like the product, remember to cancel it at the end of the trial period.

Moving Beyond Content to AI Agents

Dr. Ashwin: So that’s effectively content development, content design, and working back and forth with not only market tools but also working back and forth with foundational models, GPTs, Co-pilots, and things like that. As we move into more complex spaces, we start to revisit our purpose. We start to think about slightly different purposes. We began this discussion by exploring how to utilize AI to create content effectively. We discussed various platforms, including content platforms and data management.

However, let's name that paradigm: we create content that serves as a stimulus for people to learn something, and then hopefully they learn it, apply it, practice, and hone that skill. If we move away from that paradigm for a second, towards the probably more realistic one: we've now got copilots. If you're a Microsoft house, you've got Copilot; if you're a Google house, you've probably got Gemini, plus ChatGPT, Claude, and all the others.

All these models have a tiered structure, and that tiered structure allows you to say, "If I have Copilot, I can ask it questions." As an L&D person, you might want to request content designs, but we are now at a stage where your learners also have access to Copilot. So, if they want to know something, are they really going to stop what they're doing, go to an LMS, find an e-learning course, and sit through 15 minutes of click-next e-learning? I'm not saying it's going to be that bad, but you get the idea. We're moving further and further toward accessing information in the flow of work, because with tools like Copilot and Gemini, among others, we now have that capability.

So there’s probably a point at which L&D departments need to work with IT departments to figure out how are we going to embed the skills—the basic literacy in terms of AI and data, because you can’t underplay the importance of data when it comes to AI—how are we going to embed all of these skills in our organization such that people can start to be creative with whatever tool set we have embedded in the organization’s productivity tools?

That’s a very different way to think about it, and it begins to shift the focus away from the idea of content. If you’re doing that, then you start to move into the territory of, “we’re going to use foundation models. We’re going to use things like Retrieval Augmented Generation (RAG) for supplying a foundation model with our company data in a secure way, and then allowing the responses to be way more contextual and way more accurate relating to our processes.”
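As a rough illustration of that RAG pattern, here is a toy sketch: embed company documents, retrieve the ones most similar to a question, and let the model answer only from that context. It assumes the OpenAI Python SDK and NumPy; the documents and model names are invented, and a real deployment would add chunking, access control, and a proper vector store.

```python
# A toy Retrieval Augmented Generation (RAG) sketch: embed company documents,
# retrieve the ones closest to a question, and let the model answer from that
# context only. Assumes the OpenAI Python SDK and NumPy; all content invented.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Expense claims over $500 require director approval.",
    "New hires must complete compliance training within 30 days.",
    "Sales discounts above 15% must be logged in the CRM.",
]

def embed(texts):
    # One embedding vector per input text.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

doc_vectors = embed(documents)

def answer(question, top_k=2):
    q = embed([question])[0]
    # Cosine similarity between the question and every document.
    sims = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n".join(documents[i] for i in np.argsort(sims)[::-1][:top_k])
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Answer using only this context:\n{context}\n\nQ: {question}",
        }],
    )
    return resp.choices[0].message.content

print(answer("Who has to approve a $700 expense claim?"))
```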

Everything I’ve just said now shifts from the idea that I can utilize the tools I have, which incorporate a bit of AI (first steps), to perhaps second or third steps, where you want to consider something more complex. Now, depending on the organization, you may have an implementation of something from OpenAI or Microsoft that allows you to input documents and receive a response, enabling everyone to do so. You may want to create bespoke agents to support individuals with specific needs.

This is where you start to get into this territory of, “Is this scary? Do I need support? Will this require some coding? Is it going to require X, Y, and Z?” You start to get into this. There’s a barrier to entry. With that barrier to entry, of course, comes, “how do I do it? Do I need people in my organization? Do I need to liaise with the IT department? Do I need a supplier to do this?” This becomes a little more challenging. That’s why I say this starts to be a step two discussion rather than a first steps discussion.

Nolan: Absolutely. That's where we started when we said, "Let us understand the landscape of AI." I really do think that's part of it. Even if what we're talking about now—creating an agent for people to answer questions—seems five years away, it's important to know that it exists today and that it is step two or step three, wherever you might be, because that is the eventuality.

Once Google was created, and subsequently Wikipedia, sales of encyclopedias and visits to libraries likely declined. And now people go to libraries for completely different reasons. My family and I probably go once a week, but rarely because that's where we find the information we're looking for. We go because we live in northern Idaho; it's dark and cold by 4:00 PM, and we need a place to keep the kids occupied and stop them from tearing our house apart. It's a nice, peaceful place to hang out, where they can draw.

So, it's essential to know where this is headed. As you said, that's the one thing I'm doing right now. I realize that getting my agent to a 1.0, loading in my documents and getting an answer, is something I can do relatively easily. Honestly, anybody can do that. But then, how do I serve it to my people? Where does my agent live? In my case, I'd like it to live in Microsoft, with the agent sitting in Teams, which is something many organizations are implementing now within Slack, Teams, or other platforms.

And a couple of years ago, we implemented a chatbot on top of our LMS and LXP. However, we soon realized it was much more valuable when it wasn't just pulling in content from the LMS, but from other sources too. That created this whole other problem: if it's pulling in information from everywhere, where should the bot actually sit to serve people best? It's very rare for an employee to go to their LMS when they have a question. They have 10,000 other places they go to.

Dr. Ashwin: This is going to be one of the key considerations for the industry over the next, let's say, five years, for argument's sake. The existing paradigm I described earlier—content, platform, data—effectively goes away if we adopt a copilot model. And I mean copilot not in the Microsoft sense, but in the sense of humans working with AI as their supporting agents. Therefore, it will be something we all need to consider. It's something the platform manufacturers are, of course, considering: how do they augment their offering to make it more rounded? So, yes, it's something we need to think about.

On the question of "where does it live," which you mentioned: it has many different answers. It could be where the agent runs; it could be where you're hosting it; it could be how it's being accessed. All of these are slightly different ways to think about it. In the Microsoft sense, if you are using Microsoft products such as Windows, you can use Copilot through Edge, so effectively you have access through the tools you're already using. The Wave 2 announcement from Microsoft, which was around two months ago, included Copilot integration across most of the Office suite. So Word, PowerPoint, and Excel all gained Copilot functionality. To some degree, they're ahead of the curve and solving some of those issues.

If you're building your own, which is what you were describing, where does that live? That again depends on your organizational setup, and it depends on how you're building it. The interesting thing is, it used to be that it almost had to live in one place, because the idea of it living in multiple places meant it would spawn a never-ending string of versions of itself, and you'd never actually know where it ends. Today, with everything so connected, I suppose the question is more, "where wouldn't you want it to live?" It should live everywhere.

Key Factors: Strategy, Governance, Tech, Data, Skills

Nolan: Is there one more component we want to discuss, Ashwin, or should we come back now and summarize?

Dr. Ashwin: The key components: strategy and purpose. Make sure you're doing something purposeful, and make sure you have the right governance in place. If you're using technology, ensure you've surveyed the market and know what's available. Data is something we need to touch on, but we'll address it in another session; data is the fuel for the AI engine, and that's a significant topic. We then touched on AI and data literacy, and on a culture of innovation that aligns with the skills the business requires.

So, I would say the key factors to consider when implementing AI are strategy, governance, technology, data, culture, and skills. However, how each of those manifests in an organization is ever so slightly different, depending on factors such as business strategy, risk appetite, regulation, the underlying culture, and the baseline literacy within the organization. So, it’s difficult to say there’s a one-size-fits-all approach, but I would say everybody should consider these things. And that’s not only for AI implementation in L&D; it’s also how the L&D department serves its organization and aligns with business strategy, as well as for AI implementation company-wide. So, this is now a tech strategy. The factors will be the same; the considerations will be different.

Nolan: Very early on in my marketing career, I was fortunate enough to go out to dinner with the owner of our company, and he said, "Nolan, what is the value of a lead to you? What does it cost? Do you know how much you spend on a single lead and, the inverse of that, how much money you generate from that lead?" He said, "If you don't know that, you can't diagnose any problems associated with it. It's impossible for you to determine what the problem is or what the payoff is for solving it."

Measuring Impact and Linking AI to Business Goals

Nolan: I feel like so many companies would benefit from starting with that phase zero of "why am I doing this?" If you attach a dollar figure (or some financial figure, or headcount, whatever it is) to what this AI improvement brings to your organization, you know the direction the business is going and where it wants to go. And if you run a financial analysis of what this could do for your organization or your department—whether that's learning, HR, marketing, sales, whatever it is—and actually put dollars and cents behind the problem, I've always found those problems become much more real and much more important for you and your organization to solve, versus just, "oh yeah, if I got this done, I'd be able to create assets quicker."

Well, what does 30% faster mean? What is the dollar value? What is the payoff at the end of the day? I always feel that if you can really nail that, it greases the wheels for everything that comes later.
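To illustrate, here is a purely hypothetical back-of-envelope calculation; every figure below is an invented assumption, and the point is only the method of turning "30% faster" into dollars.

```python
# A back-of-envelope calculation of what "30% faster" is worth in dollars.
# Every figure here is a made-up assumption; substitute your own numbers.
hours_per_course = 200      # production hours for one course today
courses_per_year = 25
loaded_hourly_rate = 75     # fully loaded cost per production hour, USD
speedup = 0.30              # the claimed 30% reduction in production time

annual_hours_saved = hours_per_course * courses_per_year * speedup
annual_savings = annual_hours_saved * loaded_hourly_rate

print(f"Hours saved per year: {annual_hours_saved:,.0f}")   # 1,500
print(f"Dollar value per year: ${annual_savings:,.0f}")     # $112,500
```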

Dr. Ashwin: I would make that slightly wider and say, if we take this to the L&D development side of things, which is what we’ve talked about over the last 40, 50 minutes or so, if we take it there and we say, “forget AI, let’s not talk about AI for a second, let’s just talk about the principles of what we do.” Now, the principles of what we do: we are going to spend some money on an intervention, which means other people have to spend their time. We spend money so people can spend their time. That’s a cost; there’s a cost associated with all of that. Now, if we’re going to spend company money, which all of this is, why are we doing it? What are we measuring?

And we routinely create e-learning courses. Take a 30-minute course: if a thousand people complete it, that's 500 hours of their time. What was the reason for doing that? That reason should be measurable in some way. So, are we doing A/B testing, or pre-post testing, or something that ties to a business metric: the time it takes to do a particular operation, the number of sales calls we made in a particular month, whatever it happens to be?

Something that’s operationally measurable should be different after we’ve spent this particular amount of time, and we should be able to measure it afterwards, either pre- and post-intervention or by comparing two groups: an intervention group and a non-intervention group. So, we have an A/B-style test.
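As a minimal sketch of that A/B-style test, the snippet below compares an invented business metric (weekly sales calls) between an intervention group and a control group; all data points are made up, and it assumes SciPy is available for the significance test.

```python
# A minimal A/B-style check of a business metric after a learning intervention.
# The metric (weekly sales calls) and all data points are invented; assumes
# SciPy is installed for the two-sample t-test.
from statistics import mean
from scipy import stats

control      = [41, 38, 45, 40, 39, 44, 42, 37]  # no training
intervention = [46, 49, 44, 51, 47, 45, 50, 48]  # completed the program

uplift = mean(intervention) - mean(control)
t_stat, p_value = stats.ttest_ind(intervention, control)

print(f"Average uplift: {uplift:.1f} calls/week (p = {p_value:.3f})")
```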

I'm describing this, and it's fairly basic stuff, and you're saying, "yes, yes, yes." But think of all of your clients—you've got a lot of clients at Infopro Learning—you make e-learning for lots of them. Every single bit of e-learning you've made should have a measurement behind it. How many of them do?

Nolan: That is the subject of, I think, at least seven podcasts I've done on this one topic. When I started in this industry 14 years ago, that was what I encouraged our company to pin itself on. It was our tagline, "learning for performance," for a long time. Interestingly enough, we did a lot of things to try to move people there step by step. We created something called an outcome success plan, where we say, "If we do X, we will achieve Y," categorically, at a program level, not at the level of a 30-minute asset, because we realized that was a more straightforward starting point.

And we still didn't quite get it. It was still tough to go back and get the data. You mentioned getting the data ahead of time; most of the conversations wanted to go, "we'll see what happens once it's done." It's like, "no, we need to know where we are today and where we want to go, so we can design the program to get us there," versus designing the program and then afterwards looking at a couple of things to see what moved.

Oddly enough, Ashwin, what moved the needle the most, which was a really interesting concept, is that we started going to our clients and saying, “Listen, we think this program should win a Brandon Hall award,” because most companies that come to us are doing large things. They’re not like, “hey, here’s a 30-minute course.” It’s a large initiative they want to do. We said, “This should win a Brandon Hall award.”

One of the five main components, and it’s the big one they focus on, is “what is the impact?” So that little thing of, “we’re going to do this Brandon Hall award, but if we do it, we have to know the impact, so we have to know where we are today and where we are going,” oddly enough, that has had the biggest impact on actually creating that incentive for a lot of our customers.

I think it’s because it is a framework, it is a foundation. I think there’s still a little bit of, I don’t know if “fear” is the right word, but you are sticking your neck out there when you go to sales and say, “What metrics do you want me to improve with this course?” Because you’re naturally implying that they should get better once they’re done. So, you’re taking a bit of a risk.

Dr. Ashwin: The reason I mention it is that the obvious question with AI is going to be, "What are we trying to change? How do we measure that this has actually been successful once we've done it?" If we're not in the habit of asking those questions, obtaining those answers, and measuring outcomes for regular non-AI interventions, then the issue won't improve once we start adding additional technology.

Nolan: Then you're the same company that spent a million dollars on AR headsets just because you wanted to see what they would do. That's never the best strategy. What a great way to end an hour, coming all the way back to the beginning, almost like a Christopher Nolan-directed podcast.

Closing Thoughts

Nolan: Thanks, Ashwin, for this series. And for those who liked this: again, it's a series, so please follow our channel on Spotify and connect with us on LinkedIn, where you'll see the other sessions posted. Phenomenal content. Thank you again, Dr. Ashwin, for joining us today. We look forward to the next one.

Dr. Ashwin: Thank you, Nolan. Thanks for having me.
