Dr. Ashwin Mehta, Founder and Director, Mehtadology
Dr. Ashwin Mehta is an AI strategist, learning technology expert, and founder of Mehtadology. With over 14 years of experience across industries, he advises organizations on integrating AI to drive learning, performance, and workforce transformation. Holding a PhD in learning technology adoption and an MBA, Dr. Mehta is a sought-after keynote speaker, consultant, and thought leader. He leads AI masterclasses, contributes to global discussions on ethical AI, and produces original research as well as podcast content. Through Mehtadology, he helps businesses cut through the AI noise and implement strategies that deliver measurable, sustainable impact — the kind that matters in real-world transformation.
Nolan Hout, Senior Vice President, Growth, Infopro Learning
Nolan Hout is the Growth leader and host of this podcast. He has over a decade of experience in the Learning & Development (L&D) industry, helping global organizations unlock the potential of their workforce. Nolan is results-driven, investing most of his time in finding ways to identify and improve the performance of learning programs through the lens of return on investment. He is passionate about networking with people in the learning and training community. He is also an avid outdoorsman and fly fisherman, spending most of his free time on rivers across the Pacific Northwest.
We often hear about personalization in L&D, but are we truly reaching the individual learner? In this episode, Dr. Ashwin and Nolan explore how AI and data can move us beyond generic content toward real customization—where interventions are tailored to behavior, performance, and business value.
Listen to this episode to find out:
- Why personalization in learning is often misunderstood and how to redefine it.
- The types of data essential to customizing learning at different levels (object, pathway, platform).
- How generative AI can scale personalized content and messaging.
- The connection between behavioral data and effective intervention design.
- Why most L&D teams still lack the right data despite technological advances.
- The different categories of AI applications, from content generation to intelligent agents.
- What virtual agents mean for real-time, adaptive learning experiences.
- How organizations can segment learners by performance gaps to tailor development efforts.
Most of what we call personalization today is really just segmentation—true personalization requires behavioral data and individual context.
Founder and Director, Mehtadology
Introduction
Nolan: Hello, Dr. Ashwin Mehta. Welcome to the podcast.
Dr. Ashwin: Thank you for having me.
Nolan: Or, welcome back to the podcast, I should say, as part of our series. Thanks for joining us again today. Today, we will discuss the role of data and personalization in AI. And I thought a good starting point, Ashwin, might be to level-set on your perspective that AI in learning has three core components: content, platform, and data. Could you set the stage for us on what makes data such an important component of AI, and then we can dig into the details a little bit?
Understanding Personalization
Dr. Ashwin: Yeah, I’ll set the stage with two things, not one thing. As usual, let’s go off-piste.
Nolan: Yeah, above and beyond. We’ve been going for two minutes and we’re already somewhere.
Dr. Ashwin: The reason I say that is that it’s quite a common question to be thinking about data and AI, what’s the relationship, and why should we be thinking about these two things? However, another aspect you mentioned in the introduction was personalization. And I’d like to start the discussion by tweaking that a bit.
Personalization: what does that word mean?
It’s something we’ve discussed for quite a while in learning, borrowed naturally from marketing: the concept of personalizing things, including messaging, content, and the overall user experience. I’m struggling to think whether we’ve ever achieved this, or whether I’ve ever seen it, in the learning space. The reason it’s difficult is that personalizing means to the individual: not to the role, not to the geography, not to the country, not to a particular business unit, but to the person. I state that up front: personalization should mean that.
Nolan: That is an interesting wrinkle, because personalization has, I think, increasingly gotten harder to do. It’s the Shrek principle: the first Shrek took 20 hours to produce, and although the technology enabled Shrek 2 to be produced a hundred times faster, they also made it a hundred times more complex, because the tools had gotten that much better. Personalization makes me think back to when the first iPhone came out; part of the pitch was the ability to personalize it. Hey, personalize your experience, this, that, and the other; you can download the apps you want. And then there’s the personalization you mentioned in marketing.
There are many different tiers.
We were discussing account-based marketing before this call, and it’s almost as if you throw your hands up, saying the best personalization we can do is deliver marketing to someone within a company. I know a person who works for this company; therefore, they’re relevant, or I am familiar with their industry. Personalization is occurring at both the industry and job levels.
A good way to frame what we’re after here: the individual is the focus, and I’m personalizing to this individual at that level of depth, because I think we now actually can, versus what we had before. When those tools first came out, a big thing they were pushing was, we can personalize the content experience for your people.
And then you peeked under the hood and asked, what does that actually mean? If they’re in sales, we’ll provide them with sales content. Yeah, that’s role-based. So it’s very good context setting.
Dr. Ashwin: Role-based, sure. The reason for talking about personalization is that the question was, effectively, what’s the role of data with AI? And the role of data with tech, magic, whatever terminology you use, whatever widgets we’re using, is essentially about understanding your customers, learners, employees, and the business itself.
When targeting an intervention that involves a technology solution to address a specific problem, as we mentioned in the last session, articulating your problem is crucial. The more you understand about the problem, the more you understand about the person you’re trying to reach with your message, the more you can tailor it to their needs.
It’s a very simple thread, and I haven’t even mentioned AI in that thread. Right? This is the simplicity of it. What do we know about our people? And if we know a lot about our people, we can start to leverage that data to customize the message. And if we customize the message, it lands more effectively. That’s the narrative structure of this. The use of data and AI is a separate discussion.
Dr. Ashwin: Because everything we know about artificial intelligence comes down to this. It’s the simplest explanation in the world, but bear with me, right? We use computers to analyze data to recognize patterns. With generative AI, we use computers, data, and patterns to generate something that resembles the pattern.
Nolan: Yeah. Yeah.
Dr. Ashwin: If that’s what we’re doing, we can already see in that tiny, tiny explanation that there’s a lot of data required because we need data to make computers work, right? Very, very simple stuff, right? Now, we need data to train our models.
When we use models, we’re also generating data.
When people interact with models through call-and-response, that interaction itself generates data. Now we potentially have a much richer data picture than when we trained a model. And now we potentially have something, some data around behavior. Now, let’s return to the original explanation. We use things that are grounded in data.
Whether it’s activities, technology, or whatever, it is grounded in data to hopefully convey a message more effectively. The data we have on people’s behavior, if analyzed, should help us deliver the message more effectively.
Because we know how somebody behaves. Now, this is a very nebulous concept we’re discussing; it’s not grounded in reality, so let’s bring it into reality for a second. Previously, we discussed content, infrastructure, architecture, and data. In the content space, content creators, developers, and others upload their content to a platform, where it resides in a specific pathway. That gives us several levels of abstraction to consider when thinking about personalization.
However, it requires us to understand what people want, what they do, and how they behave. This is a question about human-computer interaction. If we abstract the levels for a moment, we have content objects and media objects that sit within content structures, which in turn sit within course or module structures, which are then placed on platforms within pathway structures. And those pathway structures might sit in clusters, plans, or groups.
That’s a simple abstraction of the taxonomy learning uses. Now, suppose we apply the principles of personalization and data at all these levels. What we get is the potential to personalize content objects at the bottom: the video I see differs from the one you see.
It’s in my language. It’s got somebody who looks like me telling me things in a way that relates to me. It resonates with my behavior, content objects that sit in a content structure. The narrative from A to B that goes through various objects could be different for me compared to what you see. That could be based on data. I’ll discuss what that’s based on in a moment.
Then the series of courses I go through, the next level up, in a pathway, the sequence of events, could also differ based on my preferences and knowledge. And then, clustering above that, we have things that are probably starting to become a little generic, because we might be talking about role-based.
Nolan: Skills or something that? Yeah. Yeah.
Dr. Ashwin: We have levels of abstraction.
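The levels of abstraction Dr. Ashwin walks through (content objects inside content structures, inside courses, inside pathways on a platform) can be sketched as a simple data model. This is a minimal illustration; all class and field names are invented for the sketch, not drawn from any specific learning platform:

```python
from dataclasses import dataclass, field

@dataclass
class ContentObject:
    """Lowest level: a single video, document, or media asset."""
    object_id: str
    language: str           # e.g. "en-GB" vs "en-US"
    presenter_profile: str  # who delivers the message, and how

@dataclass
class Course:
    """A content structure: an ordered narrative of objects."""
    course_id: str
    objects: list[ContentObject] = field(default_factory=list)

@dataclass
class Pathway:
    """Platform level: a sequence of courses for one learner."""
    learner_id: str
    courses: list[Course] = field(default_factory=list)

    def sequence(self) -> list[str]:
        # The ordering itself is a personalization surface:
        # two learners can hold the same courses in different orders.
        return [c.course_id for c in self.courses]

# Each level is a distinct personalization surface: swap the object
# (the video in my language), reorder the narrative inside a course,
# or re-sequence the pathway itself.
video = ContentObject("intro-v1", "en-US", "peer-presenter")
course = Course("sales-101", [video])
path = Pathway("nolan", [course])
print(path.sequence())  # ['sales-101']
```

The point of writing it down this way is that each nesting level needs its own kind of data to personalize, which is the thread the conversation picks up next.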
Hyper-Personalization at Scale
Nolan: Yeah, and that’s the way I’ve always wanted to think of it. You mentioned the content example, and one of the things we can talk about later is which of those three tiers is hardest to leverage the data to operationalize. But I don’t know if you saw the movie Her.
I believe it was Joaquin Phoenix, and Scarlett Johansson played his AI girlfriend. It’s really interesting because it’s essentially hyper-personalization, right? You mentioned the video; here, the person listening, in this case to Scarlett Johansson’s character, experiences a voice unique to them. And it happens to be female because that is what he is interested in.
However, the same application he fell in love with was also being used by others, who fell in love with it through other avenues. It was an application of exactly what you’re talking about, content personalization at scale, albeit much more exaggerated. But the interesting aspect is that this is really what we’re talking about, and the ability to get there is not science fiction.
Dr. Ashwin: Yeah, I haven’t seen the movie, but I did read about the OpenAI voice assistant when it was first introduced; it sounded like Scarlett Johansson in the film, which resulted in some legal action from her. That’s a whole other interesting topic we could discuss at some point.
Nolan: It did.
Dr. Ashwin: Absolutely. What we’re discussing here is the traditional L&D paradigm: content objects, content structures, pathways, and so on. If we move to the other end and have, effectively, RAG agents, agents grounded in some documentation you provide (that’s also data, by the way), then that level of personalization is almost innate in the thing we’re using. It follows a very different path to achieve personalization.
Broad strokes, we have that taxonomy to play with. The reason it’s important to understand taxonomy is that it provides insights into how we can personalize each of these levels, as well as the types of data required at each level. But it’s still not necessarily an AI play. It’s still a traditional L&D content platform play if we tease that out for a second.
Now, let’s play it out, because that’s probably going to be fun, right? Suppose we, the nebulous “we”, the developers, want to make a video, probably about sales, learning, marketing, or some other topic. We want to create a video that’s personalized for you and for me, but the topic is broadly the same. Let’s call it a one-minute video. What kinds of things might we need to know about each of us that would influence what the video looks like, what it says, and how it sounds?
Nolan: Yeah, for sales, the first thing you’d want to know is the difference between me and you from a knowledge perspective. Have you been in this industry for 10 years while I got started last year? What is my baseline knowledge? Probably also where my performance sits: am I already a peak performer in this topic looking to improve a bit, or is this an area where I’ve struggled in the past and need some help? That’s seniority and job performance. I think gender probably plays a factor too.
Age as well; not seniority, but age: millennial, Gen Z, Gen X, boomer, whatever it is. Locale is huge, right? You’re in Britain, I’m in the United States; we have completely different contexts. I mean, even within the United States, something you deliver to somebody in Seattle is a lot different from what that person in New York wants to hear and see. At a very high level, those are a couple that come to mind. And then on top of that, there are my interests: give me a football analogy, there we go, football, an American football analogy versus a football analogy, but sports analogies, things that would resonate with me.
Dr. Ashwin: Yeah, interests, of course, and ethnicity is a factor as well, right? We have all of these things that effectively give us a clue as to how we might want to tailor differently. Now, regarding the point you made, I’ll throw a challenge back at you. What will that dictate in terms of how we change the video? Or is that the next level of abstraction: changing the sequence or course structure for you rather than for me? What do you think?
Nolan: I think it would be both. The next level of abstraction would probably dictate whether I see this video at all; if we were that different from each other, it’s possible I wouldn’t have been shown it in the first place.
Dr. Ashwin: Absolutely. And to some degree, it’s semantics. Now, we’re talking. However, it provides a clue as to the types of data we need to know about the person. Now, considering the language I use versus the language you use, given that you are in the U.S. and have a different background than I do, we can, to some degree, tease that out. We can understand that and tailor it. We can also think about sentiment.
Am I a particularly negative person who wants to scrutinize the details? Are you a particularly positive person who wants to see the roses and the sunshine? We can start to tease out these kinds of things, provided that we have the data. If we don’t know, then there’s nothing we can do about it. In a roundabout way, that takes us to the point. And the point is: is data important if we’re going to try to do personalization? This illustrates that point somewhat.
If we move up in abstraction to a completely different level, we’re thinking about skills, pathways, and related things. A pathway on a platform typically refers to a chain of content originating from various sources which, let’s say, isn’t particularly personalized. I might go through a series of videos, and the series itself might differ from the series you go through, although the videos themselves, if someone else were to see them, would be the same media. Now, back to you: why do we have pathways?
Nolan: I think it’s because. Not everyone learns in the same way or at the same speed, nor does everyone have the same starting point on that journey. And trying to force everybody through that same journey, at some point, it becomes a one-size-fits-all approach, which was designed to be that way.
Dr. Ashwin: Yeah, it’s filling in the gaps, right? And that requires us to identify the gaps. This raises the question: how much assessment do we do? How much do we understand about people’s skills? How much do we understand about their proficiency, how comfortable they are exercising those skills? Then we reach a point where we understand what someone does and doesn’t know.
And that leads us eventually to the purpose I mentioned earlier. The purpose of all of this is to provide people with adequate support, right? It’s to adapt our learning and teaching strategies so that people have the right level of knowledge to exercise their roles effectively. If we’re taking the responsible approach of supporting everybody adequately, it helps to know what people don’t know. It helps to know where the gaps are, and it helps to be able to articulate that in data, because then we can act. Again, another roundabout way of saying data is important.
Nolan: Yeah, I was discussing skills internally with a consultant, and I was at HR Tech, where a lot of the conversation now focuses on HR tools centered on skills being transitioned, I would say, down the funnel. HR has traditionally focused on recruitment and onboarding, but I think they’re starting to ask, okay, what do we do once this person is in the door?
The Role of Data in Tailoring Learning Experiences
Nolan: That line between HR and learning is merging. And I was saying that much of what we know about a person comes down to the data and how it is used. If we ever tried to do the type of grouping we talked about, how do we know what Nolan knows? Knowing what one person knows might be a little easier, but how
do I know what 10,000 employees in my company know, and how do I do it systematically? That used to be a multi-year process, spanning three years, to obtain this information. Today, however, the timeline between learning what my people know and connecting that knowledge to start grouping people has shrunk significantly with the advent of AI, which lets the feedback cycle run much faster. That matters because skills, and what we want people to accomplish, are constantly evolving. Again, the data is just the input into whatever output we’re trying to accomplish; the data by itself doesn’t give you anything.
It doesn’t do anything for you. You still have to go forth and act, right? The data won’t solve the problem; it’s there to inform you of the problem so you can act on it. And for those who are listening, if you’re wondering what’s changed: yes, all of this could have been said 50 years ago. The difference is that our access to this data and our ability to interpret it are now significantly faster, reducing the time it takes from years down to months, weeks, and days.
Dr. Ashwin: The volume of data is also different, right? With AI, we have a wealth of data available. But here is my reflection on some of the technological changes we’ve had over, let’s say, the last year, particularly in the agent space. When we start discussing things such as the Copilots and OpenAI agents of this world, we begin to realize that as people type or speak into an agent, we are effectively creating a very sophisticated bank of the questions people have.
And when we receive responses and responses to those responses, we’re starting to generate a very sophisticated picture of how people question, how people behave, what people know, and what people don’t know. Now, I reflect that this is fairly standard stuff, but my reflection on all of that is that we are entering a world where we are building increasingly sophisticated data collection tools.
If we are building those tools, which I think we are, then our capabilities to interrogate and scrutinize increasingly sophisticated data need to match. And if they match, then our ability to get insight also changes. That’s a thread I haven’t yet seen mobilize, as it’s emerging. We’re still in the space where we’re focusing on the tools, what can the tools do? And we’re not yet in a space where we’re focusing on having large amounts of transcript data that provide very detailed insights into behavior, sentiment, themes, and how people discuss and engage in discourse with one another.
With that massive volume of data, which will vary from organization to organization, from role to role, and from person to person, I think we will enter the next year with some curiosity and the ability to start scrutinizing transcript data in a very different way. And then hopefully that gives us very different behavioral personalization capabilities.
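A rough sketch of what scrutinizing transcript data for behavioral themes might look like. This uses naive keyword matching purely as a stand-in for real sentiment and topic models; the lexicon and categories are invented for illustration:

```python
from collections import Counter

# Toy theme lexicon. A production system would use an NLP model,
# but the shape of the pipeline is the same: transcripts in,
# per-learner behavioral signals out.
THEMES = {
    "uncertainty": {"unsure", "confused", "how", "why"},
    "confidence": {"know", "done", "already", "easy"},
}

def theme_profile(transcript_lines: list[str]) -> Counter:
    """Count how often each theme shows up across a learner's
    agent transcripts (one utterance per line)."""
    counts = Counter()
    for line in transcript_lines:
        words = set(line.lower().split())
        for theme, vocab in THEMES.items():
            if words & vocab:
                counts[theme] += 1
    return counts

lines = [
    "I am unsure how discounting works",
    "Why does the proposal stage stall?",
    "I already know the product basics",
]
print(theme_profile(lines))
# Counter({'uncertainty': 2, 'confidence': 1})
```

Even this toy version shows the idea Dr. Ashwin is pointing at: the call-and-response data people generate with agents becomes a behavioral picture you can interrogate.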
Nolan: Yeah, a big takeaway I’m getting from what you’re saying is: yes, we’ve always had data, but we’ve all seen the chart a million times about the exponential growth of data points, and that growth keeps accelerating. And part of the, I don’t know if paradox is the right word for it, is that when you start creating a solution, if you’re looking to actualize this and start personalizing,
I almost feel that you ought to put a line in the sand and say, okay, at this point, these are the data sources that we have. We know we’re going to get better and we know we’re going to get more, but we almost have to say, this is it. Not that this is the only data available, but these are the sources, this is how we’re capturing it, and this is how we’re ensuring the data is clean. Because the idea of having all the data needed for personalization, and perfecting that data, is really hard and can be a never-ending journey. Which brings me to a question I wanted to ask: when it comes to data and the role of personalization, we’ve always heard garbage in, garbage out.
Personalization vs. Data Overload: Finding the Balance
Nolan: Now, with a lot of data and a lot of information coming in, what do you think when we discuss personalization? How do we strike a balance between collecting a large amount of data and ensuring that the data is relevant and accurate?
Dr. Ashwin: It’s an interesting question, but I’m not sure I agree with the premise. The reason I say that is that we’re grounding our conversation in learning and development. In the learning and development space, as an industry, are we still at the stage of collecting timestamped SCORM data, or have we entered a new era?
Are we in a space where we have deep, granular data on operational metrics, behaviors, intervention success, what people do and do not know, and repeat interventions, which then forms a picture of skills and knowledge gaps we can effectively address? Because it seems from what you said that we have all this great data, and I’m challenging you on that. That is the relevant data. Do we have it?
Nolan: No, no, I mean, I should say that data absolutely exists. It absolutely does. If I’m delivering sales training to Joe, I know whether Joe’s a good salesman, or I can figure it out. Or I should say, I may not know it, but that data exists, and it is accurate. Now, from what I’ve seen, and we talked about this a bit in the last session, my ability to access, interpret, and connect that data to an intervention that will have an impact is not yet fully developed. The data is available; it’s our ability to execute, find, use, and interpret it where I think we still have a gap.
Dr. Ashwin: I agree with you from a sales perspective. The reason I think sales is a special case is that most businesses with sales functions tend to measure their sales performance reasonably well, because it’s the closest thing to revenue. Everything else, I think, varies from role to role and profession to profession as to how data is captured on the things that matter, right?
Nolan: Let’s take a stab at that. What’s the hardest one? Sales is the easiest, I would agree. What would be one of the harder ones we’re talking about here?
Dr. Ashwin: I think if we go from sales, through things like manufacturing, through to things that are more conceptual, like foundational research and development, and take that as a vertical, it’s easy up here and quite difficult down here. Then going across, we have support services. We have IT-related metrics, where you would typically report the number of tickets and the resolution rate of those tickets.
We have a lot of data on that. We may review aspects such as health and safety, acknowledging that we are aware of potential accidents and take appropriate measures to mitigate them. We know a little about near misses, but there are other nebulous aspects of human factors that we don’t fully understand, for example. I don’t think it’s as mature in every aspect of every business. I think that’s the challenge I’m making.
Challenges in Measuring Performance
Nolan: I don’t know. See, the reason I asked is that I agree the level of data we have is probably not as tangible or believable as sales data. I think back, this was 15 years ago, to a mom-and-pop meat processing facility in Utah. We built software for them that let them track a cow. Any supermarket that called could ask, hey, where did this particular package of meat come from? They could tell you everybody who touched it, where that cow came from, who sold that cow, what that cow ate, all attached to a specific item.
That was tracking a particular piece of meat. But then, at the local level, they could see, okay, how much meat can Joe pack in a day, an hour, a minute, and of what type? And this was a very small operation, worth no more than $10 million. Then I think about a lot of mid-level manager roles, right? That’s really hard to track, I think, for a middle-level manager.
But then I think, if I had to, what could I look at? Could I look at the team as a function? What does that team produce for me? I suppose my challenge back to you, Ashwin, would be: if you can’t measure somebody’s performance, then how are you supposed to improve their performance? What are we doing here, I guess, is the question.
Dr. Ashwin: Qualitatively versus quantitatively. With knowledge work, it’s significantly more difficult to track these things, as some of your examples suggest. In the pharmaceutical supply chain, tracking active pharmaceutical ingredients from wherever they come from around the world is akin to your food safety piece. And I did see a piece on a coffee chain, I think it was Costa or Starbucks, where they were able to apply computer vision to CCTV footage and figure out who was making the most cups of coffee, right? So yes, the data is there. The slight disagreement I’m posing concerns the world of L&D, not the world of operations. In the world of operations, the data might be there. In the world of L&D, do we have data on the effectiveness of our interventions?
Nolan: No, absolutely not. And oddly enough, I’ve seen a couple of stabs at this from companies. There was a cool early-day LRS called Saltbox in Seattle that ended up selling. It was focused on developing better xAPI tracking, and one of its big use cases was with insurance: they were able to say that insurance agents who took these courses were more likely to produce X results. I thought for sure this market was going to take off like crazy, who wouldn’t want this, but it never really did. We don’t have access to that data. And I think that’s one of the biggest issues that ought to be addressed before we can truly personalize, because we have to follow the money.
Dr. Ashwin: Yeah, and to your point, xAPI adoption has not taken off in the way we expected when it came out in the mid-2000s or whenever it was. That data adoption, specifically database adoption, is something we should probably consider. But yeah, if the data exists, we should use it, and it might be as simple as asking for the right data. If we don’t have the data, however, everything subsequent becomes very difficult, because we’re trying to base our interventions on data we don’t have access to.
The Importance of Data in Personalization
Nolan: And I guess that’s my talking point, which is really what I meant when I asked how much data is right, or how accurate that data is. If we have data and we’re talking about personalization, I’m sure many people are wondering: where do I even begin in this ocean of data? And which data matters? I know Nolan likes the outdoors, and he’s male, Caucasian, and so on. I’m not even sure which age bracket I fall into; I’m a millennial, I think, but I was on the cusp, and back then people weren’t interested in tagging people, so at a certain point I moved on with my life.
Is that more important than knowing Nolan is a star performer, or which areas of the business Nolan is better at than others? Because I understand it’s a hard thing. Yeah, I could know that Nolan sucks at sales, or that Nolan’s a bad leader and all his people are leaving him; the attrition data is pathetic. But what do I do with that? I need to create an intervention to change the outcome. And if that intervention is designed for a 70-year-old Black lady out of Australia, it’s not going to hit home for me. I’m not going to interpret it, listen, and then make an impact. So really, where do we strike the balance?
Dr. Ashwin: Back to my very first point on personalization, right? It’s essential to ground ourselves and acknowledge that most of what we’re discussing only truly applies at scale. Because if you need to think about personalization and you only have two people in your team, then you go and talk to them. However, if we have 100,000 people and need to change the messaging even slightly, then this becomes more interesting.
It’s important to note the use cases that you’ve discussed.
As I mentioned earlier, the primary reason for doing this is that we can tailor our support strategies and interventions to various use cases. It’s essential to determine the minimum amount or type of data required. Now, if we’re doing an intervention, as you mentioned a few times in the sales space, what do we know about the sales funnel? What do we know about the process of sales?
Is a particular person failing to get the first meeting? Are they failing to close at the third meeting? Are they getting a lot of withdrawals after they’ve proposed, right? The data you want to be thinking about is: what’s happening in the operational value chain? Why is that? And how can you target your intervention at it?
There’s another level of data abstraction where you say, now that we know which key messages are on the key things that we want to fix in our value chain, the business problem that we discussed, which people need these interventions? How do you tailor it specifically for them? We only really have those couple of levels to play with.
Now, we could discuss adaptive learning, knowledge graphing, and other related topics, but that level of complexity is likely beyond our current scope. If we’re using AI, we need to collect personal data—typically in a JSON or text file format—and tailor our responses based on what we know about the individual. That’s the more complex side of things.
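To make that JSON-profile idea concrete, here's a minimal sketch (not from the conversation; the field names, course identifiers, and regions are all invented for illustration) of storing what you know about a person in a small JSON record and letting it drive which variant of the material they receive:

```python
# Toy illustration: a per-person JSON profile selects a content variant.
# Field names ("name", "region", "role") and course IDs are assumptions.

import json

profile_json = '{"name": "Nolan", "region": "US", "role": "sales"}'

# Map a known attribute to a tailored content variant, with a fallback.
variants = {
    "US": "course_sales_us_v1",
    "UK": "course_sales_uk_v1",
}

profile = json.loads(profile_json)
chosen = variants.get(profile["region"], "course_sales_global_v1")
print(f'{profile["name"]} -> {chosen}')  # Nolan -> course_sales_us_v1
```

The point of the sketch is only that the data sits in a simple, machine-readable record; anything that can read JSON (including an AI agent) can then tailor its response to it.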
Leveraging AI for Personalized Learning
Dr. Ashwin: On a very basic level, ask yourself: What do you know about your value chain? What do you know about your people? Does that give you any insight into whether you should design differently for the US market, the UK market, or another region entirely?
That’s using data to change your message—which is what I emphasized earlier. However, I wouldn’t call it “personalization” because it’s not tailored to the individual. But it still helps us achieve the outcome we’re aiming for.
Nolan: Yeah, it's the three-tier model. What is the skill? What are the underlying components of that skill? And then, who is this person applying these things? As you mentioned, I think a great starting point is collecting data on what brings value back to the company in a person's day, month, or year. How is this person bringing value to the organization?
Or how is this group of people bringing value to this organization? And then, understanding what is good, what is bad, and what is in between? And then start segmenting users based on that. And then that gets you to that core, now I know, as you said, the person is really bad at opening. It makes a terrible first impression.
We now have a whole group of learners who make a really bad first impression, and we've used data to segment and narrow down our problem. The personalization would come if we could say: OK, why does Nolan make such a bad first impression? How can we deliver a message to him that is relevant to why he makes a bad first impression? Or not even why he makes one, but what does he need to hear to start making a better first impression?
Dr. Ashwin: Yeah, and this takes us down the avenue of diagnostic data, right? Diagnostic assessments even. When we think about assessment, we consider diagnostics, formative assessments, summative assessments, and other related concepts. If we consider diagnostic assessments, and to take your point one level further, if we were to say that you’re not getting the foot in the door because you’re doing X and you should do Y.
It requires us to have data on what X is, the thing that you're doing. And if you have that level of data, you can pull it in and speak to people directly. Let's imagine we build a narrative, right? "Dear [name field], you are currently doing [behavior field], but you should be doing [target behavior field]." You could build that as a sentence on a screen, as long as you have the database behind it to insert the name, insert the behavior, and insert the target behavior.
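That "build a sentence from database fields" idea is essentially a mail merge, and it can be sketched in a few lines. This is an illustration, not anything from a real system; the record fields (`name`, `behavior`, `target_behavior`) mirror the placeholders Dr. Ashwin describes:

```python
# Minimal sketch: a message template filled from a per-learner record,
# the "Dear [name], you are doing X, you should be doing Y" pattern.

learners = [
    {"name": "Nolan",
     "behavior": "leading with product features",
     "target_behavior": "opening with the client's business problem"},
]

TEMPLATE = ("Dear {name}, you are currently {behavior}, "
            "but you should be {target_behavior}.")

def personalize(record: dict) -> str:
    """Insert one learner's diagnostic data into the message template."""
    return TEMPLATE.format(**record)

for rec in learners:
    print(personalize(rec))
```

Generative AI then slots in one level up: instead of one fixed template, it can draft many wordings of the same message, while the diagnostic data still decides who gets which one.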
Nolan: Yeah, so where does AI step into the picture? Because a lot of this is data collection, right? As you mentioned, none of this necessarily leverages AI; these are things we have access to right now. Where does AI help us either solve part of that problem or take it to the Mecca of personalization?
Dr. Ashwin: I think there are a couple of different categories for that, and it’s always going to depend on data. Let’s get that caveat out of the way first. Category one, we’re not going to use AI for anything except making content. Now, if we want to create a hundred different content objects instead of one, and we want to personalize them based on the messaging we know from the behavior and target behavior fields, we can now do that. We can do it at scale. We can do it quickly. That’s point one.
Point two is that we have a lot of data. Let’s imagine we’ve collected fields of data and need to recognize some patterns in them. We need to recognize some trends. We could throw this all into Excel, eyeball it, try to find some trends, or we could apply machine learning and get pattern recognition to work for us in a slightly better way.
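As a concrete (and deliberately simplified) version of that second point, here's a sketch of letting the data, rather than eyeballing a spreadsheet, show where each salesperson's funnel breaks down. The stage names and conversion rates are invented; real pattern recognition might use clustering or another machine-learning method, but even simple aggregation surfaces the trend:

```python
# Sketch: segment reps by their weakest funnel stage so each segment
# can get a targeted intervention. All data here is illustrative.

funnel = [  # per-person conversion rate at each funnel stage
    {"rep": "A", "first_meeting": 0.9, "third_meeting": 0.4, "close": 0.7},
    {"rep": "B", "first_meeting": 0.2, "third_meeting": 0.8, "close": 0.8},
    {"rep": "C", "first_meeting": 0.85, "third_meeting": 0.75, "close": 0.2},
]

def weakest_stage(person: dict) -> str:
    """Return the funnel stage where this rep's conversion rate is lowest."""
    stages = {k: v for k, v in person.items() if k != "rep"}
    return min(stages, key=stages.get)

# Group reps by their weakest stage: one targeted intervention per segment.
segments: dict[str, list[str]] = {}
for p in funnel:
    segments.setdefault(weakest_stage(p), []).append(p["rep"])
print(segments)
```

Run on the sample data, this groups rep A under `third_meeting`, B under `first_meeting`, and C under `close`; machine learning earns its keep when the fields number in the dozens and the patterns are not obvious from a single min().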
The third area would be, once we're comfortable using virtual agents and comfortable using either RAG, agentic RAG, or multi-agent models, we could be thinking about how to surface the information that is most effective for each learner based on their data. But that requires us to have that personal data, effectively in a database somewhere.
These three options are easily available. And then we have the fourth option, which is to purchase something and implement it. And in that space, we might be looking at, again, the likes of some adaptive tooling, where we say we want people’s interventions, broadly speaking, to be linked to these sources of data. And you would be engaging a vendor to do that. Alternatively, you might say we want to enhance our rehearsal capabilities.
What we need from our rehearsal capability is a lot of practice: I'm talking into the camera a lot, and I'm getting hints and tips on how my behaviors map to a data model of my most successful salespeople, for example, when we're talking about sales. And I get those hints as I rehearse, so I can sharpen my pitch before the real thing. So that was, what, five categories of using AI to solve that problem.
Nolan: Yeah, the one thing I’m seeing most with our clients right now—and I’m curious to hear your perspective—is the use of AI to create multiple variations of the same program and deliver them to specific audiences. I’m not sure if this has the largest long-term impact, though. A few years from now—five, maybe more—I’m not convinced it will remain as prevalent. As AI agents become more capable, I think this approach will become less dominant.
Previously, we were unable to do this. I couldn't change an avatar from a white American to a Hispanic person who speaks Spanish. But now I can. I can go into Colossyan, click one button to change the avatar, click another to change the language, and it's done within 30 seconds.
That’s where I’m seeing the most real-world application right now. Would you say you’re seeing the same? From my end, implementation is focused on this kind of personalization. The conversations, however, tend to lean more toward either “How do I interpret massive amounts of data?” or “How do I create agents that can communicate effectively?”
What trends are you noticing?
Dr. Ashwin: I’m seeing less activity in the pre-built content space—mainly because those aren’t the conversations I typically engage in. Of course, pre-built content is now more scalable than ever, but the discussions I’m part of tend to focus more on real-time learning and learning in the flow of work.
Multi-modal, real-time content generation isn’t fully here yet, but many advancements are moving us in that direction. By examining different levels of abstraction, we can already obtain a text-based response with the click of a button. We can also get audio responses instantly. And now, we can even engage in spoken interactions—potentially up to a Socratic-style dialogue with some of the existing agents.
Once video enters this space, and the ability to customize avatars becomes seamless—for example, being able to say, “I want to talk to someone who looks like me, sounds like me, speaks my dialect, and uses my preferred language”—we’re talking about a whole new level of personalization.
And forgive the reference, but it’s reminiscent of what the Salesforce CEO mentioned about Microsoft’s Clippy: the idea of having a persistent virtual support agent that’s always available, speaking your language—literally and figuratively. I’m intentionally avoiding terms like “avatar” or “content creation” here, because what we’re talking about is surfacing the right information in real-time, based on user triggers.
This shift—from building content in advance, hoping it’s useful, to surfacing exactly what the user needs when they ask for it—is where I’m seeing the real conversation around personalization happen.
Nolan: Yeah, to clarify—I think that’s exactly where the conversations are heading, right? Conversations tend to precede action. And that’s what we’re all talking about now, because we’re witnessing a major shift—almost a sea change—in how we approach this.
In fact, with this new wave of real-time, responsive solutions, it feels as though you might not even need the other. And by “the other,” I mean the traditional, high-cost, resource-heavy approach that many of us are used to—the one that requires constant effort to keep running. For many, it’s a necessary evil… but still, a pain to maintain.
There's so much more we could unpack. Honestly, one of the biggest topics we haven't even touched could easily be its own podcast. As we push for more personalization, we inevitably need to know more about the individual.
So how much is a company allowed to collect, store, and know about an employee? And what happens to that data when the person leaves? Does the company retain it? Delete it? Who owns that data?
That’s a fascinating area. Here in the U.S., people generally don’t give it much thought. We don’t have GDPR. It’s often like— “sure, do what you want with my data.” However, in Europe, the story is entirely different.
That contrast is going to create some very interesting tension as we move forward. We didn’t have time to dive into it today, but it’s worth asking: how willing are companies to collect employee data responsibly, and how willing are employees to let that data be collected in the first place?
It’s like those opt-in screens—when you set up a new phone or download an app, it asks if you want to see personalized ads. Some people say no. Others think, “Well, if I’m going to see ads anyway, they might as well be relevant.”
Dr. Ashwin: I think people should be way more aware of accepting cookies and things than they probably are.
Closing Thoughts
Nolan: They say Cookiegate is coming, or Cookiegeddon, whatever you want to call it. We'll see what happens when that's gone. But Dr. Ashwin Mehta, as we wrap up (apologies for going a little over time), can we try to summarize this big web we've laid out? Is there a way we can put this can of worms back in the can and leave listeners with a clear takeaway from this episode?
Dr. Ashwin: Yeah, the first thing is this: let’s stop talking about personalization and start talking about customization. They’re very different things.
Second, if we’re customizing, we need to understand what data is required—both from the operational value chain and the individual—so we can tailor messages that truly drive behavior change or skill development in ways that are meaningful to both the person and the business. That’s a much more targeted approach than trying to personalize everything to the nth degree.
Third, we need to recognize the limitations of creating large volumes of content and attempting to personalize it across multiple levels of abstraction—especially when compared to the emerging shift toward virtual agents embedded in the workflow. These agents offer a far more scalable and effective path to delivering personalized messaging.
Nolan: Yeah. At first, I thought you said ‘evil’ instead of ‘able’—that gave me a little laugh there. Thank you so much, Ashwin, for taking the time to speak with us in the second part of this series.
For those who may not be aware, this is part of an ongoing series on AI and its application in L&D. Be on the lookout. If this is the first episode you’re tuning into, we encourage you to check out the others—they’re all available on our Spotify channel.
Thank you for listening. We’ll meet again soon!