Gent Ahmetaj, Head of Insight and Analytics, Mindtools
Gent Ahmetaj brings a diverse background spanning security research and NGO work to his current role analyzing the impact of learning initiatives. His experience studying complex systems—from conflict zones to organizational learning—equips him to tackle challenging problems related to measuring the true business value of development programs. At Mindtools, Gent focuses on understanding what managers need across different career stages and developing evidence-based approaches to support them effectively.
Gemma Towersey, Learning Experience Consultant, Mindtools
Gemma Towersey works directly with clients to understand their challenges and create tailored solutions. With a background in English literature and educational publishing, she brings a unique blend of arts and science to the learning space. Her role at Mindtools has evolved from instructional design to consultancy, with a growing focus on evaluating and measuring learning solutions’ impact.
Nolan Hout, Senior Vice President, Growth, Infopro Learning
Nolan Hout is the Growth leader and host of this podcast. He has over a decade of experience in the Learning & Development industry, helping global organizations unlock the potential of their workforce. Nolan is results-driven, investing most of his time in finding ways to identify and improve the performance of learning programs through the lens of return on investment. He is passionate about networking with people in the learning and training community. He is also an avid outdoorsman and fly fisherman, spending most of his free time on rivers across the Pacific Northwest.
The ROI of training programs is a complex issue, and an effective evaluation process can help. In this insightful episode, Gent, Gemma, and Nolan discuss moving beyond numbers to assess the true learning impact.
Listen to this episode to find out:
- Why measuring ROI for soft skills is challenging, and how to align learner metrics with financial results.
- How to build long-term data frameworks to track key metrics for ROI evaluation.
- Why psychological safety is vital for applying training and overcoming time barriers.
- How Will Thalheimer’s LTEM model assesses attendance, engagement, knowledge retention, and more.
- Mindtools’ AI coaching tools, their benefits, and risks of overreliance.
- Why human validation is crucial for AI data analysis amid declining critical skills.
- How L&D can claim impact using measurable outcomes and award frameworks.
How AI helps people is that if you’re an expert in something and you use AI, it makes you better at it—you become more efficient and productive. If you don’t know what you’re doing and use AI, it makes you worse, because you don’t have the expertise to judge the outcome or understand whether it’s heading in the right direction.
Head of Insight and Analytics, Mindtools
Psychological safety is crucial—having opportunities to practice what you’ve learned and knowing that you’ll get good feedback from peers or that it won’t matter if you make a mistake.
Learning Experience Consultant, Mindtools
Introduction
Nolan: Hello, and welcome to the Learning and Development Podcast sponsored by Infopro Learning. As always, I’m your host, Nolan Hout. Today, we’re very lucky to be joined by not one but two amazing thought leaders in the corporate learning world. We have Gemma Towersey and Gent Ahmetaj. Both work for a company called Mindtools, but in different capacities.
Gent is the Head of Insight and Analytics, where he spends much of his time digging deep into complex data and making it simple for people like me. Gemma is a Learning Experience Consultant who spends much of her time finding that happy mix between creating learning programs that make a business impact and that people also genuinely like, care about, and complete.
As you can tell, both of these guests have different backgrounds and specialties, which makes today’s podcast session so dynamic, and I’m excited to get it started. Today’s focus will be on ROI and how we can measure the impact, specifically of softer skills like leadership training, coaching, etc. But here’s the spoiler: we’ll cover a lot more than that. So, let’s get on with it and meet our guests.
Nolan: Hello, Gent and Gemma, and welcome to the podcast.
Gent: Thank you for having us.
Nolan: Of course, and we have much to discuss today. One of my favorite topics to cover, whether or not podcast guests come prepared to talk about it, is the ROI of training programs. But before we get into that, I would love a brief intro from both of you. One of the things we have found in this space, whether you’re on the tech side, like both of you at Mindtools, or in a corporate learning environment, is that the paths are never straight lines. So, I’d love to hear how you got to where you are today. Maybe we can agree, ladies first?
Gent: You must go first, then.
Gemma: How very gentlemanly of you.
Background and Career Paths: Gent & Gemma
Gemma: Well, I am a Learning Experience Consultant at Mindtools, so I work with clients and prospects to understand their challenges, figure out what’s causing them, and then create solutions that fill the gaps. I do a bit of project management, but I mainly work on the first part of this process.
Nolan: And what did you study in your university days? Did you have an ambition to do this, or were you thinking life would take you one path, and then you ended up here?
Gemma: I studied English Literature, so this was unexpected. I also did a TEFL course (Teaching English as a Foreign Language). I’ve always been a teacher of sorts. I’ve taught piano and other minor things as a side hustle. I started my career in educational publishing, creating digital assets that sat alongside books for teachers and students at school and school curricula materials. Then I moved into corporate learning quite a while ago now. I’ve been at Mindtools for nine years. The role has evolved because I started as an instructional designer and now have a bigger consultancy role.
Nolan: Is there something that draws you to education as a sector? You’ve been in it for most of your life. What is it? Do you love to educate people? Is there something about it that is fulfilling? What do you think is at the core of that drive?
Gemma: I think it’s just fascinating because it has this really lovely blend of arts and science. I could never really settle into school. Although I ended up doing English at university, I also really enjoyed science. So, I just love the fact that you can bring in psychology and behavioral economics and just be a bit of a magpie. Everything that you learn in your personal life, you can bring in as well, and use evidence-led science to create things that work.
That’s why I’ve loved getting more involved in evaluation and working with Gent’s team. The most frustrating thing about being a vendor is that you don’t necessarily hear from the learners you’re making things for. However, with evaluation, once you start to put in well-structured survey questions and work closely with your clients to analyze all of that data, you can start getting feedback. And with Gent’s work, you can start looking at all the data from measuring impact, and then you get a sense of whether your work is great. As the role has evolved, it’s been endlessly fascinating.
Nolan: There’s a big overlap with marketing, which is my background. A lot of what I do today is still in that field. It was fascinating as I started speaking with clients more and more—I realized that there was such a big overlap, and we started to have a good unity in understanding the issues. A lot of the challenge with training today is getting people to care. And that’s what marketing is about—do you care enough about a product?
The nice thing about learning is that it gives you both the science of why people engage and what motivates them. It’s a little bit more fulfilling than if they just spent $50 buying your product. It’s a little bit more heartwarming than maybe a marketing ad agency.
Gent: Can I just say it was so funny that you and I, Nolan, picked up our coffee mugs at the exact same time, in harmony, as Gemma spoke? It was almost like we were doing the same actions.
Where did I come from? My career started in security research. I was very interested in the field and worked for an organization that studied conflict in Libya, Lebanon, Iraq, and Syria, trying to understand the landscape, the reasoning behind some conflicts, and institutional structures.
That was cool. I came across fascinating topics like the resource paradox, where countries with natural resources do worse economically over time than countries without natural resources. There are multiple reasons for this. From a research perspective, it’s very interesting. It’s institutional—they develop structures based on rent-seeking economies. The money trickles downward as they essentially pay off people, versus countries like the UK, where they built centuries of bureaucracy to extract money from people, and those institutions create autonomy and checks and balances between each other.
I then moved on to the NGO and government space and worked in the Balkans for quite some time. It was a natural fit. Then I married my wife, who asked me to return to the UK, and I agreed. I had offers from two companies in similar areas of work—workplace psychology and learning. I chose the one with an office in an abandoned old church that looked terrible, versus the other one with an office in Summerton House, looking over the Thames. I think I made the right decision.
That’s why I am in this position now. Currently, I’m the Head of Insight and Analytics at Mindtools. I look specifically at interesting organizational problems related to impact within the learning, talent, and development space.
About Mindtools and ROI in Leadership Development
Nolan: It seems like your drive is a curiosity to get to the core of an issue. When you look at those countries and ask why, despite having abundant resources that should make them incredibly wealthy, they’re not doing well, it’s not a direct causal link.
That leads us to what we’re going to talk about today: measuring ROI and the challenges that organizations face. Both of you are with Mindtools. Could you give a quick 30-second primer on Mindtools so the audience understands your perspective?
Gent: The ethos of Mindtools is building better managers. We’re all about trying to better understand what managers do, the challenges they face, what leaders do, and supporting them across their careers, different spectrums, ages, and entry points. We do this with resources at the point of need, with specific customized solutions that Gemma’s team creates, with insight solutions that my team provides, or with more off-the-shelf solutions like our content hub.
Nolan: The real value proposition is making better managers. That’s one of the biggest challenges I know many of our clients face. It gets brought up a lot on this podcast: measuring ROI for things like sales training, onboarding, and customer service can have a somewhat more direct link. For example, onboarding costs were a million dollars; now they’re half a million. Attrition went from 13% to 5%.
But how do you showcase an ROI for leadership training or a coaching network? What’s the biggest obstacle organizations face when trying to invest in their manager development? How do they show that their investment will return to the company in the short or long term? What strategies do you help your clients employ?
Gent: The key to all this is that “return on investment” is a financial term. In many cases, learning and development has not really been in that space, or has shied away from having business language attached to it, for a very long time. We are often very learner-focused—we measure learning hours, completion rates, acceptance rates, bounce rates, and compliance. ROI is very alien to us in terms of how to measure it, and frankly, it’s very difficult to get to the level where you’re measuring actual financial output.
Dave Ulrich, a great author and prolific writer in this space, talks about “return on intangibles”—getting returns on things that are not really tangible. Finance is not everything in a company; sometimes, there are indirect causes.
We advise companies that if they want to measure something, they need to think long-term. Behavior change does not happen overnight, nor does any financial outcome arrive within minutes, months, or even sometimes years. You have to have the framework in place for the data collection you want to do and the questions you want to answer over time. When you put that in place, you start collecting the right data and can answer the right questions. My first advice is to start from scratch if you can—think about the framework you want to build and think about the long term, not just the short term.
Gemma: There are so many obstacles. It’s not easy getting the data itself, but then you’ve got all the confounding influences on performance change that you’ve got to try and separate. There’s a lot of complexity around how you design evaluations to separate these factors. It needs to be designed from the start; otherwise, you might miss the chance to create an evaluation process that allows you to unpick that complexity.
As Gent mentioned, it takes time. You have to be comfortable with the lag. A lot of businesses just want immediate results, and that’s not reality. You have to be comfortable with waiting and spending the time designing data collection points over time, and seeing what happens.
The Value of Training and Data Collection Points
Nolan: I’m thinking of two things. One is that quote: “What if we train our people and they leave?” and the response: “What if we don’t and they stay?” If you don’t believe training will make a difference, then keep your people where they are. They’ll just be there for three years, leave, and then you’ll bring in new people. That’s the alternative.
Sometimes people miss that it’s hard to sell a belief to a CFO—investing a million dollars in a program based on faith. Our CEO has often pointed out how marketing tools like HubSpot or Salesforce make claims like “After implementing Salesforce, sales rep productivity climbed 5 million percent.” Is the reality that they bought a CRM tool that salespeople hate to use? Is this tool that you have to beg salespeople to use the reason for the increase, or is it probably everything else you’ve done to improve sales?
The answer is that it’s probably a little bit of both, but Salesforce is willing to claim the victory. Sometimes learning needs to have the confidence to claim a victory, even if they’re not fully responsible, because everybody else is claiming their victories.
One thing we always say is “begin with the end in mind.” We use the Brandon Hall awards as a framework with our clients at the beginning of a project. If we want to win an award, we have to demonstrate business impact. If we don’t identify what to measure now, it will be really hard to get that data afterward.
What are the data points that you see? Are there some cross-cutting metrics that you want to make sure you’re tracking from the beginning of a program?
Gent: I wouldn’t say there are specific metrics, just because that clouds the actual purpose. It’s more about categories of metrics that you should think about. The feasibility of measurement will be determined by the size of your program. If your program costs £5,000-10,000, measurement might not be viable because it will cost you more to measure than the program itself. But if it’s £100,000, £500,000, or a million pounds, then measurement is critical.
We think about three to four categories of metrics:
- Business level metrics: KPIs related to what the business is already measuring—things like retention rates, onboarding speed, growth, revenue, and invoicing. These are what a CFO or CEO would be looking at.
- Employee performance or behavioral change: Things like speed to competence, skills application, skills transfer, and knowledge transfer across the organization. Knowledge flow across the organization is incredibly important.
- Experience metrics: These are more learner-focused and learning team-focused. They can include perceptions, the relationship between facilitator and learner, and behavioral aspects like course attendance or completion rates. All of this paints a picture of the experience someone has going through what you’ve designed. You don’t want to make learning harder than it already is—it’s an uncomfortable thing that requires self-motivation. You want the experience to be smooth.
Nolan: I don’t think learning is hard. What’s hard is trying to teach someone. Once somebody has truly decided “I want to learn,” that’s the easy part. It’s trying to convince somebody that they need to learn that’s difficult. It’s like Mark Twain said, “I always want to learn, but I never want to be taught.”
Gent: It’s a muscle though. I remember seeing a cool video from a lecturer who described the states between not knowing and knowing, drawing squiggles in between. She said the longer you can stay in this in-between phase, the more you’ll develop as a human because you’re comfortable with this deep sense of uncertainty. When you don’t know, you’re trying and failing until you get to the knowing part—it’s really hard in that middle bit.
Learning Models and Environmental Factors
Gemma: We use Will Thalheimer’s LTEM model of evaluation. He’s expanded the Kirkpatrick four-part model into eight tiers, chunking it up similarly to what Gent described.
At the bottom, you’ve got the basics: can you get people to attend, and will they engage? Then there’s a bigger chunk that looks at whether people can make the right decisions, get the knowledge, and apply it in real-world tasks. He talks about the difference between being able to do something in the learning environment immediately versus remembering how to do it days later, which is very different. You need to be able to remember to take it into your workplace. Then there’s transfer and the effects of transfer, which is the impact.
He also talks a lot about making sure the environment is right. Does the learner have the support to transfer what they’ve learned into the workplace? Do they have the motivation and all the resources they need? It’s not just “here’s the learning, off you go.” A whole support system is needed, and you’ve got to be comfortable with not knowing—that’s part of it, having permission to be uncomfortable.
Nolan: I was fortunate to do a webinar with an expert on DE&I who had worked at Google. I said I didn’t understand the ROI of it, and she explained that if people don’t feel comfortable for whatever reason—whether perceived or real—they’re not going to be their best selves on the job. For her at Google, it was about getting everybody to show up and removing whatever barriers they have to allow them to speak up in meetings, give innovative ideas, be open to coaching, and admit when they’re wrong.
These days, I feel like the bar is even lower. “Do I have time in my day to invest in myself?” That seems like one of the biggest barriers now. Who’s comfortable telling their boss, “I know you wanted this report, but I have a two-hour training session on becoming a better communicator”?
Gemma: Exactly, and psychological safety is crucial—having opportunities to practice what you’ve learned and knowing that you’ll get good feedback from peers or that it won’t matter if you make a mistake.
Leveraging Technology for Coaching: The AI Conversations Tool
Nolan: I’m keenly interested in leveraging technology for virtual coaching—AI coaching, on-the-spot coaching, coaching in the flow of work. I tend to learn best when I have a specific problem to solve. Having a coach that doesn’t need to be on camera but is available exactly when I need it, can give me answers, and lets me explore works well for me. I believe Mindtools has a product like this—could you tell me more about it and the feedback you’re getting?
Gent: Yes, we have an AI conversations tool and a coach as well. There are two tools with different purposes, but the general idea is the same: I want to talk about something specific, like having a difficult conversation or brainstorming ideas about better communication, and I don’t want to go through all the available resources or a longer course. I just want an answer that’s hyper-personalized to my situation.
The major difference between our tool and something like ChatGPT is the training data. Our training data consists of all our resources and expertise brought together in one place, so whenever you get an answer, it’s based on vetted, rigorous, evidence-based approaches.
Nolan: Are people adopting these tools? What trends are you seeing?
Gent: I think it’s still early stages for all of these products. Just to give you an idea, Google handles more than a trillion searches every year; ChatGPT interactions are less than 1% of that in comparison. We’re still very early in how people utilize AI. We might be at the frontier exploring it, but hundreds of millions of people have probably never tried it because they don’t need to, don’t care, or don’t trust it.
Is it useful? Absolutely. I think it democratizes certain areas that were never available to many people—a coach costs a lot of money, so being able to have that bespoke conversation to some degree is incredibly useful. There are unintended consequences, but for us at the moment, it’s very valuable.
AI in Learning: Benefits and Challenges
Nolan: What are some of the unintended consequences?
Gent: One is the transfer of cognitive load to the AI—you offload thinking to a machine, which can have long-term consequences for skills acquisition and application. If I’m used to writing code and have to work through it, asking AI to write the code for me means I lose that muscle memory of how to structure code and become dependent on AI, especially if I’m an early learner.
The key difference in how AI helps people is that if you’re an expert in something and you use AI, it makes you better at it—you become more efficient and productive. If you don’t know what you’re doing and use AI, it makes you worse because you don’t have the expertise to judge the outcome or understand whether it’s heading in the right direction.
Nolan: I equate it to being a graphic artist. If you had to learn why typeface, kerning, and padding are important and actually use Photoshop to do it, you would have a certain level of expectation. But if you just had ChatGPT do all of it, you would get there faster. Someone off the street who knew nothing would get results much quicker, but they’ve essentially peaked because they’re not learning anything new. Their skill is just the ability to leverage AI, with no knowledge behind that skill.
That’s the challenge the current generation is going to face—they’ll be more productive than the generation before, but they won’t have experienced the pain of having to do things the hard way.
Gent: It’s not necessarily the hard way—it’s just losing the process by which you get the outcome. There was a picture that won a nature photography category that was AI-generated. The author could do that because he could describe to the AI in technical detail what lens to use, what camera angle to use, and other specific aspects. ChatGPT may understand that because it’s trained on photography data, but I wouldn’t be able to give those specific technical instructions.
Nolan: It’s like when I was in school and asked why I needed to learn math when I had a calculator. They still teach you what one plus one is because it’s important for foundational knowledge.
Gemma: And to be able to critically assess whether they’ve got it right. You can’t work out what’s gone wrong if you don’t have that foundation.
Using AI for Measurement and Analytics
Nolan: The ability to critically analyze is a skill that’s getting worse, at least in the United States. Assessments used to look only at literacy rates—can you read? Most people in the US can read, but most can only comprehend what they’re reading at a ninth-grade level. The percentage who can understand the intent behind an article is even smaller. The ability to assess why an article was written in the first place is already in short supply, and I’m fearful of what AI tools will do to exacerbate that.
Gent: That’s why measurement is important. If we were measuring the intended and unintended consequences, we could address these issues.
Nolan: Are there other areas where AI or other technology is making our lives easier in terms of measuring manager training and leadership programs?
Gemma: We’ve produced adaptive learning modules that give us incredibly detailed data, different from what we get from an average SCORM package. With AI, you can get verbal and written questions for a different type of analysis.
Gent: Technology definitely speeds everything up if you know what you’re doing. Qualitative data analysis used to be incredibly time-consuming—each of 20 interviews would take two hours to transcribe and four to analyze—and AI helps tremendously there. It can’t replace the rigor if you want to be thorough, but for light-touch analysis, AI definitely speeds things up by providing transcripts and initial analysis.
I would still encourage people to read the transcripts and understand what’s going on because AI picks specific concepts based on statistical probability, not by reasoning. There can be underlying messages in text that AI won’t pick up.
I would be cautious about using AI for quantitative data analysis unless you understand some foundations of statistics. There are many assumptions that need to be met for statistical analysis. If you want to do a regression, you need to pick the right type based on your data and dependent variables. Otherwise, you can get very wrong answers that lead to poor decisions.
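Gent’s caution about regression assumptions can be made concrete with a small sketch. The data below are hypothetical (hours of training vs. a pass/fail outcome, not from the episode): fitting ordinary least squares to a binary dependent variable can produce predicted “probabilities” outside the 0–1 range, exactly the kind of wrong answer that choosing the right model (here, logistic regression) would avoid.

```python
# Illustrative sketch with made-up data: why the regression type must match
# the dependent variable. OLS on a binary outcome can extrapolate past 1.0.

hours = [1, 2, 3, 4, 5, 6, 7, 8]    # hours of training (predictor)
passed = [0, 0, 0, 1, 0, 1, 1, 1]   # binary outcome (1 = passed assessment)

n = len(hours)
mean_x = sum(hours) / n
mean_y = sum(passed) / n

# Ordinary least squares slope and intercept, computed by hand.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, passed)) / \
        sum((x - mean_x) ** 2 for x in hours)
intercept = mean_y - slope * mean_x

# Predicting at 10 hours of training gives a "probability" above 1.0 --
# a logistic model, which is bounded to (0, 1), would never do this.
pred_at_10 = intercept + slope * 10
print(f"OLS 'probability' of passing at 10 hours: {pred_at_10:.2f}")  # > 1.0
```

The point is not the arithmetic but the decision it illustrates: if the assumptions behind the model don’t hold for your data, the numbers come out confidently wrong.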
Nolan: As someone who had to calculate regression analysis by hand in university, I appreciate that. What I’ve found is that AI will give you an answer regardless—it’s like a golden retriever puppy that just wants to make you happy and will do whatever it takes. If you realize that’s where AI is, you can look at it with the right perspective.
I don’t think we’re quite there yet for a lot of measurement and analytics—it still takes manual input. But AI can help identify patterns, like scanning through thousands of employee interactions in an AI coaching tool to determine overall sentiment or top questions. It can scan mountains of text in seconds and pinpoint trends, though you still need to verify the findings.
Tools like Microsoft Graph that sit on top of everything you do and try to tell you what kind of manager you are aren’t fully there yet. It’s a dangerous game if you think they are—these tools are still like puppy dogs.
Gent: Keep in mind that AI models are based on probabilities and trained on codifiable data. There’s a lot of uncodifiable data in organizations that you cannot feed an algorithm. As humans, we’re definitely fallible and sometimes silly, but AI has its own limitations.
For example, I use AI to do quick analyses of spreadsheets. Recently I needed a specific kind of average—the mode rather than the mean. AI will default to one average, usually the mean, but that may not be the one you need. You have to know what you’re looking for; otherwise, it makes assumptions, and the outcome can be incredibly different.
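Gent’s mean-versus-mode example is easy to reproduce. This minimal sketch uses Python’s standard `statistics` module on hypothetical 1–5 course ratings; the three “averages” of the same data give three different answers, so asking for “the average” without specifying which one invites the wrong result.

```python
# A minimal sketch of why "the average" is ambiguous: the mean, median, and
# mode of the same (hypothetical) course-rating data tell different stories.
from statistics import mean, median, mode

# Hypothetical 1-5 course ratings, skewed by a cluster of low scores.
ratings = [5, 5, 5, 5, 4, 4, 3, 1, 1, 1]

print(mean(ratings))    # arithmetic mean, pulled down by the low ratings: 3.4
print(median(ratings))  # middle value: 4
print(mode(ratings))    # most common value, what Gent actually needed: 5
```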
Closing Thoughts
Nolan: Well, this is a great time to stop the podcast because if I have to dig into my mind to remember the difference between the mean, median, and mode, we’ve now reached the tipping point where I am no longer able to talk!
Thank you both for joining us. It’s been an absolute pleasure talking about ROI, even as we ended up veering into almost Terminator-like “end of days with AI” territory. I appreciate working with you on these topics.
Gemma: Thanks for having us. It’s been great to chat.
Nolan: We’ll talk to you again soon. Thanks, everyone.