Clark Quinn, Executive Director, Quinnovation

Clark Quinn is the Executive Director of Quinnovation and a recognized thought leader in learning science, instructional technology, and organizational performance. With decades of experience in designing evidence-informed learning solutions, Clark has authored multiple influential books on workplace learning and innovation. He is widely known for helping organizations bridge the gap between cognitive science and practical training strategies, turning research into results that stick. As a consultant, speaker, and writer, Clark advises global companies on how to align learning with the way people really think, work, and perform. He currently serves as Co-Director of the Learning Development Accelerator, a society for evidence-based L&D, and as Science Advisor to the learning-extension platform Elevator 9.

Nolan Hout, Senior Vice President, Growth, Infopro Learning

Nolan Hout is the growth leader and host of this podcast. He has over a decade of experience in the Learning & Development (L&D) industry, helping global organizations unlock the potential of their workforce. Nolan is results-driven, investing most of his time in finding ways to identify and improve the performance of learning programs through the lens of return on investment. He is passionate about networking with people in the learning and training community. He is also an avid outdoorsman and fly fisherman, spending most of his free time on rivers across the Pacific Northwest. 

How do you turn learning science into meaningful workplace impact? In this insightful episode, Clark and Nolan explore evidence-informed learning design. Together, they discuss how organizations can use research-backed methods—ranging from job aids to AI-powered tools—to build skills that last, avoid common learning pitfalls, and create experiences that truly make a difference.

Listen to the episode to find out:

  • How Clark’s career journey shaped his evidence-informed approach to L&D.
  • Why many traditional training methods fail to deliver lasting results.
  • Practical ways to apply cognitive science principles in workplace learning.
  • The role of job aids, performance support, and real-world practice in skill building.
  • How AI can amplify learning design without replacing human judgment.
  • Why focusing on context and application is key to retention and transfer.
  • Strategies to overcome “learning myths” that persist in organizations.
  • Clark’s vision for the future of learning that blends science, technology, and performance.

AI is a powerful ally in learning design, but it only adds value when guided by human expertise and an evidence-based approach.

Clark Quinn,

Executive Director, Quinnovation

Introduction

Nolan: Hello, everyone, and welcome to the Learning and Development podcast, sponsored by Infopro Learning. As always, I’m your host, Nolan Hout. Joining me today is Clark Quinn, a well-known thought leader in the space with over 30 years of experience. He’s the executive director of Quinnovation, where they help organizations work smarter, aligning with how we think, work, and learn.

Today, we will discuss a topic that may seem somewhat advanced: evidence-informed learning design for achieving meaningful skill acquisition. I promise we’ll break it down; it’s going to be a lot simpler than it sounds. Considering we have someone with over 30 years of experience in learning design, I don’t think there’s anyone better to navigate us through this topic than Clark.

Clark, welcome to the podcast.

Clark: Thank you very much, Nolan. It’s a pleasure to be here with you today. Sad to say, it’s actually about 40 years now.

Nolan: Sad. I have to admit, Clark, sometimes, depending on the guest, I’ll cap it at something because I don’t want them to feel a certain way.

Clark: No worries, mate. I was a child prodigy.

Clark’s Four Decades in L&D

Nolan: Great. Speaking of child prodigies, Clark, before we delve into the meat of the topic, I’d love to start by learning a little bit: how did you get into this field 40 years ago? Back then, it wasn’t nearly as big a field. What drew you to this? Where did you get your start?

Clark: I won’t go into the long, sordid, twisted tale, but briefly, I saw the connection. I provided computer support for an office that offered tutoring, and I also tutored on campus. I got that job and saw the connection between computers and learning. I said, ‘This sounds like something interesting.’

Our university at that time didn’t have a program in it, but they had a program where you could design your own. I designed my own major in computer science and learning, and essentially, it has been my career ever since. It’s taken strange twists and turns. My first job out of college was designing and programming educational computer games. That has remained a recurring theme: how do we create experiences that are both effective educationally and engaging?

Realized we didn’t know enough, went back to get a PhD in applied cognitive science, did the academic route for a while, got involved in corporate learning, and have ended up here. It was the recognition of the connection between computers supporting learning and the background in cognitive science that explains how we think, work, and learn. The sad thing is, we don’t do a good enough job of aligning them particularly well. That’s been my career.

Quinnovation’s Mission: Science in Learning Design

Nolan: You mentioned the twists and turns. What are you doing today at Quinnovation? Where do you find yourself spending the majority of your time?

Clark: I continue to unpack the details of learning science because, several years ago, Cammy Bean wrote a book titled “The Accidental Instructional Designer.” That’s the sad state of affairs; we end up there. You wouldn’t want an accidental plumber or an accidental surgeon. Yet we are trying to systematically change arguably the most complex thing in the known universe—the human brain—in reliable ways. We’re sort of like a monkey with a hammer, just banging away without really knowing what we’re doing in way too many instances.

Nolan: So, you’re helping them bring the science back to a field that, as you said, has had the science taken out of it.

Clark: We are, in various ways. Numerous pressures prevent us from doing what we could and should be doing, and we need to fight against them. I help people create and improve the output of their products, which typically involves examining their processes and identifying ways to enhance them. It’s really hard to go in and totally revise what you’re doing; that’s rarely what happens. Instead, you find those initial small tweaks you can make that will really improve the learning outcomes you’re achieving. You track that and evaluate it, and with that evidence, you can get the support you need to make bigger changes and eventually start doing what the evidence shows we should be doing.

Why Focus on Skill Acquisition

Nolan: Wonderful. Discussing what we should be doing, I mentioned a very lofty title: Evidence-Informed Learning Design for Meaningful Skill Acquisition. Let’s start breaking that down. The first thing I want to discuss is the end goal, the reason we do it: skill acquisition. There’s been a lot of talk about job versus skill. Are we training to do the job or to develop a specific skill? Why is it important that we focus on skill acquisition? Why does that become the benchmark for what we want to monitor and measure?

Clark: There are a couple of reasons. It was somewhat dismaying to hear you describe evidence-based as a heavy title. It shouldn’t be; it should be based on established practices, as most other professions are. What we often see is a lot of information being dumped. To address your question, roles are changing at an increasingly rapid pace. That’s not just because of people introducing AI.

What matters are the skills. Different organizations will break down the tasks they perform. In some organizations, someone will be responsible for analysis, hand it off to someone else for design, and then hand it off to someone else for development. In other organizations, one person will oversee a project from start to finish, and another person will oversee another project from start to finish.

So, what matters are the skills of doing analysis, the skills of doing design, and the skills of doing development, as opposed to what your job is. We’re now recognizing that having clear definitions of those skills enables us to develop them effectively. If we have vague definitions, we tend to overload people with information and assume they’ll figure out what that means for them in practice. However, we have reliable evidence that this doesn’t happen. So we really need to focus on specific skills: how do we develop them and how do we evaluate them so that we can be sure that we’re creating the capabilities our organizations need?

Strategizing & Defining Critical Skills

Nolan: How do you do that? I recall that five years ago, I went through extensive exercises to map my skills. Then AI came in and said, ‘We can get you close. We can get a reasonably accurate measure of the top 10 skills a marketer needs. We hope you’ll come in and correct the model, so we can identify where we’ve made a mistake.’ I feel like it lent itself more to the idea that ‘close enough’ is better than nothing, better than having zero definition.

Where do you really fit on this? Perhaps achieving 100% accuracy will never be possible, or even 95%. We’re better off being 80% accurate across a 3,000-person organization versus being 100% accurate and then changing it every two years, because you mentioned that skill decay is at its all-time high. Is it the highest it’s ever been now? I’m sure it will be higher tomorrow. How do you know when to stop? When is ‘good enough’ good enough?

Clark: You don’t know when to stop; you’re doing this continually. What you need to do is determine which skills are essential for where you’re going. It’s a strategic thing. Where are we going as an organization? What are the directions? What are the skills we absolutely have to master? Certain things are background, and others are critical to the organization’s success. I’m listening to people like Koreen Pagano, who’s coming out with a skills book, and Kevin Wheeler, a talent advisor who used to work at corporate universities. They keep making the case that we need to go beyond simply taking orders for a course, identify what the organization needs in terms of critical skills, and focus on those.

So, you’re selecting a subset of the approximately 3,000 roles. Some of those skills may be distributed across multiple roles, but you ask, ‘What do we have to understand? What do we have to be able to do? Do we have to understand this material that’s core to our product? Do we need to understand this approach that’s core to our services?’ What will be critical, and where are things headed? What are our directions as an organization, what skills are necessary, and where should we focus our efforts? So, you’re not trying to boil the ocean; you’re being very focused. You do a bit more analysis, and AI can help, as you’re pointing out. You’re never going to get it 100%, but you’ll find those critical skills and develop them properly, and that’s what will make the biggest difference to your organization.

Designing for Performance: Beyond Knowledge Dumps

Nolan: So we’ve zeroed in on skills; we said that’s what we want to work on versus the job, because the job changes so quickly. If we can actually get the skills right, that should echo across many roles. Being a good communicator benefits not only salespeople, customer service representatives, and leaders, but individuals in all roles. So, then, let’s move on to how we design programs to maximize that impact. We talked about evidence-informed learning design. How do we start focusing on ‘this is the skill we’re trying to move,’ and how do we design using evidence to make an impact on enhancing those skills?

Clark: The first thing we have to do is find out what the barrier is to that performance. Performance consulting, which is an adjacent field to instructional design, asks: What are the barriers to people doing this? Is it a lack of knowledge and skill? Or is it that they could do it if they had the resources? Or could they do it if they thought that’s what they had to do, but they believe they should be doing this other thing? There are several reasons why people don’t do what they’re supposed to do, and only some of them are related to skills.

Job aids can solve some of them better than courses, because you’re not trying to put information in the head. You say, ‘It can be in the world. It’s something they don’t do very often. They must do it right when they do it, but they don’t do it often enough. The training will be gone by the time they need to do it. Let’s give them a job aid to guide them through it, a little tool support.’ When you find the right thing, that’s when you move to the evidence about how we then learn skills, as opposed to how we develop job aids or how we change incentives in the organization.

Core Principles of Skill Acquisition: Practice, Models, Examples

Clark: But when you need to figure out, ‘Okay, it’s very clear that this is a skill people don’t know and they need to know,’ then we need to design a skill acquisition sequence. That’s when you start examining the evidence specifically related to learning. The first thing we need is a clear definition of what success looks like. What is a good objective? For all that it’s very behaviorist, I like Mager-style objectives because they get down into criteria that say ‘doing this in this context to this level of accuracy.’

Then you can say, ‘Okay, I know what the result is. I can test and see if they can achieve this level of accuracy now; if so, they’re good to go.’ So, you state a good objective about what it is, and you know who the audience is and what their existing skill levels are. Then you can start on the most critical thing to do, which is retrieval practice.

We need to have people practicing the skills they’ll need to apply in the real world after the learning experience. If you need to communicate, we have to give you practice communicating before it counts, because if you get it wrong when it matters… There are multiple dimensions: how important is it if they get it wrong?

How frequently do they perform it in the real world? How much are they coming to the game with already? All of these factors contribute to the design of your learning experience. However, when you start instructing them on what they need to do, you must ensure they practice the correct way, and then provide them with the minimal necessary information. It’s not just content.

Too often in a learning project, we talk about content. But what role in skill development does that content play? You begin to understand that we need mental models that explain how the world works, so we can make informed decisions about what to do based on what we know and the consequences of our actions. We can predict, ‘If I do this, this will happen; if I do that, that’ll happen. This is better than that. I’m doing this.’

But you can’t know that if you don’t have a model of how the world works. Then you need examples to see that model in play. We have reliable evidence that giving people a few worked examples before they actually take a turn themselves helps them learn better and faster. Then we give them retrieval practice.

But you have to understand all this, and it has to be spaced out over time. Too much of what we’re doing is based on an event. ‘It’s hard to get people together face-to-face; it’s expensive, so you minimize that.’ But with just an event, or e-learning, you sit down, complete your half-hour of e-learning, and then you go away. Most of that will be gone in a day or two. What do you do?

Nolan: Everything we’re doing, we’re trying to shrink that investment versus payoff and get that gratification as quickly as we can. But you can’t. If you’re really trying to acquire a skill, it takes time. If it were that easy, everybody would have every skill; you could tell somebody, ‘You’ve got 30 days, learn every skill.’ If somebody said, ‘Oh, of course, they can learn this skill in a three-day session,’ I’d say, ‘Great, then let’s invest the next 60 days to teach them the top 20 skills, and we’ll be good to go and never have to train them again.’

The Human Capacity for Skill Growth

Nolan: I was talking to this gentleman at Amazon, and he said, ‘I don’t really know what the number is. I try to say you can’t be working on any more than two or three skills at one time because it is such a deep thing.’ What are your thoughts? Because now everything is about this skills conversation, and everybody’s creating these skills gaps, and they’re in a race to close them as quickly as they can. How much can we do at once?

If I examine my skills gap, are you a proponent of filling each bucket along the way, mastering one skill at a time, and then moving on to the next? What are your thoughts on how much the human brain can consume and actually make a difference?

Clark: My answer is, it depends, going back to those factors I mentioned earlier. How important is it? The more important it is, the more you’ll want to invest. How frequently they perform it afterwards really plays a big role. If it only happens once a week, you’ll need a lot more practice to ensure you’re doing it correctly than if it happens several times a day. If it happens several times a day, you may need different types of support.

Atul Gawande wrote his book The Checklist Manifesto about people who perform tasks multiple times a day and still miss a step in a given instance, despite having done it correctly earlier that day. That’s why he created his checklists. So, it’s a mix of tools and training.

To address your question, one of the things we need to do is reach a minimum level in some aspect of the skill on the first day. But then you literally need sleep. Then you need to reactivate it a couple of days later. The best time to reactivate it would be when you’re just about to forget it, but that’s really hard to predict, particularly at scale. So, you do some good things. By the way, the 2-2-2 isn’t quite right (two days, two weeks, two months). It’s more complex than that. I’m currently working with a startup that’s trying to figure out how to provide support for learning events to extend the learning experience afterwards and determine what works best in this context. You need that half-day, but you can only learn a couple of things.

It helps to interleave. Learning a couple of things is good. Interleaving means I study a bit of this, then I study a bit of that, and then I revisit this a couple of days later, and I revisit that. You’re mixing things up. This is better than doing the same thing all at once, because you have a little less predictability about what you’re going to face, which makes it a slightly more challenging retrieval task. This actually leads to better learning, quicker acquisition, and deeper understanding; it strengthens the links more effectively.

Iterative Design, Prototyping & Reflection

Clark: So, you need some spacing and interleaving. What also matters is the level of challenge, or the desirable difficulty. Learners think that if it’s easy, it’s good, but that turns out not to be what the evidence tells us. They need to struggle a bit, what Seymour Papert called ‘hard fun.’ When you play a game, you don’t always get it right, or it’s not quite fun, but you can’t fail too much, or it’s frustrating.

There’s this zone, and that zone changes over time. Mihaly Csikszentmihalyi discussed the zone of flow, which is what games tap into, but so does learning, as Vygotsky also explored the zone of proximal development. There’s stuff that’s too hard, too easy, and in between is where learning happens. So, the level of challenge, the amount of spacing, and the amount of interleaving all combine with the inherent complexity of the task (is this something that has only a few factors or multiple factors?). All of this comes together.

That’s why you make your first best guess, and then you test and tune. You have to build some testing and tuning time into your schedule and find a way to make that work. That’s really challenging for many people. But you have to do it if you actually care about improving people’s ability to do, instead of just ticking a box and saying, ‘Okay, I gave them the information, it’s up to them,’ or, as you were hinting at, ‘Oh, we gave them three days and it’s golden.’

Navigating Constraints in Learning and Development

Nolan: I want to expand on one of those things. I think one of the significant struggles that many L&D practitioners face now is trying to accomplish tasks from a seemingly endless list, while also working within a very finite budget. You said something important there, because with somebody like yourself, who has such a rich history and so much knowledge in the space, it might be easy for somebody to say, ‘Gosh, if I can’t do all these things, then I shouldn’t even start.’

But what I heard you say was that if you don’t have the means or the time, or if you feel that more research won’t yield a significantly better answer, go with your gut. If you don’t have a ton of evidence, go with whatever you have, implement the course, but then go back and look for, ‘Was I right?’

And if I wasn’t right, then tweak it. Use your first launch essentially as a means to gather more evidence. So, you don’t necessarily need to be perfect the first time, every time. Get something out there. Lean prototype, get it out. But if you’re not going back and asking, ‘What am I learning from this?’, both to edit this one and to figure out what I need to take into my next one, then you’re leaving a lot of meat on the bone. Are you a proponent of that?

Clark: Absolutely. You should be a reflective practitioner. I’m a big fan of Megan Torrance’s LLAMA and Michael Allen’s SAM, as they are processes based on agile principles and iterative in nature. Michael says that in most cases, three iterations are usually sufficient. What I’m advocating is having enough knowledge of the background so that your first prototype is quite good. If it’s really bad, you’ll test it, make some adjustments, test it again, and make further adjustments, and that takes a lot longer.

The more you know, the better your first iteration is. Michael Allen’s Allen Interactions has the advantage of working in teams, which makes life a lot easier than trying to be a solo practitioner. However, even a solo practitioner can benefit from using mini scenarios instead of writing knowledge test questions.

Mini scenarios are retrieval practice. Knowledge test questions don’t have much effect; research shows that answering low-level questions mostly develops your ability to answer low-level questions. What your job requires is high-level abilities. If you practice with high-level questions, you don’t need the low-level questions; just the high-level questions will get you the high-level abilities.

However, if you’re not familiar with this, and it’s convenient and easy to take the knowledge from that PDF and write some random questions about it, you’ll do that. So it helps to know this upfront. Even if you know as much as possible, you should still allow time to test, tune, and refine.

Prototype the final exam, the final retrieval practice, first; refine that, and use it as the basis to work backwards. Determine your final assessment criteria, then work backwards to ensure learners can achieve them. That final practice is what you prototype first, and once you’ve got it right, you align everything else so learners succeed at it. That is a shorthand way to achieve the best outcome with the least use of resources.

The Rise of AI in Learning Design

Nolan: Let’s build off that idea of getting something out there, but getting it as close to accurate as possible. That’s the claim behind using AI to build many of these learning designs, right? In the shortest amount of time possible, I can create and share content with the world, and people can consume it in various formats, such as videos, infographics, or other digital media. I’ve been thinking about this, and I’ve spoken with many people about it, as I use AI extensively in my field. I compared how I use it versus a college grad who joined us. I told them, ‘Listen, I’m experimenting.

I know you know nothing about our field. I only want you to use AI. I want this to be your starting point.’ I realized that what I’m using the tool for and what this person is using the tool for draws on the same knowledge base, essentially everything out there. Yet my output is 90% accurate and their output was 50% accurate, because they didn’t know what was right. With AI, as more people generate and share content, there’s a risk of spreading information we never intended, because the people producing it don’t know the evidence. They don’t know why it’s doing that thing, why it’s designed and programmed that way.

Understanding AI’s Evolution & Impact

Nolan: What are your thoughts? You’ve witnessed numerous evolutions over the past 40 years of your learning journey, including m-learning, microlearning, CBT, and various other approaches. How do you envision this idea of AI impacting that field, especially given that everything we discussed was evidence-based and science-backed?

Clark: I’ll take a brief moment to note that I’ve been involved in AI relatively deeply over those decades. I’ve remained an AI enthusiast rather than a practitioner, but I follow its implications and have a conceptual understanding of how it works; I’ve played with it and programmed it. I’ve also been around several seminal moments.

There was symbolic AI. I was a graduate student in Don Norman’s lab, and the other lab leader was Dave Rumelhart. Rumelhart, McClelland, and their graduate students were the ones who recognized that computational models of cognition and AI actually struggled to capture what humans really did. In cognitive science, there was a sort of post-cognitive, situated vision. The computational stuff wasn’t working.

They were building all these models and trying to extract this unusual human behavior from these systems. Douglas Hofstadter was trying out slipnets, slippery versions of connections, and Lotfi Zadeh had his fuzzy logic. Rumelhart went back and fundamentally revisited what Minsky and Papert had done with perceptrons, realizing that what those had lacked was a hidden layer. He and his colleagues produced the PDP books, Parallel Distributed Processing, which revolutionized the field of machine learning. They created what we now use as neural nets, on which generative AI is built.

The Limitations and Risks of Generative AI

Clark: I must clarify: when you mention AI, most people today are referring to generative AI (ChatGPT, Claude, etc.). I’ve witnessed the transitions; I’ve worked in a lab, taught at a computer science school, interacted with the AI team, built an adaptive learning system, and attended conferences on AI and education.

I have some understanding of this. Regarding generative AI, I’m somewhat negative about the hype. I have no problem with the underlying concept, except that it’s just doing a much better job of predicting what to say next. When it generates content, it does so by drawing from the vastness of the internet.

A bit of a worry about stolen IP, but we’ll leave that aside for now. It’s creating stuff that’s the average of the internet. So, it’s average content, not excellent content. It’s average content, and it hallucinates. The way it produces things means it will produce things that sound right, but may not be correct.

They will talk about learning styles happily, for instance, until you point out that’s not a valid thing, and they’ll say, ‘Oh, excuse me.’ They’re very nice and say, ‘Oh, sorry, I’ll retry.’ I tell people, view your agent like a golden retriever: it wants to please you more than anything in the world.

So, if it doesn’t have an answer, it will give you one because it thinks that’s what you want. Absolutely. You have to be the expert on learning design, and you need to have experts on the content to validate what it says, which slows down the process. I think it can be a valuable partner in generating ideas for scenarios, as well as situations in which these scenarios can occur. The evidence is, it doesn’t build models of the world; it doesn’t understand context. It’s just highly predictive of language, video, or images.

So, you have to figure out yourself what the core decisions are that matter for the learning, and then it can help you come up with ideas. You can vet some, and it’ll come up with ones you haven’t thought of that are good, but it’ll also come up with ones you haven’t thought of that are not good, and you have to sort through them. So it’s a great partner for thinking; it’s just not to be trusted to do anything on its own.

This is why I worry about agentic AI. We have evidence that people have been able to corrupt these agents and get them to do bad things, because they want to please. They’ll go off and do stuff they shouldn’t do, even things they’ve been explicitly told not to do. Asimov’s Three Laws of Robotics come to mind. I worry about the IP. I worry about the environmental costs. I worry about the business models.

Right now, the costs are supported by venture capital. When that goes away, you have to account for the environmental energy costs. There is now evidence that smaller, purpose-built AIs are less environmentally costly and more effective at meeting needs. But that doesn’t fuel the business models of the big AI engines.

The Environmental and Social Impacts of AI

Nolan: My wife is mostly scared of AI. Environment is a big one for her. Anytime I use Claude, she’s like, ‘Well, there goes another gallon of water and this, that, and the other.’ It’s true, and absolutely a valid concern. We’re seeing the impact on the environment. That’s the challenge with every technological innovation at mass scale. Even the printing press had a huge impact: how many more trees did we cut down to produce the paper, how much more ink did we have to create and leach into our water system?

Mass cropping has a significantly harsher impact on the environment than growing corn in one’s backyard. Then you have the social side. I don’t recall if it was in Sapiens or A Brief History of Nearly Everything, but they discuss how every major innovation is often well-intended. Email came out; ‘Oh my gosh, I don’t have to go to the post office, I don’t have to spend 30 cents for a stamp, I don’t have to write this letter, I don’t have to wait five days for the answer. I can send an email right from my machine.’

We all thought this was going to be great. How much time will I save? However, we now receive 300 emails a day. Are we really saving more time? Is it really more efficient? Have we made our lives much better? I also think about that with AI. If I can do the job of 10 with AI, yes, it’s helping me. As you said, it’s a good thinking partner. But now I’m going to be asked to produce ten times as much. The job that used to take eight hours a day now only takes me four hours with the help of AI, and it’s only a matter of time before someone says, ‘Now that you use AI, we’re going to expect eight hours of AI-assisted work.’ There are a lot of these things. The ability to get it wrong is magnified.

Is that where you’re advising people today? That people using AI ought to know its limitations, know what it can do well and what it can’t, so that we’re not expanding and multiplying every mistake?

Clark: I just shared a report this morning with a colleague about how organizations succeeding with AI are using it for very focused purposes. Using it as a general solution is not achieving the outcomes they were hoping for. There’s evidence that when you offload cognitive processing, you’re not engaging in that cognitive processing; therefore, you are not learning. It’s even worse for kids who are supposed to be doing that cognitive processing. When they don’t do the writing themselves, they’re not thinking, they’re not learning. Therefore, you must be very careful.

Everybody’s saying, ‘Oh, I’m going to use AI to support learning.’ I was communicating with someone on LinkedIn yesterday who said, ‘Oh, I’ve got this company, and we take your content and make videos about it.’ Engaging videos. But it’s still a content dump. Where’s the interaction that’s supposed to lead to actually developing abilities? They didn’t have an answer for that. There are big worries. You really shouldn’t be throwing a technology at problems when you don’t fully understand its trade-offs.

The best advice I hear from people like Markus Bernhardt, who are AI strategists, is to know what you’re trying to achieve and do experiments, but do smart experiments. Avoid committing to a vendor for longer than three months, as the market is constantly evolving. The prices you’re currently paying and the products you’re using may not remain stable in the future.

So, there are lots of issues. My take is, use it to complement the things you want to offload. You’ve heard the silly meme, ‘I don’t want AI doing art and making music while I’m doing the laundry. I want to reverse that. I want the AI doing the laundry; I want to be doing art and making music.’ The meme is apt. I think we have made a choice.

We made it decades ago. I was reading Popular Science as a kid, and they were discussing how we would work one day a week in the future because technology would make our lives easier. We didn’t make that choice. We need to consider our choices in this regard going forward. To your point, it becomes, ‘If I give you AI, you can do twice the work, so I now expect twice the work from you,’ instead of, ‘You can work half the time.’

Nolan: Half the time. 

Clark: We have a choice about that.

Nolan: Absolutely. Thank you, Clark, for sharing your wisdom with us. This has been an excellent podcast. Before we wrap up, are there any closing remarks or points we didn’t cover?

Clark: One thing I didn’t mention about this evidence-based stuff: it takes time to acquire, put into practice, and recognize. I’m co-directing the Learning Development Accelerator, a society around evidence-based L&D. If you want to accelerate your understanding in an easy, vetted way, that’s a place to do it. We don’t have corporate sponsors, and we have an advisory board that includes some of the best-known translators of research to practice: Will Thalheimer, Ruth Clark, Patti Shank, and Julie Dirksen.

These are individuals who understand the research findings and can effectively translate them into practical applications. They are people you should be paying attention to, because evidence-based practice, as in medicine, shouldn’t be such a heavy, scary topic; it should just be part of our practice.

Closing Thoughts

Nolan: Where can people find that information, Clark?

Clark: ldaccelerator.com 

Nolan: ldaccelerator.com for those who want to learn more. You can also find Clark on LinkedIn; he’s a great follow, really good stuff. I encourage you to check that out. Thank you so much, Clark. I appreciate you taking the time to spend with us, and I look forward to possibly doing this again soon.

Clark: I appreciate the opportunity, Nolan, and to all your listeners: stay curious.

Nolan: Thank you, Clark.
