Craig Lutz, Senior Director, Enterprise Learning, Gannett | USA TODAY NETWORK
Craig Lutz is Senior Director of Enterprise Learning at Gannett | USA TODAY NETWORK, where he leads innovation in workplace learning with a strong focus on AI adoption and governance. With over 20 years of experience in L&D, spanning roles from instructional designer to LMS administrator to strategist, Craig has developed award-winning programs that have saved millions and transformed onboarding, automation, and digital learning delivery. A Penn State alum, he is known for blending hands-on expertise with a strategic vision, helping organizations adopt new technologies responsibly while keeping people at the center of learning.
Nolan Hout, Senior Vice President, Growth, Infopro Learning
Nolan Hout is the growth leader and host of this podcast. He has over a decade of experience in the Learning & Development (L&D) industry, helping global organizations unlock the potential of their workforce. Nolan is results-driven, investing most of his time in finding ways to identify and improve the performance of learning programs through the lens of return on investment. He is passionate about networking with people in the learning and training community. He is also an avid outdoorsman and fly fisherman, spending most of his free time on rivers across the Pacific Northwest.
Every L&D leader is asking the same question today: How can we harness AI to make learning smarter, faster, and safer? In this insightful episode, Craig and Nolan delve into how organizations can responsibly adopt AI, unlocking greater efficiency and new opportunities in workplace learning.
Listen to the episode to find out:
- How Craig’s career journey evolved from tech to learning leadership.
- How Craig’s early automation initiatives saved Gannett over $300,000 annually.
- The role of AI councils in setting governance, ethics, and security guardrails.
- How to avoid AI “hallucinations” and ensure accuracy with human oversight.
- Practical ways L&D teams can pilot AI tools safely before full adoption.
- Craig’s experience with Synthesia, Microsoft Copilot, and other AI tools.
- How AI streamlines stakeholder collaboration and speeds up content creation.
- Why the future of L&D lies in curiosity, exploration, and responsible AI use.
It’s not just about how I’m using AI or the quality of the output, but about the environment and the impacts of using that tool. The human in the loop is critical.
Senior Director, Enterprise Learning, Gannett | USA TODAY NETWORK
Introduction
Nolan: Hello, everyone, and welcome to the Learning and Development podcast sponsored by Infopro Learning. As always, I’m your host, Nolan Hout. Joining me today is Craig Lutz. Craig and I go way back. We first met when he presented an award-winning program at the Chief Learning Officer event about six or seven years ago. Craig has really done it all in over 20 years in learning and development, from instructional design to LMS admin work, and he’s now a Senior Director at Gannett, the company behind USA Today.
I recently had the opportunity to catch up with Craig and hear about all the work he’s doing with AI, and I knew I had to have him on the podcast, which is exactly where we are today. We’re going to discuss all aspects of AI, including security, ethics, tools, and results; the list goes on, so there’s a lot to cover. Let’s meet our guest, Craig.
Craig, welcome to the podcast.
Craig: Nolan, thank you so much for the invitation, and I’m happy to be here. I’m certainly excited that we were able to reconnect recently around this shared focus. It’s good to talk about this subject. I think it’s on the minds of many people, and they are continuing to learn and upskill at all levels. In many ways, we’re all learning from each other.
Nolan: For my listeners who don’t know Craig, I’m just now putting this together: the award-winning program Craig presented involved building a lot of automation and making the onboarding experience more asynchronous at USA Today and Gannett. If I recall right, you saved over $300,000 a year in training costs with that program. So, this is nothing new for Craig, being on the cutting edge of automation and finding smoother ways to work. I’m excited to get going here.
Craig’s Career Journey
Nolan: Before we delve into the topic of AI, Craig, I always like to start by learning a little more about how you got to where you are today. I mentioned earlier that you’re now the Senior Director at Gannett, but you didn’t start there. Tell us a little bit about your origin story.
Craig: I’d be happy to. I went to Penn State University, so I’m a Nittany Lion alum. Let’s go, PSU! Football season’s just around the corner. At Penn State, I discovered a strong interest in technology. Back in the early 2000s, technology was definitely a path people wanted to be on, and a great deal has transpired since then. Even back then, AI was a thing. I know some people feel like AI has just started to emerge, but it has been around for a long time; generative AI really took off a few years ago with the introduction of ChatGPT and the other tools that became available.
But I started in technology. I worked for Network Solutions, the original domain registrar. They were getting into websites and SEO when that was really starting to take off, and moving into the digital advertising and digital storefront space. While I was there, I began working with the learning and development team, providing onboarding training. A spot opened up, and an opportunity came my way: would I join that team? Really, the rest is history. You mentioned earlier that I’ve done it all in L&D, and I have held most of the roles: instructor, instructional designer, learning technologist, learning team manager, learning strategy and analytics. You name it, I’ve probably worn that L&D hat over the years.
Motivation for Working in L&D
Nolan: Lovely. I was talking to someone this morning about AI, and they mentioned that somebody on their team had said, “I am the CEO of my AI bots; I have bots that are like my employees, doing all these different things.” As we start to delve into AI, that’s always my big caveat: the more you know, the better you are, but if you don’t know, AI can exacerbate some of those issues, which we’ll likely discuss. So, getting into that, Craig, or I guess I should say, a question before that: what do you think drew you to the learning and development field, given your tech background?
Craig: I really like to help other people, and I’ve been helped a lot along the way and have had investments made in my life. I truly saw L&D as a way to help others. When I first started, it was about individuals joining a new organization, not just seeking a job, but looking for a career. Perhaps they were fresh out of college, or maybe they were seeking a career change. I had the opportunity to spend the first three to five weeks with them, investing my knowledge and creating a path forward for them not just to have a job, but to begin a career.
Looking back on those roles, I can reflect on the people I’ve trained, and some are now in senior leadership positions. I know one person who is the CEO of an organization, and knowing that I had a hand in their early career development and, in some way, helped them down that path, I derive great value from that.
Nolan: Well, that’s the best answer to that question I’ve heard, so that’s pretty darn cool.
Navigating AI: Security, Ethics, and Governance
Nolan: So, let’s get to the topic everyone came to hear about: AI. We’re going to cover everything, but a good place to start is with a macro-level view of the security and ethics component. I’ll add a caveat: we work at Infopro Learning, and I end up speaking mostly to people in large, often heavily regulated organizations. Craig, too, works for Gannett, which is obviously a very large organization.
I’m sure there were roadblocks, including security, ethics, and compliance issues. Talk us through how you got started and how you navigated some of those waters, and maybe give us a good starting point.
Craig: I think a good starting point is a moment I remember very clearly. We have moments in life where things happen, and you can recall exactly where you were when they occurred. I remember where I was the day I said to myself, “I really need to dig into AI. Now is the time.” Many people might wait; they might try to see what this really turns out to be. At that very point, I just started to dig in. Being in the field of Learning and Development, I’m a self-starter. I started reading and trying different things, downloading tools, looking for champions on LinkedIn or out on the web who were trying things, and applying them to my world. That’s really the starting point. At that stage, you don’t think a whole lot about the impacts or the governance and security, all those important things that really do need to be part of any AI strategy, whether it’s an organizational strategy or an L&D strategy.
I had the opportunity to be part of our AI council and work with folks as we got moving as an organization. Many of those issues were addressed. The company knew that we needed to institute governance. We wanted to focus on the safe and ethical use of AI and work through topics like company information: where you should and shouldn’t store it, and how you interact with it. With a lot of the AI tools out there, when you put data into them, you need to understand how that data is being used, where it is being stored, and where it is going.
At the very entry level, you download a tool and start working with it, learning as you go. But then you begin to think: it’s not just about how I’m using it and the quality of what I’m getting out of it, but what’s the environment, and what are the impacts of my using that tool? So, definitely have that governance, and also, from a cultural standpoint, the human in the loop.
I know that’s a phrase often used today, but it is really important to ensure that you have that human aspect, that oversight, and the ability to think critically about the outputs, responses, and information you’re getting from AI, and ask yourself, “Does that even make sense?”
Because, as we know, these tools can hallucinate. I always like to say it this way: just like people, our nature is often to want to have the answer. We never want to sit there and say, “Gee, I don’t know.” I think the AI tools work that way too; they rarely want to come back and say, “I don’t know,” even when they should.
Sometimes they’ll make things up, and you’ll look at the result and question it. So, sourcing the information that comes out of the tool, fact-checking it, and ensuring accuracy has numerous benefits; ultimately, it helps build trust in a world where people sometimes aren’t entirely sure about this AI journey we’re on.
Understanding and Addressing AI Hallucinations
Nolan: It’s interesting. One of the big things for me was when I started seeing hallucinations pop up. For those who don’t know, a hallucination occurs when you ask Claude or Copilot something like, “What is the typical onboarding time for a salesperson in publishing?” and it comes back very emphatically: “It’s 45 days. At a typical company, it’s 45 days.” If you then ask, “Can you cite that source for me? Help me understand where that came from,” it’s going to say, “Actually, you caught me. I don’t have a source. I drew from numerous sources. Let me try to answer that again and give you some links you can go explore.” I just did this with a buddy of mine. He wanted to know whether his ad copy was good or not. I’m not in his industry, so I asked the tool, “Hey, is 3% good?” I looked at the response and thought, “Oh, that actually sucks; this number doesn’t look right. There’s something wrong with it.”
A hallucination is the tool, like a golden retriever, wanting to please you, so it’s going to please you. Again, don’t let that stop you from using it. Think of it almost like the early Wikipedia days, when your professors said, “Hey, don’t submit a paper using Wikipedia as your reference.” At a broad level, great: take that answer and use it. If you’re looking for really specific details, the beauty is that you can actually ask the tool whether it’s making things up, and it will tell you. So, if you ever want to make sure, say, “Hey, do not make up any number. I only want a number you actually have.” Anyway, it’s very good on the front end.
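For readers who want to turn Nolan’s “do not make up any number” advice into something repeatable, here is a minimal sketch, written against the OpenAI Python SDK purely for illustration; the model name and prompt wording are assumptions, and the same instruction can be pasted as-is into Copilot, Claude, or Gemini chat.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The guardrail Nolan describes: forbid invented statistics and give the
# model explicit permission to say "I don't know."
GUARDRAIL = (
    "Do not make up any number or citation. Only state a figure if it "
    "appears in the material I provide or you can point to a real source. "
    "If you are not sure, say 'I don't know.'"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model works here
    messages=[
        {"role": "system", "content": GUARDRAIL},
        {
            "role": "user",
            "content": "What is the typical onboarding time for a "
            "salesperson in publishing? Cite your source.",
        },
    ],
)
print(response.choices[0].message.content)
```

The point is the permission structure: telling the model that “I don’t know” is an acceptable answer removes the golden-retriever incentive to invent one.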
Gannett’s AI Council and Tool Vetting Process
Nolan: You talked a little bit, Craig, that you’re on the AI council at Gannett. Is that right?
Craig: That’s correct.
Nolan: So, what does that entail? What are the kinds of things that Gannett is addressing at a large scale? Is it mostly just, “Hey, watch out for hallucinations,” or is it more about security risks?
Craig: It’s several different things. Certainly, security is an important part. We have different areas of focus within that council, starting with tools and technology: what tools and technologies will we use, and which ones are approved? We look at this from legal, security, and quality perspectives. It’s also a way to ensure that, as people come up with ideas, which we very much encourage, and test different things out, we have a formal review process in place.
All of our AI initiatives are run through a process that allows representation from different parts of the organization, enabling input and review from those who need to be involved. Folks will come, and they’ll present what they’re looking to do. It’s an opportunity for groups across the organization to ask clarifying questions and then approve the continuation.
Nolan: I like that. I like the idea that if it is something related to AI, whatever it is, there is a process to take it through. It’s not just a “no” or an “oh, let’s see.” It’s really nice that you took the time. You mentioned tools, and I literally just got off a call with someone in a large IT organization, tech people as well, and she was saying, “We’ve been trying to buy Synthesia for three months. We can’t get it over the line.”
She said, “And this is the age-old problem: by the time it gets approved, I’ll want to switch to Colossyan.” So, which tools do you use? Talk a little bit, Craig, about this predicament, especially because most of the people listening work in larger companies and regulated industries. How do you choose in a market flooded with tools, when there are so many to choose from and the acquisition process isn’t the easiest? How do you keep from getting stuck on one tool for three months while chasing all these shiny things?
Craig: That predicament isn’t new to tools and technology, or to tools and technology within L&D. We could say the same thing about the traditional LMS: I pick one today, but there are a whole lot of them out there, and then I see that this tool does this, and I’d like mine to do that. How do you keep from jumping from one to the next? It’s not that different when you consider the approach.
Strategic Tool Selection and Requirements for AI
Craig: Of course, with the AI lens, there’s additional scrutiny and focus just because of the content and how that’s being handled. What I’ve always believed in, whether it’s an AI tool, any L&D tool, or any technology for that matter, is that a good place to start is to define and document what my requirements are for this type of tool. If I’m going to choose an LMS or I’m looking for an AI video production tool, what specific needs must that tool fulfill?
That means understanding the guidance and governance we have around AI within our organization and making sure they’re reflected in the requirements. What do I mean by that? For example, if we’re going to be interacting with content within a tool, we don’t want that tool extracting our information and using it to train the larger model.
We need that safe space, if you will, where we can work without worrying about our information going outside of that bubble. That’s just one example: you may have requirements related to video production, but now you’re layering in the requirements that AI introduces.
Then we have metrics. If we’re going to use an AI video tool, part of the reason is to gain efficiency. What kind of efficiency are we aiming for, given our current workload and available resources? Do we want to produce 50 videos? To be 50% more efficient, or 90% faster? What is that metric, and can the vendor prove they’re able to meet it?
AI Use Cases and Vendor Engagement
Nolan: It’s interesting, Craig. You said something, and I immediately was like, “Yeah, yeah, yeah.” You don’t buy the tool; the tool doesn’t decide which features you use. You decide which features you use, and then you find the tool that matches those. However, I do think that we are at a time when it’s sometimes impossible to know what you need with AI.
With Colossyan, I would never have thought I needed the ability to take a PowerPoint slide and turn it into a video using an avatar of our CEO. I would never have known that use case existed. So, I find myself doing this, where that is what draws me in; I’m like, “Oh, I want to know what these use cases are.” I think it’s a little different from an LMS, which is a highly consolidated marketplace; they basically all do the same thing, and you should be familiar with what they do at this point. But it comes back to what the value of the thing is. It’s cool to chase a shiny idea: “Oh, cool, it converts PowerPoints into videos. Great. But what value does that bring back to my company?”
This is something I did recently: I said, “I want an AI tool that can write better marketing emails, and I want to be able to personalize every single email individually.” The idea came to me through a tool that was marketed to me; they said, “We’ll send you a free Yeti cooler if you take a demo of this tool.” I was like, “Summer’s coming, I need a Yeti, why not?”
So, the idea came: “Okay, personalization at scale, AI writing, great.” But then I cracked the nut open and said, “Okay, now let me go find all the tools that do this, and let me put some metrics down.” So, it is a little bit like the LMS space, where you definitely cannot let the tail wag the dog. However, I have found that, at least in this space, simply taking demos of tools is beneficial, as it helps expand your horizons.
I’ll ask you this question; you don’t have to answer if you don’t want to, but would you mind sharing what tools you are using? I think everybody’s always interested in what tools the professionals out there are actually leveraging. How do they like them? What do you find yourself using?
Craig: AI tools specifically, I assume you’re asking about?
Nolan: Yeah.
Craig: For video production, we are using Synthesia. One thing I will say about your comments is that a significant part of our approach with AI at the moment involves piloting tools, where we can work with a vendor on a trial basis. To your point, we may not know everything; we don’t know what we don’t know, and sometimes AI can help us explore the possibilities, or at least provide direction on the use cases for these types of tools.
We often conduct pilots, typically for three or six months, to dip our toe in the water and see how a tool works before we make a longer-term commitment. Given that the industry is moving so fast and new tools are popping up left and right, we avoid long-term commitments when we engage, preferring shorter terms that let us pivot more quickly than a three- or five-year agreement with a particular tool would.
Nolan: Really good point. I think in this space, we want to tread carefully, if you will. Something I do by default, and this is now turning into a “how to buy AI” podcast, is say, “Listen, I’m not going to sign anything longer than six months; three months if I can.” Depending on the tool, you can get away with that. If they’re bigger, usually they’ll force a year, but I say, “I refuse to do an opt-out. I will only do an opt-in.”
Many pilots are on a one-month or three-month schedule, and then they automatically roll into the next term. So, I do one of two things. If it’s a larger purchase that requires a PO, I make sure the contract specifies an opt-in rather than an opt-out. Otherwise, the second I sign the contract, I also send a cancellation notice. I say, “Listen, I intend to cancel at the end of three months, just so you know,” because often you have to give 30 or 60 days’ notice. I say, “Now it’s on you.” It’s amazing how that works.
It flips the script in their mind: “Okay, this person has chosen not to go with us unless we can prove they should.” One, it ensures you don’t spend money when you don’t need to, and two, it puts the vendor in the right position. The other thing, which I’m sure many of you know, is that if you’re buying a DIY tool like Synthesia, and I think Colossyan probably has one as well, there’s often a trial program available.
I bought Gemini for two weeks to see if I could produce a video similar to what I was seeing. I couldn’t; I wasn’t nearly good enough, or at least I didn’t love the video that came out. I had paid for the subscription, and even though I canceled it immediately, I still got to use it for that month. So, that’s another good tactic: the second you sign up, go and cancel. They’re still going to give it to you for that time, but you don’t have to worry about auto-renewals. That was a long monologue from Nolan.
Craig: I believe in being upfront: this is where we’re at, this is what we’re looking to do. Of course, we’re treading cautiously because the proof really is in the pudding. I see where this could lead, based on what you’re saying, but we really need to ensure that it actually achieves the intended outcome. So, we laid all that out. To your point, we’ll ensure that certain things are included in the contract.
We have a legal team that will review it and serve as a resource and guide. If you don’t have access to one, you definitely want to make sure you read the contract and negotiate those terms upfront. With a lot of these online tools, it’s like, “Hey, I’m going to go out, put in my email, and click the terms of agreement.” You never read the terms of the agreement, and you’re not really sure what you’ve just signed up for. Therefore, it is essential to negotiate those terms as much as possible upfront.
Nolan: Totally. There are numerous tools available.
Craig: I always look at it this way: with that many options out there, someone’s willing to play ball.
Nolan: 100%, 100%.
AI Tools in Practice: Synthesia and Microsoft Copilot
Nolan: You mentioned you’re using Synthesia.
Craig: Yep.
Nolan: I think I heard that you are also using a GenAI tool, like a version of Copilot or something like that?
Craig: Our primary GenAI tool across the board is Microsoft Copilot. We use M365 Copilot, so it’s integrated. You have Copilot chat for both work and the web, and it’s tied into your Microsoft Graph, encompassing all your Teams messages, emails, and documents stored in OneDrive. It can reference those, and you can prompt against them. We use that a lot, even in the L&D world.
One use case seems very simple but has been very helpful to us. Everyone listening to this podcast who is in L&D or instructional design understands the importance of working with stakeholders. Often, they come with a need, and you conduct a proper needs analysis. At some point in the process, you need the stakeholder to provide you with an outline or other relevant information, and that’s typically the first place things get hung up: you’re waiting for that stakeholder to provide you with everything, because they may or may not have it documented.
What we have found is that we can speed up the process by using the information gathered from the needs analysis to prompt Copilot, referencing key parts of that information, to create an outline of what the training program looks like. Then we’re able to take that and, of course, there’s a human in the loop, so we review it for accuracy, ensuring it didn’t hallucinate or fabricate information because it lacked proper references. We may make some tweaks, but there’s an efficiency gain just from that alone.
Then we can take that and present it to the stakeholder, so instead of having to start with a blank slate and a blinking cursor, they can react to something that already exists. We have found that when we can do that and they’re able to react to something, we get turnaround much quicker for several reasons.
Mistakes That Prompt Responses
Nolan: Right, and they react: “Oh, no, no, that’s not right. We need…” or, “Okay, I’m not starting from ground zero now. I feel like I can get this done in a reasonable amount of time.” What’s the principle where, if nobody is answering a question, you give them the wrong answer, and then they’ll answer much faster? If you ask, “What’s the square root of 81?” not a single person responds. But if I said, “I guess it’s 12,” somebody would immediately say, “No, it’s nine.”
So, I love that use case. It’s one that I demo all the time; I might have demoed it to you recently as well. I say, “Listen, you can’t imagine how much time it saves to hand an SME a document and ask, ‘Is this good?’ versus a blinking cursor and ‘Tell me what I need to know,’ to which they reply, ‘Well, okay, what do you need to know to train people? I don’t know.’” Yes, 100%. That is the number one call to action I always give people: more than likely, your company has some generic tool, whether it’s Claude, Copilot, Gemini, or ChatGPT. Load in your reference documents. Say, “Create a course outline for me.” Show that to your SME. You will save yourself countless hours of back and forth and waiting on people.
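For readers who want to try that call to action outside a chat window, here is a minimal sketch of the needs-analysis-to-outline workflow, again against the OpenAI Python SDK for illustration only; Craig’s team does this inside M365 Copilot by referencing documents directly, and the file name, model, and prompt wording below are hypothetical.

```python
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical file holding the notes gathered during the needs analysis.
notes = Path("needs_analysis_notes.txt").read_text()

prompt = (
    "Using only the needs-analysis notes below, draft a course outline "
    "with modules, learning objectives, and estimated durations. Where "
    "the notes lack detail, flag the gap instead of inventing content.\n\n"
    + notes
)

draft = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# Human in the loop: a designer reviews the draft for hallucinations and
# accuracy before the stakeholder ever sees it.
print(draft.choices[0].message.content)
```

Whatever the tool, the design choice is the same: the stakeholder and the SME react to a concrete draft instead of a blinking cursor, with a human review pass before anything ships.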
Copilot vs. Claude: Sticking With What Works
Nolan: How do you like Copilot, by the way? I used Copilot right when it came out, and I absolutely hated it. I was like, “This can’t do anything for me,” and I honestly never looked back, because I went so deep into Claude and it’s got all my stuff. Once you establish yourself on a platform, it holds all your context, and it’s hard to switch. Are you liking Copilot?
Craig: Personally, I do. As with any tool out there, there are pros and cons. People have their favorites, and often it depends on what they’re primarily using the tool for in their role. What I like a lot about Microsoft Copilot is its integration with emails, Teams chats, and documents. Instead of having to take a document and upload it, I can type a slash in my prompt and reference it, and it’ll pull that document in. It’s all contained. From that perspective, it’s more seamless, and that integration is personally why I find it very helpful.
Nolan: Maybe I should give it another try. I haven’t looked at it or used it in two years; I just got so deep into Claude. But there’s no denying the hassle of loading in a document and referencing it that way; it’s so annoying. So, I’m happy to hear Copilot has made those strides and organizations are using it. That’s a good shout-out.
Craig: I am a curious person, so I will say: explore. Depending on what you’re doing, sometimes you’ll find a different tool is better at, say, analytics. It’s getting to the point where, with a number of these tools, you really need to map out, “Okay, this tool does these types of things really well; that tool does those types of things really well.”
As you consider your AI tech stack and the tools you need, it’s hard to say a single tool would be ideal. Once you get one and start exploring another, I think the direction things are heading is toward a multitude of these GenAI tools, the Copilots, Claudes, and ChatGPTs. It will be a combination of those that adds value, as people learn which ones are truly good at what and suitable for each individual.
Nolan: That’s why I bought Gemini; I wanted to do video, and I couldn’t in Claude, and I was like, “Oh well, I heard Gemini Pro can.” So, you’re right, I think that’s absolutely where it’s going to go.
Measuring AI Impact and Future of Work
Nolan: We talked about purchasing these tools. The one thing we didn’t really discuss, though, is measuring the investments. You’ve obviously made personal investments of your time, but you’ve also acquired some tools. What are you doing to measure the impact of those?
Craig: I’ll be honest with you, we’re still working through all that. This is an evolution. When you think about learning, and we talk about this a lot, what is the success of learning? Is it that someone attended one of the training programs? That they applied what they learned and successfully implemented it, completing the task or tasks?
But then you look at the advantages of building that skill across many different tasks or scenarios. One is time savings and efficiency. This is where many people start getting concerned; they say, “Oh, AI is coming for my job.” I’m sure you’ve heard it, and I think the reality is that some jobs will change, some will go away, and new ones will be created. That’s not unique to AI; it’s happened over the course of history with technological advancements.
As we examine the efficiencies gained, we’re asking the question: if people are saving X amount of time, what can they now do with that time? Some groups may have a backlog of work that they can now address because they’ve saved time on basic tasks, such as summarizing meetings, drafting emails, and creating outlines. Nothing earth-shattering; with a little practice, you can be effective at it. So, there’s that aspect of it.
There’s a lot to discuss here, but at the level of a leader and manager of people, it’s about understanding how my team is using AI, having conversations with them about where it’s working and where they’re gaining efficiencies, and then talking about what the future looks like. What work can you now focus on that you couldn’t before, because your role carried a lot of administrative work, the kind of stuff that kept the train on the tracks: it had to be done, and it couldn’t be ignored?
But now, what can we do with our curiosity and what can we do with time to experiment and explore, and how can that move the individual, the team, the department, the organization forward because of those benefits of AI?
We’re looking at how we measure that and at ways to build accountability. Once you reach the point where people are becoming educated and buying in at a deeper level, what metrics will we put in place for accountability that drive increasingly deeper engagement with AI learning programs, AI upskilling, and ultimately, AI utilization? We’re in the process of doing that right now and figuring it out.
Learning is an important component, but I think it’s just one piece of the puzzle of successfully moving, and continuing to move, into this AI world.
Nolan: I think if you were to map it to Kirkpatrick’s four levels of training evaluation, it’s almost like AI is starting with the fourth, the business impact. I used to be able to do 40; now, can I do… well, I guess if we start from the other end, what would the equivalent of the smiley sheet be? “Were you able to leverage AI to produce an asset faster?”
I think that number one is almost like the table stakes of, “Can I do this job that took me eight hours; can I get it done in one?” Okay, cool. If that is the case, then you should invest, even if you don’t know exactly what you’ll use the other seven hours for. You should take advantage of those cost savings.
Level two, maybe, comes in then: what are we going to do with this extra time, and how much of it are we going to save versus reinvest? Are we going to shrink our budget, or keep our current budget and do more? We’re going to create an eLearning platform with a video for every course. Every course will be translated into 10 different languages, regardless of the format. Then levels three and four stay the same: three is the impact on the job, did we help people learn faster? And four is, did they perform, and what were the business results?
But I think we almost overthink ourselves out of doing number one. We say, “Well, we can’t define four, so let’s not even start at one. What’s the point of gaining efficiency if we can’t really measure that efficiency?” But we’re at such an early stage, and the efficiency gains are so high, that I feel we’re better off saying, “Listen, I know it took me X amount of time. If it can take me an eighth of that, I’m going to buy the tool and figure out the exact efficiencies down the road.” I feel that shift is already underway; I can see how it’s working in my company and others.
The Email Efficiency Myth
Nolan: When email came out, people thought, “Oh my gosh, think of how much time and money I’m going to save: no more stamps, no more going to the post office, no more writing letters by hand. Just think of how much time I’ll have.” That’s not what happened. Now we’re beyond email, into chat messages, and I get a thousand letters written to me a day, some short, some long. It doesn’t make my life easier; it makes my life harder, because anyone can send me a message at any point in time, and I’m expected to respond immediately. I’m sure we’ll get there with AI too.
So, what I’m saying is: right now, enjoy the heyday of not everybody using it. You’re in a position to be eight to nine times faster than those who are still writing letters by hand and mailing them. You are the equivalent of the first people using email. Make some hay while the sun is shining.
Craig: Exactly. Right, because that’s just for now. Ultimately, history has shown us that more people will eventually get on board, whether they initially wanted to or not. My advice to folks is to ask yourself, as you’re going through your day, “How can AI help me?” If you don’t know, I’ve found that if you go to the AI and ask how it can help, it can actually start giving you ideas: “Oh, wait a minute, maybe this actually can help me.” That’s another barrier; people say, “Well, I think maybe it can, but I don’t know how to go about it.” Use the AI to get past not knowing where to start. Now you’re using it as a learning tool as well.
Nolan: That’s why I said you can ask it, “Are you lying to me?” It’s a wild time to be in; I can’t compare it to anything that came before. Well, thanks, Craig, for your time. I know we kept you a little longer, and we experienced some technical issues at the beginning, so thank you for your patience. This has been a phenomenal podcast on all things AI.
Craig is on LinkedIn as well. If you ever want to sync up with him, he can discuss what he’s done, whether it’s the AI council he’s part of at Gannett or real-world applications. He is one of those guys who knows the hands-on work but also thinks very strategically at the top. So, he’s a wonderful resource, and I’m sure he’d love to connect and share his stories with you.
Closing Thoughts
Craig: Absolutely, Nolan, thanks so much. I enjoyed being here, enjoyed chatting with you. I look forward to continuing to chat with you about this topic in the future, as I’m sure we will for a good while to come.
Nolan: Sounds good. Thanks, Craig. Enjoy your day. Take care. Alright, bye-bye.