AI literacy is becoming essential across every organization, but most leaders are still figuring out how to measure it, teach it, and communicate its value.
In this episode of AI Answers, Paul and Cathy answer questions about emerging AI skills frameworks, why literacy matters for every employee, how to talk about risk and responsible AI guidelines, and what to do when teams resist training or demand proof before pilots begin.
Listen or watch below, and keep scrolling for show notes and the transcript.
Over the last few years, our free Intro to AI and Scaling AI classes have welcomed more than 40,000 professionals, sparking hundreds of real-world, tough, and practical questions from marketers, leaders, and learners alike.
AI Answers is a biweekly bonus series that curates and answers real questions from attendees of our live events. Each episode focuses on the key concerns, challenges, and curiosities facing professionals and teams trying to understand and apply AI in their organizations.
In this episode, we address 12 of the top questions from our November 14th Scaling AI class, covering everything from tooling decisions to team training to long-term strategy. Paul answers each question in real time—unscripted and unfiltered—just like we do live.
Whether you're just getting started or scaling fast, these are answers that can benefit you and your team.
00:00:00 — Intro
00:05:33 — Question #1: Have any AI literacy frameworks emerged that help assess and track employees’ AI skills?
00:09:45 — Question #2: How important is it for all employees to develop basic AI literacy?
00:12:34 — Question #3: How can leaders articulate the business value of investing in AI literacy when stakeholders aren’t yet convinced it matters?
00:14:27 — Question #4: What’s the most effective way to help senior executives understand the risk of not having AI guidelines in place?
00:16:36 — Question #5: When companies start drafting responsible AI guidance, do you recommend formal “policies,” more flexible “guidelines,” or something in between?
00:20:00 — Question #6: Many teams love the idea of AI but resist assessments, training, or structured onboarding. How can leaders overcome that resistance?
00:23:15 — Question #7: How should organizations respond when proof is demanded before pilots have happened?
00:26:29 — Question #8: Are there organizations successfully using a single overarching KPI to measure the impact of AI?
00:28:20 — Question #9: What’s your advice for getting data, governance, and access into shape so AI can actually deliver results?
00:32:48 — Question #10: How close are we to real enterprise adoption of AI Agents, and what should organizations be preparing for now?
00:38:39 — Question #11: Have you had a chance to use GPT-5.1 yet?
00:42:10 — Question #12: As generative AI reshapes search, what should marketers know about the shift from SEO to GEO?
00:45:39 — What do you think organizations should keep an eye on in the next few months?
This episode is brought to you by Google Cloud:
Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow. Google Cloud offers a powerful, fully integrated and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner.
Learn more about Google Cloud here: https://cloud.google.com/
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: There's way more people resistant to AI than there are excited about it, in my experience in companies, and so we need to do our part to try and bring those people along. Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute.
[00:00:19] Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI. But we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing.
[00:00:42] Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases, and strategies you need to grow smarter. Let's explore AI together.
[00:00:58] Welcome to episode [00:01:00] 181 of the Artificial Intelligence Show. I'm your co... wait, I'm your host, actually. Alright. Along with my co-host, Cathy McPhillips. I was just telling Cathy it feels like a Friday because we have a
[00:01:12] Cathy McPhillips: fun team lunch today.
[00:01:14] Paul Roetzer: We do, and it's like next week's the holiday, and I think my mind is already like just done, and I'm doing three podcasts this week, so I'm like.
[00:01:24] Oh, we're gonna make it through though. I promise. We're gonna do this. This is a special edition of the podcast. So if you are tuning in and you're wondering why there's a new episode of the Artificial Intelligence Show on a Thursday, 'cause we usually drop the weekly episode on Tuesdays: we have a series called AI Answers.
[00:01:41] It is presented by Google Cloud. This is the ninth edition of this that we are doing. The series is based on questions from our monthly Intro to AI and Scaling AI classes that Cathy and I do together. Oh, I forgot to say: Cathy is our Chief Marketing Officer at SmarterX, if I didn't give the full proper introduction.
[00:01:58] So thanks to Google Cloud for [00:02:00] sponsoring this series. As part of our AI literacy project, we have an amazing partnership with the Google Cloud marketing team that's been going strong for a year now. We're looking forward to a bunch of exciting things next year. They sponsor the AI Answers podcast series.
[00:02:15] They're also our partner for the Intro to AI and Scaling AI classes that we do each month, a collection of AI blueprints that are gonna be coming out soon, and then we team up on the Marketing AI Industry Council. So you can learn more about Google Cloud at cloud.google.com. So Cathy, this is, if I'm getting this information correct, we did a November 14th Scaling AI class.
[00:02:37] Does that sound right? Friday, November 14th, we did a Scaling AI class.
[00:02:41] Cathy McPhillips: That is correct.
[00:02:42] Paul Roetzer: And these are questions from that class that we didn't get to during it, maybe a few that we did, just to reiterate them for a larger audience. So we are recording this on Wednesday, November 19th. This will drop on November 20th, and then we'll be back for our regular weekly episode, episode [00:03:00] 182.
[00:03:00] Alright, if I didn't confuse everybody yet, Cathy, to recap: this is AI Answers. It is a special edition of our Artificial Intelligence Show podcast that Cathy and I host together, and I'm gonna turn it over to Cathy to bring us into that first question of the day.
[00:03:17] Cathy McPhillips: That's a great intro.
[00:03:20] Paul Roetzer: Ah, thanks. All over the place.
[00:03:21] Cathy McPhillips: Yeah. Hopefully by the ninth time we're doing this, people know to expect this every few Thursdays, but each week we get some new listeners.
[00:03:29] Paul Roetzer: We do have new listeners. They've never been here, so, yeah.
[00:03:32] Cathy McPhillips: So if you are a new listener, if you want to go back and listen to past episodes, obviously, they're very timely when Mike and Paul do the Tuesday episodes.
[00:03:42] Paul Roetzer: We are both on our game today, huh.
[00:03:44] Cathy McPhillips: But if you look at the AI Answers questions, those are questions from all these classes that we do, and those are more evergreen than the week-to-week episodes. So if you enjoy this, there are eight more episodes you can go back and listen to, and more to listen to in the future.
[00:03:59] Paul Roetzer: And we do [00:04:00] try and mix it up. Our team, Claire and Cathy, do a good job of trying. Sometimes we'll do similar questions, 'cause we get a lot of similar questions as we do these classes, but we do try and introduce new topics each time and try and pick some of the questions that maybe we haven't previously answered.
[00:04:14] So it's not completely redundant if you do go back and listen to the other eight. And again, you can scan the timestamps and just pick the questions you want to hear the answers to, the ones that are most relevant to you.
[00:04:25] Cathy McPhillips: The other thing is I may take some creative license every now and then if a question is very similar, like a legal question.
[00:04:30] Yeah, we get legal questions all the time. I try to put a different spin on it based on some news or something, just to make sure that it is relevant and not completely redundant to things we've answered before. So, very quickly: we do the class, we get all the questions, we export the questions from Zoom, and Claire runs them through AI and through listening to the episode, pulling out some key questions.
[00:04:53] Removes redundancies, prioritizes them, puts them in a format where they segue from one to the next. I go through, [00:05:00] do a quick once over, get them in a good spot, and then here we are today.
[00:05:04] Paul Roetzer: And then just from an answering perspective, we do this, like we would do it live. I don't see the questions when we do 'em live.
[00:05:09] We just kind of go off the cuff, and so we actually do the same thing here. Cathy and Claire put together this doc, but I actually don't pre-read these questions. So if I don't have a great answer, I will say that, and we'll look into it a little bit more and get more resources. But yeah, for me it's just kind of a live thing.
[00:05:29] We jump in, we record it, and we move on. So I do my best.
[00:05:33] Cathy McPhillips: Alright, well, let's get started. Okay. Number one: have any credible AI literacy or competency frameworks emerged that help leaders assess and track employees' AI skills over time?
[00:05:44] Paul Roetzer: That's an interesting one. I don't know that I've really seen a great model of this yet.
[00:05:49] It's definitely something that we think about internally. I have seen more examples of people saying it's going to be part of performance reviews, where there is some expectation level, whether it's [00:06:00] milestone- or certificate-based, like, we want you to achieve these five certificates or we want you to go through these programs, or it's more
[00:06:09] activity-based, like, we want you to build at least three GPTs for your job, things like that. So we are starting to hear about those examples, where more and more companies are starting to make AI literacy a requirement as part of your professional development program and starting to provide more resources to make that possible.
[00:06:28] So I do think in 2026 we'll probably start to see a lot more about how organizations are not only enabling AI literacy but tracking it, and then determining performance-based pay and promotions based on not only how literate but how competent you are with AI, and how much impact you're having with your use of AI.
[00:06:50] Cathy McPhillips: Have you given any thought to how you might assess your team? All of us.
[00:06:54] Paul Roetzer: That's a good thought. No, I don't know that we have, even outside [00:07:00] of that. We look at skills and traits from an individual perspective, and that's something we'll probably do a lot more of moving into next year, now that we're scaling our team.
[00:07:07] So integrating AI capabilities and AI literacy into the skills and traits that are required will become an expectation. I could see us having some more activity-based things. Certainly there's certificates in the trainings; we have this internally now, like, we want you to go through this. And honestly, we'll probably use our new AI Academy learning management system ourselves, function as a business account within there, set required learning journeys and milestones for our team, and then monitor if people are actually achieving those things.
[00:07:37] But I do think that for us, a lot of it really comes down to the impact you're having with the tools. How are you using them to improve workflows? What innovations are you driving? What growth are you contributing to for the company overall based on your use of them? So I think that's kind of how we'll start to look at it, but I do not have a blueprint in place for that.
[00:07:56] Cathy McPhillips: Yeah. And actually, I talked to Mike earlier this week and I said I would love to [00:08:00] spend an hour with him going through, okay, here's what I'm doing for MAICON, for example, what's my marketing plan, and here are the places I have AI integrated, and he can help me figure out what I'm missing.
[00:08:11] Yeah. Are there any big-picture things, because I'm so close to it, where he could just be like, oh my gosh, you know, you can try this or try this? And I'm excited. I know that's a little bit off topic from the question.
[00:08:19] Paul Roetzer: Yeah.
[00:08:19] Cathy McPhillips: But working with each other just to surface some new opportunities, I think will be really critical and helpful.
[00:08:25] Paul Roetzer: I wanna give you one example, and again, this is more performance-based. We're looking at the customer support side of AI Academy by SmarterX, and as we look to scale customer support to thousands, and potentially tens of thousands and hundreds of thousands, of learners in that system,
[00:08:45] the integration of an AI agent that can solve, you know, 80% of customer inquiries on demand, 24/7, is a really fundamental thing that we think is gonna be very important. And we think it's something we can actually introduce quite quickly. [00:09:00] And so that's one of those things where you can look at a team member and say, listen, we want you to be able to help us make this significant impact.
[00:09:07] And so you going and knowing how to do this, and actually creating the agent and testing the agent, that's a great impact. And when you look at the year in totality, oftentimes with our professional reviews, you give someone a chance to tell us your story: what was the year, what impact did you make?
[00:09:22] What are the big things you worked on? And so to be able to have people, and this is more qualitative, I would say, be able to say, listen, I actually contributed to building this agent that has this impact on the business; I created this GPT, which saved us 300 hours. That's the kind of stuff I would love to see.
[00:09:38] And I think the stories we'd all love to be able to tell as employees and leaders.
[00:09:43] Cathy McPhillips: Absolutely.
[00:09:45] Cathy McPhillips: So speaking of training, number two: how important is it for all employees, not just early adopters, to develop basic AI literacy? And what risks do organizations run if some people opt out or lag behind?
[00:09:58] Paul Roetzer: Yeah, I mean, I come at this from [00:10:00] quite a biased perspective, and I understand that, but I think it is the most fundamental thing. The way that AI is going to transform organizations is there's gonna be some top-down vision and resources provided and, you know, organizational structuring to enable it.
[00:10:18] But a lot of the innovation, a lot of the most impactful use cases, are gonna come from the bottom up. And that can be from an executive assistant to an intern, to a manager, a director, a VP. It can really bubble up. And so the more you empower people with not only knowledge but the tools, the greater chance you have of actually doing this and accelerating now.
[00:10:40] In terms of the risks of people opting out, the risk is they shouldn't be in the company in one to two years. Because, you know, let's say that you have 80% adoption. So, I don't know, let's just take a team of a hundred people, and 80 of them are like, alright, we get it, we're in, we're participating in this process, we're using the AI assistants,
[00:10:57] We're finding ways to innovate. And then there's the other [00:11:00] 20 people who are like, I want nothing to do with this. Well think about, go back to question one about assessing the impact and assessing people and their AI skills over time. If you know that someone who invests the time in getting the certificates and using the tools every day has a 10%, 20%, 30% greater impact on efficiency, productivity, revenue, growth, whatever those metrics are, how can you justify keeping the employees who refuse to do it?
[00:11:28] And again, I am as human-centered in my approach to AI as anybody. That's just a reality check. Like, you cannot, even as a leader...
[00:11:36] Cathy McPhillips: And that's not even, that's not even AI, right?
[00:11:38] Paul Roetzer: That's anything, any technology, right, that they just refuse to adopt. So, you know, back in my agency days, we would go in and do marketing automation setup and we would advise on strategies, like go-to-market strategies, and you would have the sales reps who refuse to use the CRM.
[00:11:54] Everything is just managed on a paper notepad or in Excel, and nobody has [00:12:00] visibility into it. They might be a good salesperson, but over time that just doesn't work. And so I do think that there's a lot of lessons learned when we look back at technology transformation and change management.
[00:12:13] But I think it's gonna be more obvious than ever with AI, because the potential for it to impact workflows and productivity is so dramatic that the people who refuse to do it are not only gonna be left behind, they're gonna start to drag down the team of people who want to grow the company and
[00:12:33] create those career opportunities.
[00:12:34] Cathy McPhillips: Yeah. That leads us to number three: how can leaders articulate the business value of investing in AI literacy when stakeholders aren't yet convinced it matters?
[00:12:44] Paul Roetzer: I'm a big believer in just making it as tangible as possible and talking to people in ways that
[00:12:50] make it relevant to them. And maybe it's the communications background, Cathy, that you and I have, but I'm not someone who feels like, I said it [00:13:00] as the CEO, so it matters to you, go figure it out. I know that there's probably a lot of leaders who take that approach. I don't think that's the right approach, oftentimes.
[00:13:09] So what I mean by all this is, let's say you have salespeople or someone in HR or finance that just wants nothing to do with this. Show them a GPT built to help them do the thing they don't like to do. You don't like doing these reports every week? Let me show you how you can do them in 10 minutes instead of three hours.
[00:13:29] Once you show them something that's personally relevant to them, you help them then connect the dots of how they could be using it to do other things. And maybe it is just starting with the things they don't really find fulfilling in their job. And then say, you know, make me a list of the five things you wish you had time to do each month.
[00:13:47] And then help them move things from the bucket of stuff I'm wasting my time on to things I want to be spending more time on. And once you do that, it starts to matter. Or, what are the KPIs [00:14:00] that impact their own raises and performance bonuses? Show them how AI can help them achieve those KPIs more quickly.
[00:14:08] Or, you know, surpass them. So you have to make AI personal to people, especially if they're not those early adopters who are gonna race forward and try everything. There's way more people resistant to AI than there are excited about it. That has been my experience in companies, and so we need to do our part to try and bring those people along.
[00:14:27] Cathy McPhillips: Definitely. Number four: what's the most effective way to help senior executives, especially in those regulated industries, understand the risk of not having AI literacy guardrails or gen AI guidelines in place?
[00:14:41] Paul Roetzer: Well, I mean, from a risk perspective, you could show 'em how it goes wrong if they make a mistake with it.
[00:14:47] But I don't know. I mean, I think show 'em what the alternative looks like. So, this morning I was working on a concept of building a GPT, [00:15:00] or Google Gem, board. And what I mean by that is, I don't have a board for my company. And so the idea I had was, what if I talked to Gemini 3, the new model that I've been pretty impressed with in the first 12 hours I've had it,
[00:15:14] and said, hey, what would an ideal structure of a board be for a company of our size? Who would be on that board? What would be their backgrounds? And then work with it: okay, that's the prototypical five-person board. Now go create those personas for me. Now I'm gonna train you on the personas you created, and we're going to have an on-demand board for me to talk to, with those five backgrounds, at any point.
[00:15:39] So if you take that example and you give it to a small or mid-size business CEO who maybe doesn't have a board, or a really fast-growing startup company that maybe wants some other opinions, and you just show them this really practical way to use AI, now all of a sudden it's like, oh my gosh, okay, I understand now the risk of not knowing this. Like, [00:16:00] my company's gonna fall behind if I don't know these kinds of things.
[00:16:03] And so to find those really innovative examples or use cases, you have to understand it. And so again, it almost goes back to the previous question: you have to make it personal for them. This is what could go wrong if you don't deeply understand this yourself. This is what you're missing out on doing.
[00:16:21] If you don't understand this, here's what happens if all the people that report to you understand it and you don't. So you have to know how to talk to people and what their triggers are that's gonna get them to care more, I guess is how I think about it. Okay.
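The on-demand "board" Paul describes can be prototyped with nothing more than a handful of persona definitions turned into system prompts. A minimal sketch, with the caveat that every role, background, and line of wording below is an illustrative assumption, not anything generated in the episode; the resulting prompts could be pasted into Gemini, ChatGPT, or Claude as custom instructions:

```python
# Sketch of the "on-demand board" idea: define board-member personas once,
# then turn each into a system prompt for a chat model. All roles and
# backgrounds are hypothetical placeholders.

BOARD = {
    "CFO-type": "30 years of SaaS finance experience; pushes on unit economics.",
    "CMO-type": "Brand and demand-gen leader; pushes on positioning.",
    "CTO-type": "AI platform architect; pushes on technical feasibility.",
    "Operator": "Scaled three services firms; pushes on process and hiring.",
    "Investor": "Growth-stage investor; pushes on market size and risk.",
}

def board_prompt(role: str, background: str) -> str:
    """Build the system prompt for one simulated board member."""
    return (
        f"You are a board member for a small, fast-growing company. "
        f"Role: {role}. Background: {background} "
        f"Challenge the CEO's thinking candidly from this perspective."
    )

# One reusable prompt per simulated board member.
prompts = {role: board_prompt(role, bg) for role, bg in BOARD.items()}
print(f"{len(prompts)} board personas ready")
```

From there, a conversation with each persona (or a round-table where every persona responds to the same question) is just a matter of starting one chat per prompt.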
[00:16:36] Cathy McPhillips: Which kind of leads into number five: when companies start drafting responsible AI guidance, do you recommend formal policies, more flexible guidelines, or something in between?
[00:16:47] And we talked about this a little bit in the Scaling AI class. Is there a name that resonates more? Is it more important to say guidelines versus policies, you know what I mean? We're trying to figure out [00:17:00] how you are enabling your company and your team to do things,
[00:17:04] Knowing that it's super critical that they follow the rules.
[00:17:08] Paul Roetzer: Yeah, I think this came up in the actual class as a policies-versus-guardrails question, something along those lines. And I don't honestly remember exactly what I said then, but my basic premise here is: we've seen this done many ways.
[00:17:23] Sometimes they are formal policies, because they have sign-off from the executive team and these are literally what you're required to do. Other times they can be more informal, and they are general guidelines. It depends on how you are using AI. So, for example, what is allowed to be uploaded into ChatGPT, Gemini, or Claude? That better be a policy, right?
[00:17:49] Because there is higher risk associated with that, it needs to be a policy, not a guideline. A guideline could be more along the [00:18:00] lines of when you disclose your use of generative AI, where there's probably some fuzzy middle: okay, we're allowing you some autonomy in making these decisions, but here is the general guideline as to when you should or should not disclose.
[00:18:15] And so I do think that, while it might be faster to get going with a bunch of guidelines, especially if it's, let's say, a marketing AI council and you don't have a full-blown corporate council and you're trying to just get things going within the marketing department, you may not actually have the authority to set formal policies around some of the things that would be required to be in these.
[00:18:38] But that's where IT and legal and the C-suite might need to get involved. So you may just set some basic guidelines so people are being safe and doing things in a responsible way. So part of it is what your authority is and what the charter of a council may be, or what the mission of creating the guidelines or policies is.
[00:18:59] [00:19:00] And part of it might be just the spirit of what you're trying to do.
[00:19:05] Cathy McPhillips: Yeah. And I think, you know, I look back on guidelines I've written, brand guidelines, social media guidelines. If someone doesn't adhere to those, it's not critical, right? If someone doesn't adhere to an upload policy, that's a different ball of wax.
[00:19:22] So I get that differentiation and why you should call them different things.
[00:19:28] Paul Roetzer: Yeah. And if you're in an industry where adherence is critical, a highly regulated one being an example there, or you're dealing with personally identifiable information, things like that,
[00:19:40] right? Then you have to have actual policies, and they're probably in the employee handbook and they're probably agreed to by everybody. Versus, you know, here's how you experiment with AI agents, but we suggest not using them to do these things because we just don't know if it's dangerous yet or not.
[00:19:57] Right? So, yeah, it's, it's a good [00:20:00] question though.
[00:20:00] Cathy McPhillips: Yeah. Number six: many teams love the idea of AI but resist assessments, training, or structured onboarding because they see them as time-consuming. How can leaders overcome that resistance and get teams moving?
[00:20:13] Paul Roetzer: My instinct is to go back to the first couple of answers here: show them the difference.
[00:20:19] So, again, say: okay, in your current role, you are spending five hours a week doing this one workflow that has these 10 tasks in it. Once you complete this training, we expect that process to take you 20 minutes, and you're only gonna have to do these two things. And so the more tangible you make the benefit of the training, versus, yeah, we're just checking a box and I'm watching these stupid onboarding videos and I'm not really paying attention and I'm just gonna get through it, the better: if you do this, you are gonna be more efficient and productive.
[00:20:52] And the AI-forward CEO memo, if you refer to that, clearly states the people who do this are going to have a greater impact on the company, and they're gonna get paid more money. Connect the dots of why you are doing it. It isn't just to check it off; it is literally to help you transform your own capabilities so you can make a greater impact on the business and open up career opportunities for yourself.
[00:21:15] And part of what a leader does is show it. Here's Cathy: Cathy went through this training last year. She now saves 40% on podcast production every week, and in exchange for that time, she went and launched these two campaigns that drove a million dollars. Like, show them that. Don't just talk about it.
[00:21:31] And if you don't have those use cases or case studies yourself, go find them and show them. But this does have to be supported by leadership. It can't just be words on paper. It has to be: we stand behind this, we are going to promote and advance the people who help the company become more efficient, more productive, more innovative,
[00:21:50] more creative.
[00:21:52] Cathy McPhillips: Yeah. I mean, from an employee's side of things, if you said, okay, you need to finish Piloting AI in the next three weeks, right? [00:22:00] Cathy, find eight hours to take this. I'd be like, I don't got eight hours to do this. But if you say, this needs to be done and here's why, I think, okay, yeah, I'll find the time, and here's what's going to not get done, or here's what I need to be more efficient on, or whatever.
[00:22:15] But I think also giving deadlines makes people get through stuff. And also, and maybe this is just me as a people pleaser, but if I knew my whole team was going through this and it was important, like if I didn't do it I was screwing up everybody else, I'd be more inclined to do it, because I don't wanna let everyone down.
[00:22:35] Yeah. We're doing this together.
[00:22:37] Paul Roetzer: Yeah, and it could be an integration of some cohort-based stuff, where you say, hey, we're going through this as a team, and we are meeting on December 5th, and we are gonna spend two hours talking about this and then doing an applied AI workshop where we're going to take these learnings
[00:22:51] and integrate them into what we're doing. And that's why at the end of every one of the courses we created for AI Academy, I end with an applied [00:23:00] AI experience. Because the whole premise isn't to take the course; the premise is to learn the information and apply it to what you do.
[00:23:07] So that's one way to think about it. To your point, have it be something more than just course completion. It is application of knowledge that we're going for.
[00:23:15] Cathy McPhillips: Yeah, absolutely. Okay, number seven: some leaders want hard ROI numbers before approving a centralized AI office or a top-down strategy.
[00:23:24] How should organizations respond when proof is demanded before pilots have happened?
[00:23:29] Paul Roetzer: I think there are very efficient ways to show hypotheticals, and I know hypotheticals maybe isn't the right word, or even minimum viable examples. And so the way I have done this, and the way I've trained it in our courses, is, again, I'll just go back to the podcast example.
[00:23:48] If your company does a podcast, or insert any campaign workflow in there, and it takes 20 hours, benchmark it. Get the actual data of here's how long it takes us to do this [00:24:00] thing. Then you go through and say, listen, we found that we can use Google Gemini to do these five things, and it's actually gonna save us 10 out of the 20 hours,
[00:24:10] and here's what it's gonna look like, we need Gemini licenses, that kind of thing. So you can make cases, and we've done this with some pretty big enterprises, this exact model: show what it looks like today and how much time and money it takes, and then show what it would look like tomorrow.
[00:24:25] So — we'll put this link in the show notes — I created an AI value calculator that tries to do this, where you take how much time goes into something, you put in an assumed efficiency gain, say 10%, the cost of the time that you're currently spending, and then the cost of the tech, the cost of the training, and what is the potential ROI.
[00:24:45] So you could use that tool also to kind of show an example — a third-party tool that helps you project these things. But showing actual workflows, actual business cases of here's what it is today, here's what it could be tomorrow, is the [00:25:00] best way to do it if you don't have the actual data. And it's really hard to refute at least getting permission for pilot tests based on that.
[00:25:07] All we need is three months and a thousand dollars, and we think we can save $5,000 or we can generate $15,000. Like, just make a business case.
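[Editor's note: the kind of math the value calculator walks through can be sketched in a few lines. Everything below — the function name, the hours, the dollar figures — is a hypothetical placeholder for illustration, not the actual calculator.]

```python
# A rough sketch of the pilot business-case math described above.
# Every number here is a made-up placeholder, not a figure from
# the actual AI value calculator.

def pilot_roi(hours_per_run, hourly_cost, efficiency_gain,
              runs_per_year, tool_cost, training_cost):
    """Estimate annual net savings from an AI-assisted workflow."""
    hours_saved = hours_per_run * efficiency_gain           # e.g. 10 of 20 hours
    labor_savings = hours_saved * hourly_cost * runs_per_year
    investment = tool_cost + training_cost                  # licenses + training
    return labor_savings - investment

# A 20-hour workflow, run monthly, assuming AI cuts it in half
net = pilot_roi(hours_per_run=20, hourly_cost=65, efficiency_gain=0.5,
                runs_per_year=12, tool_cost=300, training_cost=700)
print(f"Projected net annual savings: ${net:,.0f}")  # → $6,800
```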
[00:25:17] Cathy McPhillips: And we've been saying this now for, what, four years: just start with that one small thing. You know, spend a few hours doing something, see how much time you can save, see how many resources you can save, and go with that as your first use case.
[00:25:29] And it's not some big departmental overhaul, but it is like: look at this one little thing I can do that is meaningful. Can I do more?
[00:25:37] Paul Roetzer: and literally, I mean, you could do this in a spreadsheet that's just like column A is the task or the activity. Column B is current time to complete each activity. 30 minutes here, hour there.
[00:25:49] Column C is predicted time — how much time would we actually save? And if you want, you can throw a column D in there: what is the cost per hour of that time? Is it an employee making [00:26:00] $120,000 a year? That's $10,000 a month, 176 hours a month. You can get to a quick equation that says, okay, it costs us — I'm just gonna make it up — $65 per hour to employ this person.
[00:26:11] So here's what it would cost us before, and here's what it would cost us after, from a labor perspective, to do this thing. I mean, we all know the tasks that go into what we do. You can knock something out like this in 10 minutes, right? Or ask Gemini to do it for you. You can build a business case pretty quickly.
[00:26:27] Yep.
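[Editor's note: the four-column spreadsheet Paul describes can also be roughed out in code. The task names and hours below are made-up examples; only the $120,000-a-year / 176-hours-a-month arithmetic mirrors the conversation.]

```python
# The four-column spreadsheet sketched in code. Task names and hours
# are hypothetical; the salary math follows the episode's example.

tasks = [
    # (task, current_hours, predicted_hours_with_AI)
    ("Draft show notes",   2.0, 0.5),
    ("Pull promo clips",   1.5, 0.5),
    ("Write social posts", 1.0, 0.25),
]

annual_salary = 120_000
hourly_cost = annual_salary / 12 / 176   # roughly $57/hour fully loaded

before = sum(cur for _, cur, _ in tasks) * hourly_cost
after = sum(pred for _, _, pred in tasks) * hourly_cost

print(f"Labor cost before: ${before:.2f}")
print(f"Labor cost after:  ${after:.2f}")
print(f"Saved per cycle:   ${before - after:.2f}")
```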
[00:26:29] Cathy McPhillips: Okay. Number eight: are there organizations successfully using a single overarching KPI, like net revenue per employee, to measure the impact of AI? And is a unifying KPI even realistic right now?
[00:26:40] Paul Roetzer: So the value calculator that I referenced — again, we'll throw the link in the show notes — does use revenue per employee as sort of a universal metric.
[00:26:48] It is not a perfect KPI, but it is a very common KPI: how much revenue does each employee in the company generate? So I think I might have said this in the [00:27:00] class, but in a professional services firm — like, I owned a marketing agency for 16 years — you would usually want three times the cost of the employee to be the revenue they could generate.
[00:27:12] So if they, you know, cost a hundred thousand a year total comp, you'd want them at least billing $300,000 a year, that sort of thing. Revenue per employee numbers vary by industry. Roughly, they're probably in the quarter million to $400,000 range depending on the kind of company. Some companies might be much higher than that, into the 500 to 700,000 range.
[00:27:34] But it is a pretty good metric, and it is a number I expect a lot more companies to monitor and report on, because it indicates that you're more efficiently generating revenue as a result of the integration of AI. So that revenue per employee number should increase, because we should be able to be more productive and more efficient.
[00:27:58] Thereby we should be able to generate more [00:28:00] revenue with fewer people — that is the premise. It doesn't mean you have to get rid of people, but as you grow, you shouldn't need as many people. Therefore, your revenue per employee number should keep increasing —
[00:28:12] Cathy McPhillips: unless you're just growing that much and you need more people.
[00:28:15] Paul Roetzer: Right. Unless you're hiring ahead of, you know, the growth curve, basically.
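[Editor's note: the metric itself is simple division. With made-up numbers, the sketch below shows the point in the exchange above — revenue per employee can rise even while headcount grows, as long as revenue grows faster.]

```python
# Revenue per employee with hypothetical figures: the metric can rise
# even as headcount grows, so long as revenue grows faster.

def revenue_per_employee(annual_revenue, headcount):
    return annual_revenue / headcount

before_ai = revenue_per_employee(10_000_000, 40)   # $250,000 per employee
after_ai = revenue_per_employee(14_000_000, 50)    # $280,000 per employee

print(f"Before: ${before_ai:,.0f}  After: ${after_ai:,.0f}")
```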
[00:28:20] Cathy McPhillips: Number nine, many companies are realizing their data isn't ready for ai. What's your advice for getting data, data governance and access into shape so AI can actually deliver results?
[00:28:30] Paul Roetzer: And can AI actually help here? Work with experts in that space.
[00:28:34] You know, bring in the people within your company, or the outside advisors you need, to do all of those things. The biggest key for me is: don't wait for all of that to happen to get benefits from generative AI. There's too many times where I've gone into organizations and they're like, oh, you know, the IT team's working on this, or the data team's working on that.
[00:28:52] And so, yeah, we've got some Copilot licenses, but we're not training anybody yet, and only like 20 people actually have access to them, and they're just waiting. [00:29:00] It's a perpetual waiting game, and that is just not the way to do this. There are hundreds of use cases, specifically for generative AI tools, that don't need to touch any data.
[00:29:12] And so that's what I would advise people: just get good at identifying use cases that have a low-to-no risk profile, so there would be no objections from the legal team, the IT team, whomever, for the use cases you're pursuing. That's the key for me — find the right experts who can help you figure out the hard stuff, and then get really good at doing the low-to-no-risk use cases where generative AI can create immediate value
[00:29:39] right now, while you're waiting for the big stuff to be solved.
[00:29:43] Cathy McPhillips: And then from the side of AI actually helping: you know, if you are using a CRM and you're trying to complete some records, there's AI — there's Breeze Intelligence, there's things like that — built into some of the CRMs that we're using that could help us clean up some of our data, fix some things, [00:30:00] populate some things, from that side of things.
[00:30:02] So I think, when you say your data's not ready — maybe it's more ready than you think it is.
[00:30:08] Paul Roetzer: Yeah. And I will add to that, Cathy: Microsoft recently announced a custom fine-tuned version of Copilot in Microsoft Excel. So it's using an advanced version of OpenAI's models, but they've fine-tuned it — meaning they took the core model and trained it to function within Excel as an expert Excel user.
[00:30:31] And then Google Sheets is doing the same thing with Gemini. So now Gemini is baked right in — not the most current model that just came out yesterday, but it will soon be — and you can talk to the assistant right within Sheets, and it's been trained to function within Sheets. So you can say, I'm trying to use this data to do this.
[00:30:53] Can you analyze the data for me and find out where the flaws might be? What are the things I'm not seeing? What are the anomalies within the data? How [00:31:00] could I improve the data? So yeah, it's gonna get to the point where you just talk to the AI assistant that's embedded right into the database software you're using, and it can help you surface, like, what am I missing?
[00:31:12] Cathy McPhillips: What do I need to clean up first? Rather than, just like you said, sitting there like, okay, I'm just too paralyzed to get started.
[00:31:21] Paul Roetzer: Yeah,
[00:31:21] Cathy McPhillips: Let it help you figure out where you need to start.
[00:31:24] Paul Roetzer: Which, you know — part of this leads to this debate about who's gonna benefit most from AI tools and who might be impacted most on the job front.
[00:31:33] This bodes well for the senior people, because the value you extract from these tools comes from asking the right questions. So being able to go in and say: what am I missing here? What are the anomalies in the data? Do you see any trends from Q3 that might be hidden? Like, I noticed this — did you? If you're entry level, you don't know to ask those questions yet, right?
[00:31:56] You're just being trained on these things. But if you're a senior level person [00:32:00] and you now know you have an expert data analyst with you on demand 24 7, what would you ask that person? And so that's the mindset shift to just know you have that. And like, what do I ask it to do and then to know what it's capable of doing.
[00:32:14] Again, this is why literacy is the foundation of everything. You have to know what AI is capable of. You have to know what the tools you have access to are capable of, and then you gotta know what to ask them to do, and then what to do with the output and be able to verify that the output is accurate.
[00:32:30] Cathy McPhillips: I did that with some MAICON
[00:32:31] forecasting — like, is this an anomaly? Is this a trend? Is what I'm seeing actually there? Are you seeing the same thing? Stripping out any customer data, just looking at numbers and weeks and things like that. It's been really helpful.
[00:32:43] Paul Roetzer: Yeah, it's awesome. I, I've used it many times in the last, like 30 days.
[00:32:48] Cathy McPhillips: Okay. Number 10: AI agents are a favorite topic.
[00:32:52] Paul Roetzer: Yeah.
[00:32:52] Cathy McPhillips: how close are we to real enterprise adoption and what should organizations be prepared for now? What can we learn from more nimble [00:33:00] SMBs and startups?
[00:33:01] Paul Roetzer: So — I know we have some community members who, you know, take the classes, listen to every podcast,
[00:33:09] so this might sound like a broken record to some people, but for others I'll just reiterate some key talking points related to AI agents. One, the definition of what people call AI agents is not universally agreed upon. So there's a lot of confusion when we talk about AI agents. The main confusion point comes in with how autonomous they are — how much the human has to be involved in the planning, the execution, and then the evaluation of what they output.
[00:33:42] That's where a lot of the confusion comes in. So what I will say at a very high level is: they are marketed as being more autonomous than they are. So you may hear about these agents and think, oh wow, it's gonna take my job, or I can replace three people on my team, or I can, like, [00:34:00] really a hundred percent increase performance or efficiency.
[00:34:03] That is generally not true. What is happening is AI agents are basically just AI systems that can take actions to achieve a goal you give them, or a project you ask them to complete. So they can perform deep research: they can go off and look at a hundred web pages, summarize the findings, and then write a report based on the findings.
[00:34:20] That's an agent — it's going and doing things. It's often helping develop the plan of things to go do. So you may say, hey, I wanna research this topic in Google Deep Research; it'll go build the 10-step plan of what to do, and then it'll go do it once you approve it. So agents are at the point where they're very helpful.
[00:34:40] In some instances they can be mostly reliable, but they are not autonomous. They cannot do entire jobs for people. We are actually probably years away from that being the case — where we're saying we don't need any writers on staff, we're just gonna hire agents, and the agents are gonna [00:35:00] do everything.
[00:35:01] That's not what's happening. The agents are able to take a specific set of actions that complete tasks that are part of a larger job. And so the human is still there — the agent cannot do the job of the human. But there are some things, especially like customer service or BDR work, where these agents really can start to do 50, 60, 70% of the tasks those roles would normally do.
[00:35:27] So they're evolving. What needs to happen with them is they have to be trained to do specific work. So right now, the hot one this week is financial services — all the AI labs seem to simultaneously be working on building financial agents. Like, OpenAI announced a partnership with Intuit yesterday.
[00:35:50] And they're basically gonna try — if you think about what Intuit owns, I'm pretty sure they're in, like, Quicken, or there's a personal app I use, and then they have QuickBooks and things like that. Yep. So imagine [00:36:00] six months from now you have an expert-level financial analyst and advisor embedded right within QuickBooks and Quicken.
[00:36:08] That's the kind of stuff that's happening, and that becomes an agent, basically: hey, I wanna evaluate these five stocks, can you go do it for me? It comes back, gives you a report. Same thing in business: I want you to evaluate our spending over the last 12 months in our QuickBooks account, find ways we can cut costs on things maybe we're not getting full value out of.
[00:36:26] Cathy McPhillips: Yep. It'll go through.
[00:36:27] Paul Roetzer: That's the kind of stuff that'll happen. It'll happen industry by industry, career by career, but it's gonna be sort of a progressive thing over the next couple years.
[00:36:36] Cathy McPhillips: And I'm sure that these companies building these agents are watching all the things that we're doing with these agents, you know, in whatever form —
[00:36:43] Paul Roetzer: Yeah.
[00:36:44] Cathy McPhillips: — to develop them, to make them better. You know, a few years seems like a long way away, though.
[00:36:48] Paul Roetzer: I mean, my current marker for this is that OpenAI has publicly stated their intention to build an AI research agent by [00:37:00] spring of '26, and then a full-blown AI researcher, I think, by late '27. An AI researcher is probably the number one thing
[00:37:10] many of the labs are focused on building — if they haven't achieved building the AI researcher agent, that is their number one priority. My assumption is that agents in other fields are gonna probably follow behind. But when I'm talking about replacement-of-job agents by '27 — in that instance, agents that can help a marketer or a consultant or a CEO or a BDR or a customer service rep — that literally just takes a startup building a thing to do that.
[00:37:42] And that is happening now. So in '26 you will see a lot more advancement in these agents: more reliability, longer time horizons on the tasks they can do. So it's coming — it's just not as here as it maybe feels like, or sounds like, when you hear [00:38:00] these companies talk about their products.
[00:38:01] Cathy McPhillips: Yeah.
[00:38:02] Jeremy and I were talking to one of our partners earlier this week, hearing what they're doing with some of their agents, and it was pretty remarkable. You know, we're trying to figure out what the human's doing and what the agent is doing — and that will obviously grow. But it was very fascinating to listen to what they're doing, where the human's staying in the loop, and what they're able to just set and go.
[00:38:22] It's been interesting.
[00:38:23] Paul Roetzer: The custom-built, fine-tuned agents are way further along than the general agent where I just go into ChatGPT, use its agent mode, and have it go do something for me. So there are definitely pockets where agents are moving very quickly.
[00:38:39] Cathy McPhillips: Okay. Number 11: have you had a chance to use GPT-5.1 yet — and we can throw in the new Google model
[00:38:44] as well? What stands out to you in terms of capability, reliability, or safety, and what does it signal about where models are headed?
[00:38:52] Paul Roetzer: So I would refer to episode 180 — are we on 181 right now? Or is this 182? Okay. So episode 180, [00:39:00] that would've dropped on November 18, the lead topic was GPT-5.1, so you can go hear
[00:39:06] the full conversation that Mike and I had. My experience with it has been relatively limited so far in the first five days or so that it's been out. I have used it quite a bit, but not in any use cases where I can tell a massive difference yet. Like, I haven't run it through internal benchmarks of, okay, here's what I did before versus after.
[00:39:29] So it's been very good. The basic premise, which we talked about in episode 180, is that OpenAI sort of intentionally underplayed 5.1. It actually seems to be a very good fine-tuned version of the model. As I explained on the podcast, it's not a new model in the sense that they retrained a full new model.
[00:39:52] It seems like they just fine-tuned the existing model for things like writing and coding and math and science, [00:40:00] financial advice, and things like that. They're basically taking the core model and making it smarter at certain things through reinforcement training. That being said — Gemini 3,
[00:40:10] which we will talk about on the next weekly episode, which will be episode 182,
[00:40:16] coming out next Tuesday. I gave it a use case this morning that kind of blew my mind. I listened to a podcast — it would've been Tuesday night, the night it came out — with Demis Hassabis, where he was talking about its visual capabilities, like to look at handwriting, or to look at a whiteboard, and process things.
[00:40:40] Even with, like, super sloppy writing. I gave it a whiteboard session from an internal meeting from a few weeks ago, and honestly, it did a better job of organizing the information than I would've done. And I'm the one that wrote everything on the whiteboard — I was struggling to understand it myself. And I was like, oh, let me see —
[00:40:58] this is what Demis said it was good [00:41:00] at. So I literally just gave it to it. I said: here's a brainstorm session, here's what I'm trying to understand, here was the gist of the meeting. And it's like, boom. And I was like, all right, what should we be thinking about that we didn't include in this? And it gave me this 10-point list.
[00:41:11] It was really good. So Gemini 3 — all reports right now from people who had early access are that it is the state of the art now. It is, in some ways, a leap ahead of the other models that are out right now.
[00:41:27] Cathy McPhillips: Wow. Yeah. So when Claire is doing these episodes of AI Answers, she's using AI to help her, you know, remove the redundancies — everything I said earlier in the episode.
[00:41:37] And she put a note in, and she said: you don't have to include these insights, but the insights from 5.1 were distinctly different and abundantly more helpful compared to earlier models.
[00:41:48] Paul Roetzer: That's cool.
[00:41:49] Cathy McPhillips: Yeah,
[00:41:49] Paul Roetzer: Yeah, yeah. It's better at thinking — that's the main thing with 5.1, which means its reasoning capabilities. That is a frontier being pushed by all [00:42:00] the labs: the continued advancement of reasoning, which lets it solve more complex problems, more strategic situations.
[00:42:08] Yep.
[00:42:10] Cathy McPhillips: Number 12: as generative AI reshapes search, what should marketers know about the shift from SEO to GEO? How should teams adapt to where search traffic is actually going?
[00:42:23] Paul Roetzer: GEO — I've heard AEO, GEO, yeah. Basically: how do we show up in the language models? That's the question people are really asking.
[00:42:33] So, again, if you're not in this world — if you're not in, like, SEO or marketing, and you're maybe just a business leader or educator, somebody listening to this: in the marketing world, for 25 years, you have been trying to show up high in search results on Google. And you do that by publishing authoritative content through blog posts and podcasts and webinars — [00:43:00] you do all these things to get found on the internet.
[00:43:02] And then there's, you know, other things like meta descriptions and stuff like that. So the question in the last couple years has been: well, how are these language models, how are these AI assistants, surfacing information as they start to integrate more links into the outputs of what they do?
[00:43:19] We have AI Mode in Google, which is becoming more and more prominent by the day and figuring into the future of what they're doing. How do we show up in those results? And the short answer is: no one's really sure yet. It's probably easier to gauge how Google's doing it than OpenAI and Perplexity and others, because they're newer to that game — how they index things and how they surface things — whereas that's been Google's life for 26 years.
[00:43:48] That's what they've done. So the general advice we give right now is diversity of content in many places. [00:44:00] So when I think about our own strategy: we do the podcast, it gets published on podcast networks, it goes on YouTube, there's a transcript. We do have a blog post with the transcript and the show notes, but we also enable the transcript to be other places.
[00:44:12] Just get the information out there. Go do podcast interviews — be in all these places. Ironically, I was having this conversation with somebody last week: PR actually may have a renaissance, because media reports, clippings, mentions — those kinds of things where you're going and earning media — might actually be incredibly important to showing up in these AI assistants and within agents and things like that.
[00:44:43] So yeah, it's something to keep an eye on. There are people who are way more expert at this topic than I am — Wil Reynolds, Andy Crestodina, a couple people that have been talking about this, and Chris Penn talks a lot about this. So I would say go find those people. If this is what you do, and you really are trying, for [00:45:00] 2026, to figure out what is our plan, what are we gonna do —
[00:45:02] go look at that. Go run a deep research project: what are the smartest people in the SEO space saying about these things, and what are the strategies you could be looking at? It's a good one to lean on a Gemini 3 or a GPT-5.1 and see what they can help you with.
[00:45:16] Cathy McPhillips: Yeah. And the things that I'm hearing are like, go back to our content marketing roots.
[00:45:20] Be helpful. Be relevant. Yeah. Be consistent. Be everywhere. All of those things matter right now more than ever.
[00:45:26] Paul Roetzer: Yeah. Be where your audience is. Don't make them come to your site — it might just be AI agents coming to your site. Just solve for the customer, as crazy as it sounds.
[00:45:36] It's like everything that's old is kind of new again.
[00:45:39] Cathy McPhillips: Yep. Okay. Number 13: before we close out, anything else you're watching that you think organizations should keep an eye on in the next few months? I know you're gonna talk about predictions at some point soon, but any thoughts?
[00:45:53] Paul Roetzer: I just — and again, I don't want to end this on a negative note, but I really think that [00:46:00] there's gonna be a far greater impact on jobs than people realize in the near term.
[00:46:04] And I mean that within, like, three to six months. I would just tell people to have a sense of urgency to figure this stuff out, to keep taking the next step, to improve your knowledge and capabilities, and to do what you can to bring along the people in your company who maybe don't want to do this or are sitting on the sidelines.
[00:46:24] I think we need, not fear, but a greater sense of urgency to figure this stuff out. The models are getting smarter — I say this all the time. They're getting smarter, and they're getting smarter fast. Just based on the Gemini 3 data from yesterday, there are no signs of a slowdown. In the US, there's a lot of chatter right now —
[00:46:50] a lot of chatter around AI regulations at the state level, and the federal government is trying to stop the state-level efforts. Those regulations [00:47:00] could play in here. But I just really feel like there is a necessity to figure this out for our own good, and there is also this incredible opportunity to frame it in a positive way — to truly just reimagine what we can do with our businesses.
[00:47:16] And I know, for me, every day I just want to find those four hours in the day to sit back and think about what's possible now — what can we do next year with events, with education, where we can just totally do something new and exciting? And that, to me, is an amazing time to be living through — that we get that chance.
[00:47:35] Like the board thing — I mean, I literally thought about it this morning, driving home from the gym. I was like, oh my God, I wanna build that, like, this weekend. In my head, I'm so excited to do this thing. And I don't have to go hire developers. I don't have to find anybody technical.
[00:47:49] I literally just have to find a half hour to write a system prompt, throw it into Gemini, and see what happens. That's amazing. And so I think we should have that mindset of [00:48:00] all the things we can now do if we understand what it's capable of and learn to look at problems differently, and growth opportunities differently.
[00:48:07] So again: a sense of urgency on both fronts — for the good stuff and the stuff that might affect us negatively — so that we're at least honest with ourselves and our peers about it, and we're doing something about it. 'Cause sitting back is just not gonna help anything.
[00:48:21] Cathy McPhillips: Right. Which, you know, brings us to our free content.
[00:48:24] We have lots of stuff — you can buy lots of things from us — but we also have two awesome free classes that we do once a month: our Intro to AI, our next one's December 3rd, and our next Scaling AI is December 12th. Those both happen at noon Eastern. You can go to our website and register for free. Send your teams.
[00:48:40] If you're one of the leaders who is trying to get your team to understand some of this, let Paul explain it to them, and then get to work. So I would implore all of you to take the class. If you've done it before, come back — we'd love to see you again — and let us help you get there.
[00:48:57] Paul Roetzer: And if you're ready to really drive the transformation, go [00:49:00] to Academy dot SmarterX dot ai and check out all the professional certificates. Get started with the Foundations collection, and then personalize your learning journey, or your team's journey, from there. That is a big focus of what we're doing right now: trying to make that stuff as
[00:49:14] robust, customizable, and valuable as possible, so it really helps people be ready for what's coming and bring others along with them. So, absolutely.
[00:49:23] Paul Roetzer: Alright, thanks, Cathy. And as I said, episode 182 — we will be back on Tuesday, Thanksgiving week. We will have another episode, and yeah, that'll be my third one of the week.
[00:49:36] It's just like: keep going, keep going. Alright, thanks everyone.
[00:49:40] Cathy McPhillips: Thanks everyone.
[00:49:41] Paul Roetzer: Thanks for listening to AI Answers. To keep learning, visit SmarterX dot ai, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it.
[00:49:58] That's it for now. Continue [00:50:00] exploring and keep asking great questions about ai.