The Artificial Intelligence Show Blog

[The AI Show Episode 187]: AI Answers - Overcoming the AI Stigma, Vibe Coding, Redefining Productivity, Building AI-Native Companies, and Finding Trusted Sources

Written by Claire Prudhomme | Dec 18, 2025 1:15:00 PM

As we close out the year, this AI Answers episode offers a reflective look at how organizations are actually navigating AI adoption. 

Cathy McPhillips and Paul Roetzer take a step back from tools and headlines to talk about the human side of AI: leadership behavior, workplace culture, and how long-held ideas about productivity and value are being quietly challenged as AI becomes part of everyday work.

Listen or watch below, and keep scrolling for the show notes and transcript.

Listen Now

Watch the Video

What Is AI Answers?

Over the last few years, our free Intro to AI and Scaling AI classes have welcomed more than 40,000 professionals, sparking hundreds of real-world, tough, and practical questions from marketers, leaders, and learners alike.

AI Answers is a biweekly bonus series that curates and answers real questions from attendees of our live events. Each episode focuses on the key concerns, challenges, and curiosities facing professionals and teams trying to understand and apply AI in their organizations.

In this episode, we address 14 of the top questions from our December 12th Scaling AI class, covering everything from tooling decisions to team training to long-term strategy. Paul answers each question in real time, unscripted and unfiltered, just like we do live.

Whether you're just getting started or scaling fast, these are answers that can benefit you and your team.

Timestamps

00:00:00 — Intro

00:04:05 — What responsibility do leaders have to confront the fear of AI head-on?

00:05:53 — Is there value in intentionally keeping some work, not just for fact-checking or “human-in-the-loop” oversight, but as a form of cognitive reset?

00:09:18 — Should productivity still be the primary measure of an employee’s value?

00:12:13 — What are behaviors executives should model to make AI use feel safe, normal, and expected across teams?

00:17:16 — What are the clearest structural signs an organization is talking about AI transformation while actively resisting it?

00:20:47 — Why do so many organizations default to treating AI as an IT initiative?

00:22:17 — What is vibe coding?

00:23:47 — If you could go back to the very first AI Show episode and correct one major prediction or assumption you had about AI, what would it be and why?

00:28:04 — What is one listener question that fundamentally changed how you think about AI?

00:30:43 — What has been the most personally challenging part of leading conversations about AI’s impact on jobs, identity, and the future?

00:35:48 — Where do you think most companies actually over-invested in AI?

00:39:53 — What is one thing you would refuse to automate, no matter how good the tech gets, and why?

00:43:04 — What is your measure for adding a podcast or other medium to your trusted resources?

00:45:03 — How can listeners think about simplifying how they’re thinking about, piloting, and scaling AI? 

Links Mentioned

This episode is brought to you by Google Cloud: 

Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow. Google Cloud offers a powerful, fully integrated and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner.

Learn more about Google Cloud here: https://cloud.google.com/  

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: I thought that by 2020, AI was gonna be everywhere. Everyone would've already adopted it, and we wouldn't even need separate AI education and events. We would just have marketing events and business events. I was very wrong on that. I realized that I had overestimated how quickly adoption would happen, but I'd actually underestimated the total impact it was gonna have on business and society and the economy.

[00:00:23] Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI.

[00:00:42] But we never have enough time to get to all of them. So we created the AI Answers Series to address more of these questions and share real time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization.[00:01:00] 

[00:01:00] These are the practical insights, use cases, and strategies you need to grow smarter. Let's explore AI together.

[00:01:12] Welcome to episode 187 of the Artificial Intelligence Show. I am your host, Paul Roetzer, along with my co-host Cathy McPhillips, Chief Marketing Officer at SmarterX. Hello, Cathy. 

[00:01:23] Cathy McPhillips: Hello. 

[00:01:24] Paul Roetzer: Although I've seen you today, we're actually in the office together today, which is lovely. It is lovely. A good part of the team is here today.

[00:01:30] Everybody's kind of cramming in before the holidays, trying to get everything done so we can take a little time off. So this is a special edition. If you are hearing our voices and not sure why you're hearing this on a, maybe on a Thursday or Friday, a new episode dropped in your feed. We do these special AI Answers episodes every other week basically.

[00:01:48] And so this is a series we started beginning of this year, late last year. I don't know when we actually started doing this. I'm not sure. The Scaling AI one, we started this year for sure. 

[00:01:58] Cathy McPhillips: It's like I'm not even sure what day of the week it is, so [00:02:00] I know that's true. 

[00:02:01] Paul Roetzer: All right, so this is the 11th episode of our series AI Answers.

[00:02:04] This is presented by Google Cloud. The series is based on questions from our monthly Intro to AI and Scaling AI classes, along with some of our other virtual events. So basically, if you're not familiar, I teach an Intro to AI class for free each month. It's on a Zoom webinar, and then a Scaling AI class.

[00:02:22] Cathy co-hosts that with me and moderates the Q&A at the end. Intro to AI will get anywhere from, I don't know, like a thousand to 1500 people. We usually get over 2000 registrants. Yeah. And so, you know, you get about a thousand or so people show up and we'll get dozens of questions. And so the idea behind the series was like, we can't possibly answer all those questions in the hour we have with attendees.

[00:02:44] So let's do this AI Answers to get through some of those other questions. And then we started doing the same thing with Scaling AI. So that class we just did the 13th. So today's AI Answers episode is actually based on the 13th edition of the Scaling AI class. And so same deal. We get a bunch of questions we can't get [00:03:00] to.

[00:03:00] And so in this AI Answers podcast series, we just try and answer as many questions as we can. So right now we've got 14 questions lined up for you. And that's what we're gonna go through today. So this series is brought to us by Google Cloud. So they are a partner and sponsor for a number of initiatives under our AI Literacy Project, including this class, the Intro to AI and Scaling AI classes themselves, as well as the AI Answers podcast series.

[00:03:25] And a number of AI blueprints that are gonna be coming out in January that we're very excited about. And then our Marketing AI Industry Council that we partnered with them for. So you can learn more about Google Cloud at cloud.google.com. You can also check out their AI Boost Bites. We'll put a link in the show notes.

[00:03:42] It's a series of short training videos that are designed to help build AI skills and capabilities in 10 minutes or less. So, Cathy, I'll turn it over to you if there's any housekeeping items I missed here. Otherwise, we'll jump into some questions. 

[00:03:56] Cathy McPhillips: No housekeeping, other than just saying thanks, Claire, for always helping us [00:04:00] get these questions organized and ready to go for this episode.

[00:04:03] So let's jump in. Great. All right. 

[00:04:05] Question #1

[00:04:05] Cathy McPhillips: Number one. In many organizations, people quietly mock or look down at colleagues who use AI, treating it like a shortcut or a crutch. What fear is actually driving that reaction? And what responsibility do leaders have to confront it head on? 

[00:04:19] Paul Roetzer: This is an issue I, I've been personally noticing, I don't know Cathy if you've heard this a lot, but increasingly when we think about adoption of AI within organizations, and obviously like a lot of what we think about, at SmarterX is adoption of AI literacy, AI training, and education.

[00:04:33] There is, there's definitely, a perception from some that AI is not a good thing. That whether it's a fear factor, like they fear job replacement or it's too abstract and they maybe just don't understand it, they feel threatened by it, or they've gotten to where they are in their career after 5, 10, 20 years being an expert and doing things the way they've always been done.

[00:04:57] And this is like a change and maybe they just don't [00:05:00] have the same level of confidence. So there's lots of different reasons, psychologically, like why people wouldn't just, you know, jump in and embrace AI. But we are definitely seeing this and so I think this is why we often talk about, when we look at AI literacy, education, and training, it's very much a change management thing as well.

[00:05:19] And so what we've seen time and time again is people just go buy Copilot licenses or Gemini licenses or ChatGPT licenses and just give them to the team. There's going to be a percentage of those people who want nothing to do with those licenses. Maybe they don't know what to do with them, but a lot of 'em just don't want to have to do it.

[00:05:37] And so I do think that it's very important that we think about that. We, we accept that and we address that, and we do take a change management approach to the integration of AI technology and the integration of AI education. 

[00:05:51] Cathy McPhillips: Absolutely. 

[00:05:53] Question #2

[00:05:53] Cathy McPhillips: Okay, number two. This is a two-parter, our first two-part question in our series.

[00:05:57] We often talk about AI eliminating [00:06:00] rote, repetitive work, but some of that work, while mindless, is still productive. It creates momentum and white space and gives people a mental breather. So two questions. Number one, is there value in intentionally keeping some of that work, not just for fact-checking or human-in-the-loop oversight, but as a form of cognitive reset?

[00:06:18] Paul Roetzer: That's a really good question. I'm not sure if I've ever really thought about it in that way. I totally get it. Like sometimes the mindless stuff, like for me, and maybe I'm just gonna like zoom out and think like personally here for a second. like when I'm doing creative work, strategic work, I have to listen to instrumental music.

[00:06:38] Like I can't listen to the normal kind of music I would listen to, like the stuff that gets you inspired and fired up, the kind of stuff you do during a workout, when you really need to focus on like the words and things like that. So, I find that deeply fulfilling work to do, [00:07:00] but I also love the work, to this, you know, attendee or listener's point here, where I can put my other kind of music on, with words, and I'm just filling out a spreadsheet or I'm just doing the thing that requires me to just spend an hour on the thing and you still feel fulfilled when you're done.

[00:07:22] The reality is a lot of those kinds of things will be able to be done by AI. And so this question of like, what is the value of that work? Sometimes it's not just doing the tasks, but the tasks themselves actually like clear your mind to allow you to get back into the deep thought thing. I don't know.

[00:07:41] I mean, that's interesting. I, I totally get the perspective that that is valuable to go through that process. I think you're gonna have to work for companies and leaders who would understand that value and be like, yeah, I get that you like doing the thing that takes an hour, but realistically we can do that in five minutes with AI, so we'd rather you [00:08:00] didn't waste the hour.

[00:08:01] Cathy McPhillips: Yeah. 

[00:08:01] Paul Roetzer: I don't know. I could see that being a challenging discussion internally for people. 

[00:08:05] Cathy McPhillips: Yeah. I mean, like last week I was putting post-its on the calendar on my wall. Just the mental break, while I knew I was still doing something productive, gave me a chance to just kind of reset before I jumped into the next thing where I really needed to be heads down focused.

[00:08:21] Paul Roetzer: Yeah. And I do, like, again, I didn't really stop and think about this until this question came up, but there are definitely things I do that are just kind of monotonous, but they are, like, a mental break for me. And I, I like going through the process and I, I don't know, and part of this goes back to this idea, you know, I focused on this in my AI for Writers keynote this year, specifically related to writers.

[00:08:42] Like, just 'cause AI can do something doesn't mean we have to let it, like, right. There's just tasks that are valuable to us as humans for different reasons. Sometimes the human in the loop matters and other times it's because we actually enjoy it. It's a part of our job we don't really wanna let go. Yeah, I, I could see that going both [00:09:00] ways in the future, but I think knowing what those things are that still matter to you to do is, you know, an important first step for people when they're thinking about AI integration.

[00:09:07] Cathy McPhillips: Right. And what gets them through that process. If they know they need 45 minutes of downtime to do something productive, to get them to that next thing, then yeah, that does make sense. 

[00:09:18] Question #3

[00:09:18] Cathy McPhillips: Which brings us to the second part of the question. Mm-hmm. In an AI-enabled workspace, should productivity still be the primary measure of an employee's value?

[00:09:27] Paul Roetzer: I don't know who asked these questions, but these are really, really good questions. Should it be versus is it going to be are probably, like, you know, the important distinction here. So should it be, my argument would be probably no. Again, this depends on what your job is, what the industry is, things like that.

[00:09:47] So if I go back to, and I'll go back to my agency days, so if people aren't familiar, I ran a marketing agency for 16 years. Productivity mattered greatly. Like how much you got done in a one-hour time period. What was the value [00:10:00] created for the client. Like that was, that was very important, but again, productivity versus value.

[00:10:07] If an employee spent an hour producing two outputs and they created no value for the client. And then another employee spent 10 minutes producing something that created dramatic value for the client. Like it's the value of the output that actually matters, not the process of just creating a bunch of things.

[00:10:24] And so my general feeling, and I certainly, you know, take this approach with SmarterX is I don't track like when people work or where they work or like what they're actually doing down to a tactical level. Like I'm not that concerned with those things. I think about what is the value they're creating and we definitely have people within our organization who maybe in fewer hours can create disproportionate value.

[00:10:49] And it's, it's funny actually, this question's coming up and I'm sort of heading in this direction. I got home yesterday, so I started working yesterday at 5:30 in the morning. Now I listen to podcasts while I'm working out. So it's like kind of working out, kind of [00:11:00] like working at the same time.

[00:11:02] Then we came in, we had multiple meetings. I did a bunch of, like, what I think might end up being disproportionately valuable things, like thinking about stuff we could launch next year. And so I had what was probably like a really good day. And then I was fried and I go home at like 3:30 and I was literally sitting in the living room with my wife and my kids and I'm like, you know what?

[00:11:21] I should probably work until 5:00 PM. Like there's still more work that could be done. I think I actually created a disproportionate amount of value today. Like I think it's okay if I take an hour and a half off and I don't continue to be productive the rest of the day. And they were like laughing at me, but my wife's like, that's actually probably a good way to look at it.

[00:11:38] So I think maybe that is related here. It's like sometimes the value is just, like, you know, what is the value you're creating, or the future value you're setting up to create. Not, did I work nine straight hours and did I, you know, do the 15 things on my list. 

[00:11:52] Cathy McPhillips: Right. And by the way, 5:30 to 3:30 is 10 hours, so. Yeah.

[00:11:55] Paul Roetzer: It's probably sufficient. But, you know, the mind of an entrepreneur, like there's no [00:12:00] turning off. Yeah. 

[00:12:00] Cathy McPhillips: And you, we had talked about that before. Like you've walked outta your office before and I'm like right next to you and you walk out and you're like, my brain's done. 

[00:12:08] Paul Roetzer: Like Yeah. And you gotta know that, like for sure.

[00:12:11] Yeah. Yeah. I was definitely done yesterday. 

[00:12:13] Question #4

[00:12:13] Cathy McPhillips: This is question number four, 'cause of that two-parter. Yeah. Number four. If AI adoption is a leadership responsibility, not a technical one, what are two to three visible behaviors executives must model to make AI use feel safe, normal, and expected across teams?

[00:12:28] Paul Roetzer: So, you know, I think, I mean there's certainly technical aspects to AI adoption. There's no debating the importance of, like, which platforms you're gonna build on and allow and how they're gonna be integrated into what you do, things like that. I do think this idea of modeling the behavior from an executive level, you know, there's, there's some things that immediately come to mind, like one is having a point of view on this.

[00:12:50] So we've stressed in, you know, the courses I create through AI Academy and some of the talks I've given this idea of like an AI forward CEO memo where you have to have the executive [00:13:00] saying, listen, we think this is really important. We believe it is very important to you if you would like to continue to develop your career path at this organization.

[00:13:08] You have to demonstrate not only an understanding of AI, but a competency with it and, like, this idea of pursuing mastery. So one is just setting the tone and the expectation, that is like the most fundamental behavior from the executive. Then it's being a part of the process. So telling people we're gonna get you Copilot licenses or ChatGPT licenses or Google Gemini and like go do the thing.

[00:13:29] And then, you know, those executives themselves have no idea what they're talking about. They're not using the tools themselves. They have no idea how to build a GPT. So I just think about being engaged in that process. So like we were talking last week, or maybe it might have been yesterday, about like running hackathons internally to like experiment with specific tools and technologies in generative AI and like involving them.

[00:13:50] Well, if I build this process, hey team, we're gonna start running hackathons in January. We're gonna learn how to use different tools. And then I don't show up. [00:14:00] What example is that setting? I think like setting the vision, but then being a part of executing that vision, and then enabling people to like, you know, democratize innovation, I think maybe is the last piece where it's not just me or any leader top down saying, this is how we're gonna use it.

[00:14:18] These are the tools you're allowed and here's the use cases, and don't deviate from this. But it's more like, Hey, we're gonna give you the autonomy to actually figure out how to use these tools in really creative ways. We're gonna put some frameworks in place to help you do that. Like hackathons and council meetings and whatever they are, workshops we're gonna run together.

[00:14:33] But we want you to actually bring to us ideas to do this. So, yeah, I know those are, again, I have not seen these questions. I don't know what the questions are gonna be. It's basically like we're doing this, you know, in the live class itself. So I'm thinking off the top of my head here, but those are some of the things that sort of come to me as a great way to model that behavior and then give people the inspiration and the freedom to also experiment.

[00:14:58] Cathy McPhillips: And this might vary by [00:15:00] company size, but is there value to a CEO saying outright or in some way, I know what our business goals are, but I am not the AI person on our team, I need you all to help me figure this out. Or you. 

[00:15:15] Paul Roetzer: For sure, like I, you know, most of the time the CEO or, you know, the different leaders probably aren't gonna be the most knowledgeable when it comes to these things.

[00:15:22] Like I've said that internally last week to, I think, a few people, like, I wish I had more time to spend on these tools. Like there are certain applications that I see as being tremendously valuable and holding enormous potential for what we can do as an organization. And I just don't have the time to experiment with them the way I wish I could.

[00:15:42] I'm also trying to just be okay with that and realize, okay, my value to the organization isn't to be the expert in all these tools and be building the Gen AI app reviews like Mike and Claire get to do each week. It's like I have to have the vision to do the other thing and to democratize it and I'll learn the best I can in the process, but I'm not gonna be that [00:16:00] person.

[00:16:00] And so I think yeah, from a leadership perspective, when it's not gonna be you that's, like, the expert in these tools, it's putting the people in place that are, or enabling, you know, the different people in your organization to each develop their own expertise. So that's kinda like, even when we think about the Gen AI app reviews that we do as part of AI Academy, it's, we've sort of fallen into this initial niche where Claire is the one doing audio, video, image.

[00:16:21] 'cause that's like more her world. Mike's focusing on productivity and agents. Mm-hmm. And as we start to build that instructor network, we might diversify that and bring in other people who focus on different areas. But that's what we're kind of doing. Like there's things, Claire's creating Gen AI app reviews, where I would love to have an hour to like go experiment with the things she's doing.

[00:16:39] Agreed. But I just accept that, okay, that's, that's not my thing. Like I've, I've got other stuff I gotta worry about. 

[00:16:45] Cathy McPhillips: I told Macy that the other day. She was working on some social and using AI and I said, this should totally be a goal of yours, to do a Gen AI app series or class on what you're learning.

[00:16:55] Paul Roetzer: And I would love that if, like, our whole team down the road, like everybody is creating Gen AI reviews of the things [00:17:00] that they're using for their specific roles. Like that's part of the vision of Academy, is like you diversify, right, the perspectives and the tools and you, you know, tie it to different departments and different roles.

[00:17:08] So yeah, I think that's a, a good, 

[00:17:10] Cathy McPhillips: especially if they're the same tools and we're using 'em in different ways. It doesn't need to just be different tools. 

[00:17:15] Paul Roetzer: Correct. 

[00:17:16] Question #5

[00:17:16] Cathy McPhillips: Okay. Number five. What are the clearest structural signs an organization is talking about AI transformation while actively resisting it? And why do leaders often mistake visibility for progress?

[00:17:27] Paul Roetzer: This is funny. Like the, so if you listen to episode 186, I, I sort of shared this like parody of Copilot adoption. This tweet that somebody shared about like, oh, I, I bought Copilot licenses. We called it AI transformation, and they asked how we were gonna measure the transformation. I was like, dashboards. And what kind of dashboard?

[00:17:44] Like, it just, that's like what it is. Like if you go listen to 186, you'll laugh. It is a very funny tweet. Not my tweet, it was somebody else's. But I do think, like if it's, you know, we, in Scaling AI, the class that we taught that these [00:18:00] questions are based on, we teach this like five-step process of, you know, creating an AI academy.

[00:18:05] We're doing education and training, developing AI councils, generative AI policies, doing impact assessments, having an AI roadmap. And then when you, you know, go beyond there, it's like, okay, you have a center of excellence that's talking about like top use cases. You're providing, you know, workshop, education and training where you're teaching people personalized use cases.

[00:18:21] You're monitoring adoption and usage of the platforms you're buying. Like there's all these other things you could look at. But I do think that the most common misstep we see is, let's just go get some generative AI technology for everybody. We bought licenses and that equals transformation. It's like, no, like that's.

[00:18:38] You're just gonna have like 80% of the people maybe log in once and never do anything with 'em. Or maybe it's like a couple times a month. I think we shared a stat in episode 186, if I'm not mistaken. It was like a Gallup poll that said only 10% of knowledge workers use gen AI daily, I think is where we're at with their numbers.

[00:18:58] And so think about [00:19:00] that, like there's all kinds of perceived AI transformation that isn't reality, right? And that's often what it comes down to is just buy the tech and hope people adopt it without actually going through the change management. 

[00:19:12] Cathy McPhillips: And if you think about AI Academy, you know, we're in the process of making sure that team leaders can look in and see the progress of their team because we don't want them just to buy, we want them to use it.

[00:19:19] So what can we do on our end to make, to work with these team leaders to say, check these on a regular basis, get people in there. How can we help you? And trying to build plans for them to roll this out and operationalize all of it. 

[00:19:31] Paul Roetzer: And that's where my, I don't think you were in the meeting yesterday, Cathy, when I was talking about this.

[00:19:35] It was a, a different meeting. But I'm thinking a lot about this idea of like, you know, we have, we have a lot of companies coming in buying business accounts for AI Academy and in some cases, you know, more than a hundred licenses. We have companies looking at buying thousands of licenses. I think of our AI Academy much like, you know, buying a gen AI platform like Copilot or, or Gemini.

[00:19:58] And it's like, that's [00:20:00] great. Like you took the first step, you got the education and training, but now how are we gonna get people to take it, and not only to take it, to actually transform their careers as a result of it. And then the byproduct of that is transform the organization. Like that is not like one equals the other.

[00:20:15] We don't just get the training and then it happens. So I'm spending a lot of time right now thinking about what can we do as SmarterX to help drive that adoption, value creation, and the change management. So I think there's gonna be some things we're gonna be doing, you know, starting in Q1, where we'll get much more involved in trying to provide guidance for organizations to actually implement the AI education and training and make sure it's making a difference, not just something to check off the list.

[00:20:45] Right. 

[00:20:47] Question #6

[00:20:47] Cathy McPhillips: Okay. Number six. Why do so many organizations default to treating AI as an IT initiative and what breaks when AI isn't owned by the people closest to customers, content, and decision making? [00:21:00] 

[00:21:00] Paul Roetzer: My experience has been that IT is not usually where the vision for growth and accelerated change comes from. Like, it is often more about protecting, reducing risk, making sure security and compliance is adhered to, things like that.

[00:21:18] So they're just not charged in most organizations with driving innovation and growth. And that is what AI to me is, it is an innovation and growth opportunity. So you need to be empowering the leaders of different units and teams with an understanding of what it is, what it's capable of doing now, what it'll be capable of doing six to 12 months from now.

[00:21:41] So that as they're thinking about their talent, technology, and strategy decisions, they're doing it layering in what AI is gonna make possible, and that is not the domain of IT most of the time. So it's just pretty direct, like, if the people that are deciding hiring strategies, [00:22:00] budget allocation, aren't the people empowered to infuse AI, then what are we even doing in business at this point?

[00:22:07] Cathy McPhillips: But IT should be involved in your process. 

[00:22:09] Paul Roetzer: Oh, definitely. I mean, they, they have to be, especially in larger enterprises, but they should not be leading, in my opinion. Correct. Yeah. Yeah. 

[00:22:17] Question #7

[00:22:17] Cathy McPhillips: Okay. Number seven. What is vibe coding? Is it a fad or something with longevity? How do I get started? 

[00:22:23] Paul Roetzer: I mean, vibe coding is, is basically, and again, this is like carried over into like vibe marketing and vibe everything.

[00:22:29] Again, like, I'm gonna give you my understanding, we've talked about this on the podcast a few times. It's basically being able to go into these tools and just start building something. And because these tools, like a ChatGPT or Google, they're able to do the coding, you're able to say, okay, here's what I'm trying to build, and it'll build it.

[00:22:49] And then you're like, okay, let's change it like this. And then you're basically iterating on the code in real time with the generative AI tools to where you can just get in and start kind of just building stuff. [00:23:00] And so it's just based on like vibes, like just the feel you have and kind of different directions you can take it.

[00:23:05] So I don't know if that's like the, I don't think there's a dictionary definition yet, but that's my perception of it. It's really just that idea of getting in and building something, or, you know, building a campaign. Like I'm just gonna hack together a landing page and an email, like I'm just gonna get in and just vibe-code this thing, or vibe-build this thing.

[00:23:22] So I don't, I don't know if it's a trend. I think it's just a term right now that sort of seems to have stuck for the last nine months to describe something. But it basically just means going in and iteratively building something. You could do it with research, strategy, documentation, organizational design, like it's just sort of a term, but most of the time this is referring to building actual applications or software.

[00:23:46] Right. Okay. 

[00:23:47] Question #8

[00:23:47] Cathy McPhillips: Number eight. If you could go back to the very first AI Show episode and correct one major prediction or assumption you had about AI, what would it be and why, and what has changed? Hmm. 

[00:23:58] Paul Roetzer: Well, the very first AI [00:24:00] Show episode was probably an interview with someone, because people might not know this, but it originally started as the Marketing AI Show.

[00:24:07] And my plan with it was to interview, thought leaders in the space, entrepreneurs, people within AI labs, things like that. And, I just never did it because like to actually do interviews is hard. You have to, it's like a whole production thing and you gotta like figure out timing and it became a pain.

[00:24:24] So our podcast was just sort of like hanging out there and it wasn't happening. And so in October, I think 2022, right before ChatGPT came out, I went to Mike, my co-host and our chief content officer, and was like, Hey man. We're just sharing all these links each week. Like what if we just started a show where, you know, we basically just curate the best stuff from each week and we just talk about it.

[00:24:47] And, that was like, I didn't even, 

[00:24:48] Cathy McPhillips: Or 30-ish, I think.

[00:24:50] Paul Roetzer: what's that? 

[00:24:50] Cathy McPhillips: That was episode like 30, maybe 20, 20, 

[00:24:52] Paul Roetzer: 20 maybe. Somewhere around there. And like, we didn't even know, like, what the metric was you monitored to know if a [00:25:00] podcast was working. Like I had no idea how you tracked these things. I didn't know downloads was like the thing that was like, you know, the best KPI to look at.

[00:25:07] We just started doing it just to talk, and so we didn't let all these links each week go to waste. Like there was some point to it. And so I, I don't know that early on I was making many predictions, but like, let's just assume that, you know, I did. And back in 2022, I, I would, I would have a hard time honestly finding something where I was just blatantly wrong.

[00:25:34] And I don't mean that in any kind of arrogant way. Like we've tried to be very objective about what we think was gonna happen. We've tried to be very conservative on timelines with which I thought things would happen. We try to be very thorough in our research so that it is not just me making ideas up and just throwing out crazy predictions.

[00:25:56] So the model we have for the show is very [00:26:00] objective and research driven and follows journalistic methodologies. That being said, the closest I've come to making predictions, I would say at a, at a broad level, would be like the road to AGI, the AI timeline, where I've sort of projected out how agents would emerge and robotics and things like that.

[00:26:22] I've done that two years straight and I, I wouldn't change anything on that timeline yet. So that one is something where I actually feel pretty confident. The thing I will say at a very broad level that I got wrong is back when I started the Marketing AI Institute in 2016. I actually thought by 2020 I wouldn't even need to have AI in the name.

[00:26:46] I thought that by 2020, AI was gonna be everywhere. Everyone would've already adopted it and we wouldn't even need separate AI education and events. We would just have marketing events and business events. I was very wrong on that. And so that [00:27:00] actually changed for me in 2021. I read Genius Makers by Cade Metz, and I realized

[00:27:05] That I had overestimated how quickly adoption would happen, but I'd actually underestimated the total impact it was gonna have on business and society and the economy. And so that's the day I decided to sell my agency and focus on AI exclusively. That was spring of 21. So I would say at a, at a broadest level, the speed at which adoption would happen is the one thing that I feel like I just missed by like a half a decade.

[00:27:30] But since then, I think because we basically try and look like six to 12 months out, and I usually project things based on things I'm hearing and seeing firsthand, yeah, I, I would have a hard time picking and saying, yeah, I was just completely off on, on this, right? I mean, in our book in 2022, nine months before ChatGPT, I wrote a section called What Happens When AGI Can Write Like Humans?

[00:27:52] And we predicted not ChatGPT, but that, like, we were basically on the cusp of something like ChatGPT emerging where it was gonna change everything. [00:28:00] That ended up being pretty, pretty right. 

[00:28:04] Question #9

[00:28:04] Cathy McPhillips: Okay. Number nine, listeners have asked you questions about careers, ethics, strategies and tools. What is one listener question that has fundamentally changed how you think about AI, rather than you changing how they feel about it?

[00:28:19] Paul Roetzer: Oh man. So I actually have a, a pretty good answer for this one. I don't remember what year it was, but I think we were at Content Marketing World, and I know I was like catching a flight to Boston that night and I was sitting at a networking luncheon and someone said to me, what are you most excited about with AI?

[00:28:41] And I froze. Like, I literally just stared at the person and I was like, I, I actually don't know. Because there was so many things building up at that time, kind of like today where I could just sit down and list for you the hundred things that I was worried about. But what I was excited about was very difficult in that moment to [00:29:00] say, and I did not have a good answer.

[00:29:02] And so on the flight that night, I actually forced myself to write what are the things that I am excited about? And so that ended up becoming a key part of my keynote for MAICON that year. I actually featured those things on the, the final slide of that presentation. And so it was things like, you know, the, a golden age of entrepreneurship, that I thought we were just heading for this thing where, you know, anybody could build anything.

[00:29:29] Like the walls were coming down to start businesses. I thought like a renaissance in creativity, of creatives who could work with the AI, was gonna be amazing. I thought that, you know, basically the whole premise of my keynote and what became like the final piece was like we could create more time, more time for the things we cared about, family, friends, working on fulfilling projects.

[00:29:50] And that was why I was pursuing AI, to create more time in my life. And, and so yeah, that was the one that stuck with me that [00:30:00] honestly like changed the trajectory of how I even talked about things on the podcast, and probably some of the trajectory of how we thought about things as an organization and like what my points of view on it were, and my own personal need to wake up each day and find the positives in it, because I was getting overwhelmed by the downsides of it.

[00:30:20] Cathy McPhillips: Yeah. Do you remember who that person was? Like? I don't, 

[00:30:22] Paul Roetzer: I'd never met the, I'd never met the person before. It was, it was a lady, I remember that. And, yeah, I, it was just a, it was a totally random, I wasn't even talking to her. I was just sitting there eating and I think she knew who I was. Maybe listen to the podcast or something.

[00:30:37] And she just asked me and I just froze, like, and here we 

[00:30:42] Cathy McPhillips: are. 

[00:30:42] Paul Roetzer: Yeah. Weird. 

[00:30:43] Question #10

[00:30:43] Cathy McPhillips: All right. Let's see. Number 10, what has been the most personally challenging part of leading conversations about AI's impact on jobs, identity, and the future? And how do you personally manage that weight? 

[00:30:58] Paul Roetzer: [00:31:00] I mean, my first instinct was to say, people don't believe me, but that doesn't actually bother me.

[00:31:04] I, I've been used to that, like I've spent a lot of my career with people thinking I was wrong about things that I had high conviction in. And then, you know, eventually it became apparent that, you know, I was probably more right than wrong on some of these topics. So jobs is the one that I very consciously avoided talking about for a while on the podcast.

[00:31:22] I don't remember what episode it was, but I remember standing in the kitchen talking to my wife and I was like, I think I have to, like, I have to just say what's on my mind. And then I told Mike, I was like, listen, on the next episode, I'm just gonna like lay it out. Like I think we're, we're in for a world of hurt.

[00:31:34] No one's talking about it. Like, we gotta start this discussion on the podcast. And then I just said it, like, and I kind of broke down why I thought it was gonna happen and the conversations I was having that, like, led me to believe that. So I think you have to be willing to be perceived as wrong for a really long time.

[00:31:51] But anyone who's been an entrepreneur who's taken like huge risks to innovate in any area knows that feeling. And honestly, like [00:32:00] I've, I think I'm more comfortable in that environment. Like if I'm at a point where everyone agrees with me, then I feel like I'm not challenging status quo enough and that I'm actually falling into, like.

[00:32:14] I guess more standard belief systems. And my experience in my career has usually been that if I'm on the frontier of thinking about things differently, that usually leads to positive things. So I, I get uncomfortable when everything is like, everyone's like, yeah, yeah, yeah, no, I totally agree. Jobs are going away.

[00:32:32] Like, it's great. So yeah, I don't, I don't, I think the challenging part is just being willing to be perceived as being wrong and being okay with being the one that's willing to get out there and say it before it's consensus. Personally managing the weight, like the weight for me is more the things I have high conviction in, like this disruption I've seen coming for jobs for years.

[00:32:58] When I first realized the [00:33:00] impact this was gonna have on, like, artists, like my wife, and writers, like you and me, Cathy, like when I knew that years before the, the masses came to know that stuff. Like I'm talking back in like 2012 and 13 when I sort of played this out and saw what was gonna happen. I would say it's the weight of knowing the personal impact this is gonna have on other people, whether that is job loss or things they find fulfilling in their life that I know are gonna change.

[00:33:30] And it's hard to be the one that, like, knows that and you don't even know how to say that to people. So I would say it's more that for me, and then I think the fact that I, I have a pretty decent understanding of how this probably goes wrong in society, and if I sit down and allow my brain to go in that direction, I can tell you a lot of the pain and problems that are gonna occur. That sucks.

[00:33:56] Like I, I don't like that feeling. I don't like [00:34:00] looking out and kind of, having a decent sense of how this is gonna play out in bad ways while everybody else is sort of blissfully unaware. And I get asked a lot of times about this sort of stuff and I often actually, like, I just hold back on that part of it because I don't, I don't think people are really ready for it.

[00:34:20] Like a lot of times, especially with family and friends, it's like you just have to, you have to be conscious of where people are at with their understanding of this stuff and how much you, you can actually put on them at once. 

[00:34:33] Cathy McPhillips: I think there's things like, mm-hmm, the AI Literacy Project, the free classes, the Responsible AI Manifesto, talking to people, going to schools, doing all these things.

[00:34:42] That's like, you also have this responsibility to be pushing the right side of this forward in the world. 

[00:34:49] Paul Roetzer: And I think part of that comes with how we position this. So I, I have always had this sense of urgency without creating fear. I, I guess it's sort of like a guiding principle we have at SmarterX, and that is why [00:35:00] the literacy project exists and why we do invest all the time doing all these free things, whether it's a podcast or the class or the talks at schools.

[00:35:08] I feel like the way I manage the weight of the negative stuff that, you know, is current and future around AI is we just go do something every day to make a difference. And if we were just sitting still and not doing that, then I would probably be losing my mind right now. But I do feel like what we do matters in some small way, and I feel a sense of urgency to do more of it because I think it makes a difference and it empowers other people to go out and make a positive impact.

[00:35:37] And I think we still have a, a say in this, like, I don't think it's inevitable that it has all these negative outcomes, but we gotta just, we gotta do more as a community. 

[00:35:48] Question #11

[00:35:48] Cathy McPhillips: Okay. Number 11, you've talked a lot about AI roadmaps and scaling AI inside organizations. Looking back, where do you think most companies actually over-invested in AI, and where did they [00:36:00] quietly under-invest in ways that will matter long term?

[00:36:03] Paul Roetzer: They over-invested in buying gen AI platforms for all their employees before they taught them how to use them or gave them use cases for 'em. So like that's the example I referenced, sort of the parody tweet, but like, we spent 1.4 million on Copilot licenses and no one's using 'em. And like that's the overinvestment, is they thought the answer was just go buy generative AI tools.

[00:36:23] But then they didn't invest in training people and doing change management. So that has been the problem since early 2023, when people were racing to go get these tools and they have not invested in the people and, you know, what's required. And people are complicated and there's logical and illogical reasons why they don't wanna be a part of AI.

[00:36:44] And if you just ignore that as an organization, you're gonna fail when it comes to this stuff. So yeah, over-invested in the tech too early while under-investing in the people and the change management, I guess, would be my answer. 

[00:36:56] Cathy McPhillips: So talking about people, you know, we're looking at hiring some people, some people are right now, [00:37:00] I know, looking for work, and companies are, or might be, looking for people.

[00:37:02] Like what are some things you would say to folks who are in that position? Like what could they be doing right now to get ahead of this a little bit, or a lot? 

[00:37:13] Paul Roetzer: I mean, it sounds like a broken record, but you just have to invest in AI education and training for yourself. Like you have to level up your understanding of the technology, what it's capable of.

[00:37:23] You know, again, we offer a ton of free stuff. Go to the intro class, come to the scaling class, go back and listen to the last 10 episodes of the podcast. You'll get a real good sense of just sort of where we are, state of AI, what are the important things going on right now. You know, if you can afford it, take, take some of our paid courses and get certificates.

[00:37:40] Go take free classes from Google and OpenAI and others who offer certificates in their platforms. Like, you can go get educated in AI really fast, and then the competency is use it every day in your personal life. Like, if you're not using it in business and maybe you're not provided a license, go, you know, plan a trip.

[00:37:58] Plan your holiday menu. Like do what? [00:38:00] I don't know, like just experiment with the tech, use it, build a workout plan. You know, figure out how to coach your kids in math. Like just use the tools and get used to 'em and then layer that over your capabilities. I mean, the jobs report came out today. It sucks.

[00:38:13] Like it is not good. We are, we are in a, a not-great economy right now that is being bolstered by the investment in AI infrastructure. If you stripped AI infrastructure out of the economy right now, we'd be in a recession. There's a lot of people already losing their jobs and it's not being directly tied to AI.

[00:38:32] In a lot of cases, that's going to change. Like there's going to be way more job loss in Q1 of next year than probably the remainder of the year, and it's going to be directly tied to AI. If you're talented at what you do, layer AI capabilities over it and go find companies that are trying to be AI-native, AI-emergent, that value those skill sets.

[00:38:54] Because I, I don't know, like I, I, I don't know how else to do this. Like, you have to just be [00:39:00] willing to be, the person on your team or the person in your industry who is racing ahead to try and figure all this out. And I think that's the best chance you have to thrive through a lot of uncertainty. It is not gonna be an easy period for jobs.

[00:39:15] I wish that weren't true, but I mean, we can see it just through SmarterX, like we hear from people every week. Cathy, you probably hear from more people than I do. Really talented people who are on the market or, or know they're gonna be on the market right, in the not too distant future. And they're trying to be proactive about it.

[00:39:31] So that, that's kind of the, I, I wish I could make it better for everybody, but all, all I can do is keep growing our company and trying to create job opportunities here, that's that, like, new age of entrepreneurship, just build something, and trying to educate organizations to, like, level up their people so they can drive innovation and growth and not remain stagnant in the economy.

[00:39:53] Question #12

[00:39:53] Cathy McPhillips: Okay. Number 12, if you are starting a brand new company tomorrow that had to be AI-native from day one, what is one [00:40:00] thing you would refuse to automate, no matter how good the tech gets, and why? 

[00:40:05] Paul Roetzer: Vision and strategy. Like, as a leader, I use ChatGPT and Google Gemini all day. And I say both of them because I, I literally use both of 'em.

[00:40:16] Like if it's a really important strategy, I will talk to both AI assistants about it, and I'll compare the outputs and things like that. But the vision for where we go, the goals we set as an organization, the strategies of how we get there, the people we're gonna hire. Every element of that is AI assisted right now for our company.

[00:40:36] And I would think of us largely as an AI-native, you know, events and education company, at SmarterX. I don't, I don't turn any of it over to the AI assistant though. It's just my thought partner. It's my thought partner for finance decisions, legal decisions, HR decisions, business strategy decisions.

[00:40:55] Like I talk to it about all of it and it's because it just gives me somebody [00:41:00] 24/7 that I can bounce things off of. And like a lot of times, like really important decisions I've made. I've been in business, you know, 25 years now. I've owned my own companies for 20 of those 25 years. And a lot of the most important decisions I made.

[00:41:16] I was probably just sitting there talking with my wife out loud about things and she just is a great listener. And every once in a while she'd ask like really insightful questions, but normally it's just the process of saying out loud, the thing I'm trying to decide or like the direction I'm trying to go.

[00:41:31] That enables me to kind of make an educated decision. And so AI assistants function in that way. I actually, I was telling Cathy and the team this morning, on my drive in this morning, I was talking to my dad, like sometimes I just call him on the way into work, and we were having a meeting this morning on an important business decision we had to make.

[00:41:46] And so I was telling my dad, I was like, yeah, I'm just like, I'm trying to decide this thing, and, like, part of that is just like keeping my dad in the loop of what's going on in my life. The other part is like I realize I'm just trying to think out loud. Like I'm just trying to get this stuff outta my head, right?

[00:41:59] And then [00:42:00] going through that process, I sometimes just, like, oh, okay. So that's what AI assistants are. But often what I'll do, like I did this this morning before that meeting, I was working through another even bigger decision I have to make in the next 30 days. And I said at the end, like, ask me, you know, here's the basic premise, here's the context, here's what I'm trying to decide.

[00:42:15] I want you to make a recommendation. But if you don't know all the context, like ask me one question at a time, like as we go. And Geoff Woods talked about this idea of, like, gimme one question at a time. And so I did, I had like this 30-minute conversation with AI, so that now when I go meet with my attorney in like 48 hours, I have way more context and understanding and I actually have a point of view on the decision that has to be made, that I wouldn't have had if I didn't talk to my AI assistant.

[00:42:40] So, I dunno. I think at the end of the day, the human still has to make the decision, especially when it's related to the vision or strategy of an organization or a team. But the best leaders are going to infuse AI to help them make more informed decisions. 

[00:42:55] Cathy McPhillips: Shout out to your dad, number one. Yeah, who I'm sure will be listening to this episode.

[00:43:00] Paul Roetzer: He's like our number one, you know, fan of the podcast. So. 

[00:43:04] Question #13

[00:43:04] Cathy McPhillips: Number 13, you posted on LinkedIn last week, and in your exec AI newsletter, your favorite podcasts you listen to for AI news. Hmm. Aside from obviously liking the show and guests and gleaning value from them, which might be enough, what is your measure for adding a podcast or other medium to your trusted sources?

[00:43:22] Paul Roetzer: So the ones that get added, the ones I add to that list, it's usually 'cause they had a phenomenal guest. So, that's almost exclusively probably how it happens. Like some of the best ones, like 20VC. I love that podcast. I, I had no idea. Like, I, I didn't follow them. I didn't know that.

[00:43:42] Dwarkesh, like, just kills it. He has amazing guests. Lenny's podcast. Never heard of it until like, three months ago. And it was like, I forget who he interviewed that I, I, it was just a phenomenal interview, so like, boom, you know, subscribe, like 

[00:43:56] Cathy McPhillips: That's a great podcast. 

[00:43:57] Paul Roetzer: Yeah. 80,000 Hours. Like there, there's [00:44:00] people that have access to people that I, I don't, they have inside access to other people in labs and different entrepreneurs, or they're in the VC world so that, you know, it's companies they fund, things like that.

[00:44:09] So usually it's through Twitter, X, is where I capture the vast majority of information that, you know, informs my perspective on AI, by following, you know, there's about 300 or so accounts that I get alerts from, and that's usually, it's like if it's important enough, someone in that group of 300 is tweeting about it.

[00:44:27] So if I see a clip from a great interview, it's like, where's that from? And I'll go find the YouTube clip or the podcast clip and I'll add it to my list. So I would say nine times outta 10 when a new podcast finds its way, it's because I saw a clip of a, of a great segment on, on Twitter. I do not ever go into podcasts and search for like new AI podcasts or anything.

[00:44:49] I've never done discovery through the podcast network itself. It's always through clips that I see. 

[00:44:54] Cathy McPhillips: Okay. So you posted that on LinkedIn last week? I think 

[00:44:57] Paul Roetzer: I did. And then in our executive newsletter I included the links to [00:45:00] all 18 of 'em. Yeah. Yep. 

[00:45:01] Cathy McPhillips: Okay. Last question, 

[00:45:03] Question #14

[00:45:03] Cathy McPhillips: Number 14. We talked this morning in an internal meeting about simplification.

[00:45:08] How can listeners think about simplifying how they're thinking about, piloting, and scaling AI? 

[00:45:14] Paul Roetzer: So the guidance I usually give on this one is, don't overthink this and get overwhelmed by like the hundreds or thousands of tools and all the new funding each week and all the new models and like, it's overwhelming.

[00:45:27] Like I live this stuff every day and there's definitely some days where I'm like, oh my God, I don't even wanna look at my Twitter feed today. Like, there's just, I don't want something else new today. So what I generally guide people to is, depending on what license you have, either personally or professionally.

[00:45:42] Just get really, really good at that AI assistant. Like you, you can't go wrong with Anthropic Claude, Google Gemini, ChatGPT, you know, Microsoft Copilot. If you just learn the capabilities of those tools, image generation, [00:46:00] video generation, the reasoning capabilities, doing deep research, talking to it like an assistant, asking it great questions, 

[00:46:06] if you just learn how to do those things, you're gonna be ahead of like 95% of users of these platforms and generative AI technology overall. Don't get lost thinking you've fallen behind everybody and everyone else has this figured out. They don't. You may live in a bubble where you're hearing everyone talking like they've got this all figured out.

[00:46:26] That is not the norm. Again, go back to the Gallup poll we talked about this week, like 10% of corporate workers or knowledge workers overall use AI daily. There's a massive opportunity to be a, a power user and still be an early adopter and innovator, but just focus on using one of those AI assistant platforms to the, the fullest extent, and you, you will create enormous value for yourself and your company.

[00:46:53] Cathy McPhillips: Amazing. All right. That is the end of our questions for today, 

[00:46:57] Paul Roetzer: and our second-to-last episode of [00:47:00] 2025. Right? This will come out on Thursday, so we're recording this on Tuesday, December 16th. This will drop on December 18th. Yeah. Then we will have our final weekly episode, which will be on December 22nd, and then we will be back January 6th, which would be the next podcast episode.

[00:47:16] Yeah, sounds right. 

[00:47:18] Cathy McPhillips: That sounds right to me. 

[00:47:19] Paul Roetzer: I totally forgot we were doing this one, I think. I think when I did the closing to episode 186, I was like, all right, we'll talk to you next week. That'll be the last episode. And I looked at my calendar today and I was like, oh, right. We have AI Answers still.

[00:47:33] Well, thank you. This is a great one. Yeah, thanks Cathy, for, yeah, I think we've been doing this for almost a year. I don't know, there's 11 of 'em. Maybe it's been six months. I don't know. This whole year is a blur. 

[00:47:42] Cathy McPhillips: Well, but thank you to our listeners. Math would say it's been almost a year. 

[00:47:45] Paul Roetzer: All right. Well, hopefully this is really helpful for everyone.

[00:47:48] We are definitely gonna keep doing this series in 2026, so stay tuned for not only the weekly episodes coming on Tuesdays, but AI Answers. Then we got some other new exciting things. We're gonna do an AI transformation series, I dunno if I've [00:48:00] shared that before, but we are working on like a new podcast series that's gonna be interviewing people who are going through and leading transformations within their organizations and teams.

[00:48:08] So we're really excited about that series that'll launch in early Q1 next year. So lots more coming on the podcast. Again, we appreciate everyone being a part of it and we will talk to you again soon. Thanks, Cathy. Thank you. Thanks for listening to AI Answers. To keep learning, visit SmarterX.ai, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey.

[00:48:33] And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.