There is no shortcut for AI verification, and we think that's a good thing.
In this AI Answers episode, Paul Roetzer and Cathy McPhillips dig into the questions business leaders are asking again and again about AI. The conversation moves from structured prompting and the real value of custom GPTs into more serious issues: how unverified AI output is creating tangible organizational risk, where AI agent-building tools truly stand right now, and why early adopters are especially vulnerable to burnout.
Listen or watch below, and scroll down for the show notes and transcript.
Over the last few years, our free Intro to AI and Scaling AI classes have welcomed more than 40,000 professionals, sparking hundreds of real-world, tough, and practical questions from marketers, leaders, and learners alike.
AI Answers is a biweekly bonus series that curates and answers real questions from attendees of our live events. Each episode focuses on the key concerns, challenges, and curiosities facing professionals and teams trying to understand and apply AI in their organizations.
In this episode, we address 15 of the top questions from our February 10th Intro to AI class AND our Marketing Talent AI Impact Report Webinar, covering everything from tooling decisions to team training to long-term strategy. Paul answers each question in real time—unscripted and unfiltered—just like we do live.
Whether you're just getting started or scaling fast, these are answers that can benefit you and your team.
00:00:00 — Intro
00:07:00 — Question #1: Do you need to prompt AI the same way every time?
00:10:59 — Question #2: What problem do custom GPTs actually solve?
00:14:26 — Question #3: Are SaaS providers becoming model agnostic?
00:17:09 — Question #4: Why AI voice and tone change when models update.
00:20:36 — Question #5: AI output validation: why there's no shortcut for verification.
00:23:17 — Question #6: Tools for building AI agents: where to start.
00:26:11 — Question #7: Will knowledge workers face the same AI disruption as developers?
00:29:53 — Question #8: AI burnout: how leaders can prevent it during the AI transition.
00:36:21 — Question #9: Which roles and skills are most at risk from AI?
00:42:03 — Question #10: Traditional BI platforms vs. AI-first reporting systems.
00:45:22 — Question #11: Build vs. buy: AI decision framework for business leaders.
00:48:52 — Question #12: Competitive advantage for AI-forward agencies.
00:52:43 — Question #13: How to tell when someone just copy-pasted from ChatGPT.
00:54:39 — Question #14: Ads in AI platforms: what business users should know.
00:56:42 — Question #15: The one AI superpower every business leader needs.
This episode is brought to you by Google Cloud:
Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow. Google Cloud offers a powerful, fully integrated and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner.
Learn more about Google Cloud here: https://cloud.google.com/
This episode is also brought to you by AI Academy by SmarterX.
AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. Learn more here.
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: There is no shortcut, nor do I think there should be a shortcut, for verification. I think that we should slow down and publish good quality stuff that we have verified and that humans have signed off on. And I think that's good. Like otherwise, we would all just pump out AI slop.
[00:00:19] Cathy McPhillips: Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show.
[00:00:24] Paul Roetzer: I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast moving world of AI, but we never have enough time to get to all of them.
[00:00:42] So we created the AI Answers Series to address more of these questions and share real time insights into the topics and challenges professionals like you are facing, whether you're just starting your AI journey or already putting it to work in your organization. These are the practical insights, use cases,
[00:00:59] Cathy McPhillips: [00:01:00] and strategies you need to grow smarter.
[00:01:02] Let's explore AI together.
[00:01:09] Paul Roetzer: Welcome to episode 199 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host today, Cathy McPhillips, Chief Marketing Officer at SmarterX. Hello, Cathy.
[00:01:18] Cathy McPhillips: Hello, Paul.
[00:01:19] Paul Roetzer: I feel like we haven't seen each other in a while, like we're, did I see you yet this week?
[00:01:23] Cathy McPhillips: You are, you were vacationing.
[00:01:24] Paul Roetzer: Well, I was sort of, I mean, I was vacationing for two of the days, that's true, in Miami. I wouldn't say that was a vacation. But I haven't been in the office in... I was here yesterday. I think I'm losing track. So we're recording this on Tuesday, February 24th, and this will drop on Thursday, February 26th.
[00:01:45] This is episode 199. You're not in the wrong place. Like if you're used to seeing or hearing Mike with me, Mike and I continue to do the weekly episodes. This is a special edition of the Artificial Intelligence Show. We are in our 14th [00:02:00] episode of this series, Cathy, AI Answers. Yeah. So this is a series we do in partnership with Google Cloud.
[00:02:06] So it's presented by Google Cloud. This is a series based on questions from our monthly Intro to AI and Scaling AI classes. So if you've never been to one of those, every month I teach a free intro class, and then I teach a free Scaling AI class. And Cathy is my co-host with both of those. We have been doing the intro class since 2021.
[00:02:27] So we're coming up on the next one, which will be number 56. Cathy, does that sound right? Yes. Yeah. So we've had over 50,000 people go through the Intro to AI class. And then Scaling AI, we are going on our 14th-ish. 15?
[00:02:42] Cathy McPhillips: Yep.
[00:02:42] Paul Roetzer: Okay. So we've been doing that one for about a year and a half. And so these are, you know, part of our way to sort of accelerate AI literacy and make it as accessible as possible to everyone.
[00:02:53] And so when we have these classes, the intro in particular will get, you know, somewhere between 1,200 and 1,500 registrants each time. [00:03:00] And so we will get dozens of questions. And so a while back we thought, well, let's start taking some of those questions that we don't get to, 'cause we try and allow like 20 minutes or so at the end of each class for Q&A.
[00:03:10] We'll normally get to somewhere between seven and 10 questions in that setting. But then there's dozens of unanswered questions. And so the goal of this series is to answer as many of those questions as we can. So this one in particular is a follow-up to our Intro to AI class that we held on February 10th.
[00:03:27] Because I've been traveling, we've been trying to nail down a date to do this one. So we are doing follow-up questions from our Intro to AI class. So that's just helpful context when you hear the types of questions we're getting. It helps frame, you know, where they're coming from. So again, special thanks to Google Cloud for sponsoring this series as part of our AI literacy project.
[00:03:47] In addition to sponsoring this AI Answers podcast series, Google Cloud is also our partner for the monthly Intro to AI and Five Essential Steps to Scaling AI classes, and a collection of AI Blueprints, three of which we launched this [00:04:00] week. So as you're listening to this on Thursday, we will be wrapping up our AI for Departments Week with Google Cloud.
[00:04:05] AI for Marketing came out on Tuesday, AI for Sales came out on Wednesday, and then AI for Customer Success drops on Thursday when this episode drops. So lots of good stuff going on. There's both a webinar and then free ungated access to the AI Blueprints. And you can learn more about Google Cloud at cloud.google.com.
[00:04:28] And then this AI Answers episode is also brought to us by AI Academy by SmarterX. AI Academy helps individuals and businesses accelerate their AI literacy and transformation through personalized learning journeys and an AI-powered learning platform. Right now we have 12 professional certificate course series available on demand, with more being added each month.
[00:04:49] We just released our newest course series, AI for Financial Services, taught by our director of research, Taylor Rady. It covers real world applications of AI across banking, insurance, wealth [00:05:00] management, and more, so you can start applying AI strategically in your organization today. So you can go check it out.
[00:05:05] We have course series and certificates by industry and by department. We have a foundation series that has Piloting AI, Scaling AI, and AI Fundamentals. So there's tons to learn. Plus Gen AI app reviews dropping every Friday, and then AI Academy live sessions each month. So go to academy.smarterx.ai to learn more.
[00:05:27] There are individual and business account memberships available at academy.smarterx.ai. And I'm gonna turn it over to Cathy. She can give us a little background on how these AI Answers sessions work, and then we'll just dive right into the questions. And as you know, we'll kind of explain, like...
I actually haven't seen these questions. So we do this just like we do in the live class. I don't see the questions in advance. Cathy gives me a doc. I don't ever look at it before. No offense, Cathy, but I usually don't have time to look at this doc before we get going. So these 15 questions are raw to me, which means I [00:06:00] might not have great answers to some of them.
[00:06:02] And I'll be honest with you if I don't have a great answer, and hopefully I'll give you some guidance of maybe how to go find a great answer for it.
[00:06:08] Cathy McPhillips: Okay. So we take the questions from our intro or scaling classes. Sometimes we take them from some of our virtual summits or other things, and we will synthesize them.
[00:06:17] Claire on our team helps. She's built an amazing GPT, because the more we do these, the more complicated it gets, because we do get repeat questions. So we don't wanna make these podcast episodes redundant to questions that have already been asked. So there's a whole process we go through, you know, where we synthesize the questions and put them in an order where it flows.
[00:06:35] We run them through to see if the questions have been asked before, and if they have but they are really good questions, we'll either ask again, because maybe your answer's gonna be different than it was, what, 12 months ago. Or we tweak it a little bit...
[00:06:47] Paul Roetzer: Or two weeks ago sometimes, based on how fast these models are advancing.
[00:06:50] Cathy McPhillips: And then we just tweak it so it hits a different audience or does something different. So here are the 15 questions that we landed on. So we're gonna get started. All [00:07:00] right.
[00:07:00] Cathy McPhillips: Number one: I'm a beginner in AI. When using structured prompting, do you need to do it every time you interact with an AI platform, or can the system carry that structure forward from prior conversations?
[00:07:11] Paul Roetzer: So, I don't know that there's, like, a perfect answer to this question. I think that prompting to me is this ongoing experimentation. And this is why you would build a GPT or a Google Gem or, you know, a Claude Project: if you want it to follow the same instructions each time, then you write those system instructions once and you train that specific GPT or Gem or Project to behave and output in a specific format,
[00:07:38] referencing a specific knowledge base. So if you want that level of performance and consistency, that's an argument to create a GPT or a Gem or a Project. For me personally with prompting, I do experiment quite a bit, where sometimes when I'm working on a new project, I will give it a very detailed prompt with context and examples.
And here's the role [00:08:00] I want you to fill, and I want you to first ask me questions. Like, I'll give a very detailed prompt. And then a lot of times I will actually give a very basic prompt just to see, because sometimes what you find is if we put too many guardrails on the AI, it actually isn't as creative and strategic.
[00:08:19] And so what I will do a lot of times, especially when I'm working on a high profile strategy project, like a high value strategy project, is I will actually experiment with different models. And so, let's say, Gemini I might give a very detailed, comprehensive prompt to, ChatGPT
[00:08:35] I might give like the same prompt, and then Claude I will give a very basic prompt. And what I'll do is then I'll actually look at how the different models behave. And so I don't think that there's any one right answer to how to exactly prompt in every case moving forward. I think it's constantly evolving as the models get smarter and more generally capable.
[00:08:58] And what I'm finding is [00:09:00] context is always helpful, but more and more it's kind of good to experiment with these models and just see what they are capable of without you telling them too much detail about what you want from them. Oftentimes you get surprised. So I'll often go to a project and say, listen, here's what I'm trying to do.
[00:09:17] What do you think I should include in it? So before I go tell it what to do, I'll ask it for its guidance, and then I'll actually build from there. So I would think about it as like an iterative process, experiment with both, you know, comprehensive prompts and simple prompts. And just keep experimenting.
Don't assume that, once you lock something in, all the models are gonna work the same way.
[00:09:36] Cathy McPhillips: Yeah. And are prompt libraries even a thing anymore? Like, do people really use them? If you put something in there and a week later, is it going to be something different? Or is there value in still keeping a library of things that have worked really well for you?
[00:09:49] Paul Roetzer: I do think that there's value and mainly it's because a lot of people have like the blank page syndrome when they go to work with these things. So for people who aren't used to having it integrated into their [00:10:00] daily workflows, they're not, you know, constantly thinking before they start something, can AI help me with this thing?
[00:10:06] The sample prompts to me are very helpful as a starting point for people. It helps them actually realize the different ways they could be using it. So yeah, depending on what the prompt library is being used for, it's a good go-to. And then even, you know, for personal use, just to go back and remember some of the prompts you've used before.
[00:10:23] I keep a journal whenever I'm working with, like, higher value projects, and I'll actually go back and reference the prompt I used. Because what I like to do is take that same prompt and put it into new generations of the models. So I have one project in particular I've been working on for like six months.
[00:10:38] And so I have in my journal, which is just in a Google Doc: here was 2.5 Pro from Gemini, this was the output I got with that prompt. Well, when 3 comes out, or 3.1 Pro, I'll take that same prompt and I'll actually drop it back in, and I'll start the project again, or see if anything has really changed with the model based on that.
[00:10:57] Cathy McPhillips: Sure. Okay.
[00:10:59] Cathy McPhillips: Number two: you kind of alluded to this a little bit, but I'm trying to understand the practical value of custom GPTs. If ChatGPT can already answer questions directly, what problem does a custom GPT actually solve?
[00:11:10] Paul Roetzer: Yeah, it's consistency of the output. So if you're doing something over and over again, then you don't have to provide that context every time.
[00:11:18] So, an example would be, you know, I shared how when we were creating AI Academy, like the new version of AI Academy that launched in 2025, I had to create 20 courses: three certificate series, but totaling 20 different courses. And so I built an AI learning assistant I called ALA, which actually was a Google Gem, which functions like a custom GPT does.
[00:11:41] And it was trained on our instructional design principles for academy. It was trained on our roadmap for academy. I explicitly told her like, here are the things I want you to function as. Like, I don't want you to create the courses. I want you to actually help assess them. I want you to help me build outlines.
[00:11:56] I want you to write abstracts for the courses once I've completed them. Like, I'm gonna give [00:12:00] you the PowerPoint, or in our case Apple Keynote, and then I want you to actually write an abstract based on that. And so I told it to do this specific thing. And then over like a two month period while I was creating all these courses, every time I needed something, I didn't have to go back in and say, here's what AI Academy is, here's what I'm trying to do.
[00:12:18] It was already pre-trained on that. And so that's a use case. Another example would be JobsGPT, which is a custom GPT that I built, I don't know, like a year and a half, two years ago, that helps assess the impact of these AI models on jobs at a task level. And then it actually provides a rationale as to how you could be using AI to do these different tasks differently.
[00:12:39] So you could put your job title in and it'll actually assess your job title and the tasks you do based on its pre-training, basically with the knowledge base I gave it. So the argument for creating a GPT there is because I shared it and made it free for anybody to use. And so now, I don't know, last time I looked, over 30,000 conversations had happened in [00:13:00] JobsGPT, and that was probably six months ago,
[00:13:01] because I just created this thing that was specifically designed to do a very specific task, and then I made it available for everybody. I could just take a prompt and say, like, okay, here's a prompt, go do it. But it's way harder to do that. So we created a thing, productized it, put it on our site, like, hey, go click here.
[00:13:20] You can go play with this thing and see what it does for you. So those are a couple of cases, consistency of projects when you're gonna, you know, want to use the same kind of prompt over and over with the same context. And then when you want to create something that you can share out that helps people do something.
[00:13:34] I also do this, like, I just ran a workshop yesterday where I used ProblemsGPT, which helps people identify problem statements and value statements where AI can potentially help solve something. And so I actually use ProblemsGPT in workshops to help people brainstorm faster. They can just go in and say, all right, here's my job, here's what I'm struggling with.
[00:13:52] It'll help 'em write a problem statement, it'll create a value statement, and then it'll actually build a strategic brief that I trained it on, to help them solve that [00:14:00] problem. And all of that is really hard to do with just, like, prompt libraries, basically.
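Editor's note: the "project brief" framing Paul uses for system instructions can be made concrete. As a purely illustrative sketch (the wording, assistant name, and file names below are invented for illustration, not SmarterX's actual instructions), the system instructions for a course-assistant Gem or custom GPT might look like:

```
Role: You are ALA, an AI learning assistant for an online course catalog.
Do not create courses. Your jobs are to:
1. Assess draft course outlines against the attached instructional
   design principles and flag gaps.
2. Help build outlines when asked, using the attached roadmap.
3. Write a 100-150 word abstract for any completed deck I upload.
Tone: direct and practical, no marketing language.
Knowledge base: instructional-design-principles.pdf, academy-roadmap.pdf
```

Because these instructions are saved once with the GPT or Gem, every later conversation starts with this context already in place, which is the consistency benefit described above.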
[00:14:05] Cathy McPhillips: Sure. Another use case is that I have my PaulGPT that I use every day, and I also have a CathyGPT that I have shared both of them with members of our team.
[00:14:16] So a lot of that knowledge base, you know, they have access to that GPT and they're using it in their day-to-day. So sharing it among the team is a great reason to have a custom GPT.
[00:14:26] Paul Roetzer: Yep.
[00:14:26] Cathy McPhillips: Number three: with the constant evolution of AI models, are SaaS providers becoming more model agnostic, including bring-your-own-model offerings?
[00:14:35] Paul Roetzer: So, the way the software providers work, just to provide a little context, because people aren't clear exactly how this works: there's like five companies, well, in the United States (there's other companies internationally), but in the US there's like five companies that are actually building the models.
[00:14:52] So, you know, Anthropic, OpenAI, Meta, xAI, Microsoft to a degree. [00:15:00] Did I miss somebody? Google. Microsoft, OpenAI, Anthropic, xAI, Meta, Google. Yeah. Okay. So maybe six. They build the models. They're the ones spending the billions this year, hundreds of billions, to build data centers, build energy infrastructure, train the models, and then make those models available to other companies to use.
[00:15:22] So your company might contract directly with one of the model companies and get direct access to it. Like maybe you use Gemini, you know, outta the box: you just go to the Gemini app and you do what you do, or build it into Workspace. The alternative is software companies like HubSpot, Workday, Box, Salesforce.
They don't build their own models. They pay those AI labs, the model companies, to access the models, almost like a utility. They're just basically paying for tokens to use the technology. And so if you go [00:16:00] into, like, a HubSpot and you're interacting with an AI within HubSpot that's helping you build a landing page, or it's giving you insights into your data, that is not HubSpot's model.
[00:16:09] They are likely, well, they were originally, I think, using OpenAI's models. They probably use a symphony of models now, and they're probably trying to use the lowest cost version of a model. So maybe they have a model selector that's like, okay, someone wants to write an email, let's just use the cheapest version of Claude, because we can charge the customer more money and make a bigger margin on it.
[00:16:29] So what's happening is the software companies build these models into their software. They don't actually build the model. And then when you use it within their software, you're actually using a third party model, but you're just paying, you know, the markup fee from the software company. So each software provider handles this differently.
[00:16:50] Some probably have exclusives with these model companies, but I think more and more the software companies are realizing it's probably smart to diversify and so they likely have relationships with [00:17:00] multiple model companies to be able to serve up the intelligence that you then use to do your work within those software platforms.
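Editor's note: the "model selector" idea Paul describes, routing each task to the cheapest third-party model that can handle it, can be sketched in a few lines. This is a hypothetical illustration: the model names, prices, and capability tiers below are invented, not any vendor's actual routing logic.

```python
# Hypothetical sketch of per-task model routing inside a SaaS product:
# pick the cheapest third-party model whose capability tier meets the
# task's requirement. All names and numbers are illustrative.

MODELS = [
    # (name, cost per 1M tokens in USD, capability tier)
    ("small-model", 0.25, 1),
    ("mid-model", 3.00, 2),
    ("frontier-model", 15.00, 3),
]

TASK_REQUIREMENTS = {
    "write_email": 1,     # simple drafting
    "landing_page": 2,    # structured generation
    "data_insights": 3,   # complex reasoning over data
}

def select_model(task: str) -> str:
    """Return the cheapest model whose tier meets the task's requirement."""
    required = TASK_REQUIREMENTS[task]
    eligible = [m for m in MODELS if m[2] >= required]
    return min(eligible, key=lambda m: m[1])[0]
```

The margin Paul mentions comes from exactly this gap: the vendor charges one price to the customer while paying the per-token cost of whichever model the selector picks.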
[00:17:07] Cathy McPhillips: Yep. Okay.
[00:17:09] Cathy McPhillips: Number four: I've built a number of custom GPTs for work processes. Since the switch to the 5.2 model, I've found the voice and tone aren't as reliable as in the previous versions. Is that something you're hearing from others, and do you have any advice?
[00:17:23] Paul Roetzer: Yeah, this is the problem. So the, again, I'll just always try and like provide a little context here in case the question isn't clear for everyone.
[00:17:30] So, like, let's take JobsGPT as an example. I built JobsGPT, I think it was originally built on GPT-4, like when I first built it. And so it's trained to do a specific thing. And then each new model comes out: so you have 4.5, then you get GPT-5. As the creator of that GPT, I have to go in and be like, but does it still function the way I intended it to function?
[00:17:54] So this is a very low level version, but the software companies have to deal with the same thing. The individual users have [00:18:00] to deal with the same thing when the model changes. These models are trained in different ways. They're trained to have different, as weird as it sounds, personalities and tones. They're trained to be good at some things.
[00:18:12] Like, so for example, Sam Altman admitted that when they came out with 5.2, they put less resources into making it good at writing. And so GPT-4 and even GPT-5 were very good at writing 'cause they were trained to be good at writing. They prioritized agentic capabilities on the later models.
[00:18:34] And so they actually spent less time and money getting it to be good at writing. And so you might actually notice, when you go in and use these later models, that the voice and tone starts to change. And that's not on you. Like, they just changed the model. So this is a real problem, and it's something people have to be aware of.
[00:18:55] And it's why, when new models come out, if you have go-to use cases, or if you have [00:19:00] custom GPTs or Gems, you need to go in and experiment with them and make sure they're still doing what they're supposed to do, and they're still sounding the way they're supposed to sound. Because you can go in and change the system instructions.
[00:19:12] So if you've built a GPT, and again, if you've never built a GPT, it's like giving a project brief to a human. You just go in and say, Hey, here's what I want you to do. Use natural language to do this. You explain what you want it to function as, and you give it whatever context you want, and then you just save it and publish it.
[00:19:25] So you can go back into those system instructions, or the project brief, and say, listen, I want your tone to be more like GPT-4. I want you to be more comforting. I want you to be more friendly, like whatever. And you can try and coax that tone out of it by telling it that's what you want. But this was a huge issue for OpenAI, because people got very attached to the tone of the four series models.
[00:19:50] And then when they moved over to the five series, especially 5.2, there was a lot of backlash from users that it didn't feel like [00:20:00] the same model to them. Like, people had gotten used to talking to it. They'd developed relationships, as weird as that sounds, with these models. And so it's very odd to have them switch.
[00:20:09] So yes, it is a common thing. It is not gonna stop. Like, this is gonna be an ongoing issue as new models come out and they get infused into the software you use or the GPTs or Gems you've built. You're gonna have to constantly deal with this. There's really no way around it that I know of.
[00:20:26] I don't know how the labs would solve for that. They, they determine the model's behavior each time they publish it. So sometimes it's gonna change.
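Editor's note: the re-checking habit Paul describes, re-running your go-to prompts whenever a model updates and flagging anything that drifts, can be sketched as a tiny regression harness. This is a hypothetical sketch: `run_model` is a stand-in for whatever model API you actually call, and its canned outputs are invented to illustrate a tone/format drift.

```python
# Keep your go-to prompts as regression cases, re-run them when a model
# updates, and flag any prompt whose output no longer passes your checks.
# run_model is a placeholder for a real model call; outputs are invented.

def run_model(model: str, prompt: str) -> str:
    """Stand-in for a real model API call; returns canned outputs."""
    canned = {
        ("model-old", "Summarize our Q3 results in two sentences."):
            "Revenue grew 12%. Margins held steady.",
        ("model-new", "Summarize our Q3 results in two sentences."):
            "Here is a detailed breakdown of everything about Q3. "
            "First, revenue. Second, margins. Third, outlook.",
    }
    return canned[(model, prompt)]

def regression_check(old_model, new_model, cases):
    """Return prompts that passed on the old model but fail on the new one."""
    failures = []
    for prompt, passes in cases:
        if passes(run_model(old_model, prompt)) and not passes(run_model(new_model, prompt)):
            failures.append(prompt)
    return failures

cases = [
    # Check: the summary should be at most two sentences.
    ("Summarize our Q3 results in two sentences.",
     lambda out: out.count(".") <= 2),
]
flagged = regression_check("model-old", "model-new", cases)
```

Anything in `flagged` is a prompt whose behavior drifted after the model update, which is the cue to go revise the GPT's or Gem's system instructions.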
[00:20:34] Cathy McPhillips: Alright.
[00:20:36] Cathy McPhillips: Number five: do you have an AI output validation process you can share? I've seen colleagues take AI generated output and publish it directly, or send it to a second LLM for verification.
[00:20:46] This can mislead decisions and create risk. What's a better approach?
[00:20:51] Paul Roetzer: The better approach is the human has to be responsible for the output. Like, there's no easy way around this. So, we have these responsible AI principles that I [00:21:00] published a couple years ago, I guess three years ago now. It was January of '23, and one of the first principles was that the human has to remain in charge. Like, the human is responsible for the output of the AI.
[00:21:11] And so, as much as we would all love to just run five AI research projects a day and publish, you know, these 30 page reports and get all the SEO credit and, you know, be able to drive awareness and audience growth with all this stuff, that's just not how this works. These things make mistakes.
[00:21:28] They hallucinate, they have errors in them. And just because we can take shortcuts on the creation of it doesn't mean we can take shortcuts on the verification and the critical thinking about the output itself. And sometimes you just gotta trash something. Because, like, what we've seen with the deep research tools in particular is it can read amazing.
[00:21:46] Like if you just read this report, it's like, sounds incredible. And it picks these great headlines, like 60% of people aren't getting any ROI from AI or whatever. And then you dig into it and you realize that the source it used, the primary source for [00:22:00] the main headline was from a source you would never use.
[00:22:03] Like, it's a third party platform that got its data from somewhere else, and you gotta dig five layers deep to figure out where they even got the 60% from. And then you find out it came from Wikipedia. Like, you just don't know. So there is no shortcut, nor do I think there should be a shortcut, for verification.
[00:22:21] I think that we should slow down and publish good quality stuff that we have verified and that humans have signed off on. And I think that's good. Like, otherwise we would all just pump out AI slop. It's like the AI content farms from, like, the 2010 range, right? Where you pay 2 cents a word and everybody started pumping out a bunch of crap.
[00:22:43] Like, we don't want that. We don't want the internet filled with junk, and that's basically what's happening. So I do use a second model. Like, let's say I do a research report in deep research from Google. I will take it and throw it into Claude and say, can you assess this? I want you to be [00:23:00] critical of the citations.
[00:23:01] Make sure they're all valid. Like, that's fine. That's a good stepping stone, and you might save yourself 30%, 50% of the time to do that first level of verification. It might find a bunch of stuff and flag stuff for you, but you can't get the human outta the loop on the verification process.
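Editor's note: the first pass of the verification workflow Paul describes can be partly mechanized before a second model or a human ever looks at the report. As a minimal sketch (the trusted-domain allowlist, report text, and URLs below are invented for illustration), a pre-screen could flag any cited URL whose domain you would not use as a primary source:

```python
import re

# Pre-screen for AI-generated reports: flag cited URLs whose domain is
# not on your trusted-source allowlist. This is only a first pass; a
# human still has to verify the sources and sign off on the output.

TRUSTED_DOMAINS = {"census.gov", "nature.com", "sec.gov"}  # illustrative

def flag_citations(report: str) -> list:
    """Return cited URLs whose domain is not in TRUSTED_DOMAINS."""
    urls = re.findall(r"https?://([\w.-]+)(/\S*)?", report)
    flagged = []
    for domain, path in urls:
        # Strip a leading "www." before comparing against the allowlist.
        bare = domain[4:] if domain.startswith("www.") else domain
        if bare not in TRUSTED_DOMAINS:
            flagged.append(domain + (path or ""))
    return flagged

report = (
    "60% of firms see no ROI, per https://en.wikipedia.org/wiki/ROI . "
    "GDP grew 2.1%, per https://www.census.gov/data ."
)
suspect = flag_citations(report)
```

Anything in `suspect` is a citation to chase down by hand, which mirrors the "dig five layers deep" problem: the tool narrows the search, but the human still does the verifying.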
[00:23:16] Cathy McPhillips: Definitely. Okay.
[00:23:17] Cathy McPhillips: Number six: can you suggest tools for building AI agents? We hear so much about their potential, but where should someone start if they want to build something like an AI sales outreach agent or a content flywheel generator?
[00:23:29] Paul Roetzer: I would start with the existing platforms in your tech stack and see if they enable these kinds of things.
[00:23:34] So if you're a Microsoft shop, you can go build agents in, you know, Microsoft Copilot. If you are, you know, a Google Gemini user, you can go build 'em in AI Studio. You can, you know, create agents to do different things. They're largely rules-based agents, but they can help. If you are in Salesforce, you can build agents right within the CRM.
[00:23:55] So, like, your platform may enable the building of agents. [00:24:00] Claude, you know, with Claude Code you kind of use Claude Code as an agent to build stuff. So, like, they're starting to just get embedded, and sometimes you're not even gonna know that that's what you're doing. It just depends on the task you're trying to perform.
[00:24:11] So on the podcast recently, I shared the example of how I've been using Lovable. Lovable is basically an AI agent that does code to build apps. And I have no coding ability. So I just go in and say, like, hey, I wanna build an org chart app. I wanna make it interactive. I wanna be able to envision future iterations of how our organizational structure could look. Help me build that.
[00:24:32] And it just does it, and it asks me questions and you build out the thing. So I would say like the popular building of agent ones are clo, Claude, code Codex. You know, Gemini has coding capabilities, lovable, but you know, that space is moving so fast. agent.ai is another one. Dharmesh Shaw, co-founder of HubSpot, created that platform and it's in essence like you can go, it's like a marketplace for agents.
[00:24:58] Basically. You can go in and build your [00:25:00] own or you can, you know, sign on to use an agent. So agents are an interesting emerging space. They're in, in software design in particular, they're becoming more autonomous and reliable, and that's starting to trickle out into other areas of knowledge work. But most of the AI agents that are actually approaching auto like levels, high levels of autonomy.
[00:25:21] it is largely happening in, in coding and software. it's not like, you know, marketers and salespeople and customer success and executives, like, for the most part, the AI agents you would be using. They're still pretty rudimentary and a lot of rules based stuff, which is not like the full autonomy people envision when they hear the term AI agents.
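The "rules-based" distinction Paul draws is worth making concrete. Here is a minimal sketch of a rules-based sales outreach agent; the field names, thresholds, and actions are all hypothetical, and no real CRM is attached. It fires fixed if/then triggers rather than reasoning autonomously, which is why these count as rudimentary:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Lead:
    name: str
    score: int             # e.g. from CRM lead scoring (hypothetical field)
    days_since_contact: int

# Each rule is (condition, action description). Purely deterministic --
# there is no model in the loop deciding what to do next.
Rule = tuple[Callable[[Lead], bool], str]

RULES: list[Rule] = [
    (lambda l: l.score >= 80 and l.days_since_contact > 14, "draft re-engagement email"),
    (lambda l: l.score >= 80 and l.days_since_contact <= 14, "notify sales rep to call"),
    (lambda l: l.score < 80, "add to nurture sequence"),
]

def run_agent(lead: Lead) -> str:
    """Return the first action whose condition matches -- that's the whole 'agent'."""
    for condition, action in RULES:
        if condition(lead):
            return action
    return "no action"

print(run_agent(Lead("Acme Co", 85, 30)))  # draft re-engagement email
```

A fully autonomous agent would instead let a model choose and sequence the actions itself; most of what marketing and sales tools ship today looks more like the above.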
[00:25:41] Cathy McPhillips: Right. But Matt Seer, at his presentation at our AI for Agencies Summit, was like, okay, I actually can do a couple of these things and get started. So, you know, look for some of those use cases and some of those people that are doing some cool things.
[00:25:54] Paul Roetzer: Yep. They're starting to become more real, like, by the day. Like, there's tons of innovation happening, [00:26:00] and they're making coding accessible to everybody, so anybody's gonna be able to go build whatever they want.
[00:26:04] And I think like 2026 is gonna become very real for a lot of people.
[00:26:09] Cathy McPhillips: Yep.
[00:26:11] Cathy McPhillips: Number seven. AI labs are developing new tools like Claude Coworkers and Excel or PowerPoint add-ons. Do you believe knowledge workers will soon face the same disruption developers and SaaS companies are experiencing? And if so, how should they prepare?
[00:26:24] Paul Roetzer: I think the technology is gonna be there to cause that level of disruption. I think human friction and the friction of slow-moving enterprises is going to prevent it from being massively disruptive in a very short period of time. So the software coding agents, you know, like we talked about, like Claude Code as an example, it's disrupting stocks daily. Like we hear every day, Anthropic releases some new legal plugin and some new category of stocks crashes like 10 to 20% overnight.
[00:26:58] And it's all just based on the [00:27:00] uncertainty of what does this all mean for all these legacy software companies. So I do think that in the next 12 to 18 months, most of knowledge work will be solved. Like, in theory, Claude and others will be able to do the vast majority of what knowledge workers do.
[00:27:20] So I would say there's not gonna be too many technical limitations to looking at an industry and saying, well, let's just use Claude and Gemini and ChatGPT, and let's just automate that industry. But having spent a lot of time with large enterprises, especially in the last few weeks: even if that could be true today, that you could just go get Claude and automate, like, every job,
[00:27:46] if that was true today, it would take five years for these enterprises to do something about it. Sure. Like, the enterprises are moving so slow with not just adoption and integration and transformation. [00:28:00] They're moving slow to even provide the most basic AI assistance to all of their employees.
[00:28:06] So I just feel like if you'd asked me this two years ago, I would've said, yeah, by 2026 we will probably be right in the midst of the disruption. I don't know that that's true for most industries. I think it's gonna be a really long run. And I've said this before: we could have AGI today, we could agree as a society that we have reached artificial general intelligence,
[00:28:28] and I don't know that it would change anything within enterprises. Like, again, generative AI is three and a half years old. It was November 2022, you know, when ChatGPT emerged and gave access to the world. And I meet with enterprises every day who have not yet provided a license to it to their workers.
[00:28:49] So, I mean, we're three-plus years into the most basic thing, which is give 'em an AI system for 20 bucks a month, and we're sitting here, you know, in many instances not doing [00:29:00] anything. So I just am not a believer that the disruption is gonna happen that fast. Now, I wouldn't say that you shouldn't have a sense of urgency about it, though.
[00:29:09] Mm-hmm. Because it will start to impact jobs. I'm just saying it won't completely change industries
[00:29:15] Cathy McPhillips: And it might start to impact jobs before the companies are ready to use AI to replace those jobs.
[00:29:20] Paul Roetzer: Yeah. Like, jobs are going away, like, en masse. It kind of started last year, but 2026 is not gonna be pretty from a jobs perspective,
[00:29:30] Cathy McPhillips: but the companies aren't prepared to lose those people.
[00:29:34] Paul Roetzer: Yeah. They're just not aware of how unprepared they are.
[00:29:37] Cathy McPhillips: Correct.
[00:29:37] Paul Roetzer: I think that it's largely happening because they think AI is driving all these efficiencies and productivity and they don't need as many people. But they don't really have a plan
[00:29:47] Cathy McPhillips: Right.
[00:29:47] Paul Roetzer: For like what that looks like.
[00:29:49] Cathy McPhillips: Right. Okay.
[00:29:51] Moving on
[00:29:53] Cathy McPhillips: number eight.
[00:29:53] Paul Roetzer: Yeah.
[00:29:54] Cathy McPhillips: As early AI first organizations report increased workloads during the initial AI transitions, what strategies [00:30:00] can leaders implement to support employees and prevent burnout?
[00:30:03] Paul Roetzer: I assume this question comes from some of the recent reports that are basically saying, you know, the organizations who are racing ahead and figuring out the ways to integrate AI across teams and employees and workflows, are just increasing the productivity, but they're also just increasing workloads.
[00:30:21] Like they're not seeing, you know, the byproduct of more time to work on pet projects and more time to not work. I think what's happening is the people who are fully aware of what's possible, who know how to use these tools to think, create, understand, reason, assist decision-making, assist problem-solving, they now realize they can go build apps and software without having to talk to anybody in IT.
[00:30:48] Like it's, it's like this insanely inspiring world of like anything is possible. And I can just speak from my personal experience, like I just wanna build stuff, like I just wanna [00:31:00] solve things. And so. I don't feel burnout personally. Like, and I, again, I know I'm probably unique and like I'm the CEO of a company that does this stuff.
[00:31:08] So like, I probably have a little bit of different perspective than, than, you know, say like a VP at a 30,000 person company. But like, I just want to do stuff and build stuff and like, solve problems and I find it so exciting. But I am definitely creating more output than I used to create. I'm doing things faster, solving problems faster than I used to solve them.
[00:31:33] And I could see how you become so immersed in that that you just, like, overwork yourself. More realistic, though, to me is the enterprise example, where, let's say you have a marketing team of, you know, 50, and there's five people who have actually figured all this out. They're the ones that are daily active users.
[00:31:55] They've built assistants that are personalized to them. They know how to build apps. [00:32:00] They've got permission to connect to some data. So, like, they're doing all the things, and the other 45 people on their team aren't, and all of a sudden those five are doing the work of, like, two or three employees, and they're realizing they're the only ones producing at this level, and they're getting paid the same as their peers who aren't doing any of these things.
[00:32:20] And, like, I think that's where a lot of the burnout and friction is gonna come in, when you have these AI champions or, like, early adopters who are just outpacing their peers, and they're doing more work than everybody, and everybody's kind of maybe starting to say, like, Hey, can you help with this? Can you help with that?
[00:32:37] Can you help me figure out a new pro... I could see a scenario where burnout starts to happen there. But I think this is an organizational design thing. It's a change management thing. Like, companies are gonna have to figure this out. You know, your employees are gonna be able to do more once you enable them to do it.
[00:32:52] And you gotta figure out how to make that happen without burning them out, giving them the grace of, like, Hey, yesterday [00:33:00] afternoon you created an app that's gonna save us half a million dollars next year. Like, hey, take Friday off. Like, then there has to be some
[00:33:07] Cathy McPhillips: right
[00:33:07] Paul Roetzer: structure.
[00:33:08] Like, I've had this personally, I've shared a little bit on the podcast. Like, I've done a couple of things in the last couple weeks where I'll work on something for, like, two days that used to take me two months, and it's probably gonna be worth millions of dollars to our company over the next one to two years.
[00:33:28] And I'll finish it, and, like, an hour later I'll feel guilty that I'm not working on something else. And it's like, dude, take the afternoon off. Like, you just did this thing. And so even for me, there's this mentality of sometimes you have to look at the value of the output, not the time you spent on it.
[00:33:45] And you have to be able to give yourself time back. But I think that, you know, intrinsically motivated people are gonna just want to keep building the next thing. They're gonna be motivated by, like, the game of this. Like, I just created something new, we can, you know, increase our score, we did this [00:34:00] thing.
[00:34:00] And I think companies are gonna have to manage those people, to make sure they don't burn out, to give them the grace of time off and help them like, all right, you did a huge thing. Like the company really appreciates it. Like, take a week off, get some extra PTO, whatever it is. It's not gonna be monetary, I don't think.
[00:34:16] Like monetary is good for a time period, but that's not like, usually the thing that motivates those kind of people. once you get to a certain threshold, like once you make enough money, it's just like, all right, like, I wanna build stuff, I want some time back, things like that. So. These are hard things that like HR departments and leaders are gonna have to really start thinking about.
[00:34:37] I've, I've not seen a company yet that I've heard of that has solved this.
[00:34:40] Cathy McPhillips: Yeah. And I've talked to you about this before, but I'm kind of down this rabbit hole of like how we define productivity right now.
[00:34:46] Paul Roetzer: Yeah.
[00:34:46] Cathy McPhillips: So, and if productivity is you taking the afternoon off because your brain is fried, but you're coming back the next morning fresh, like that's a productive use of your time.
[00:34:55] Paul Roetzer: Yeah. And I micromanage it. Like, I'll give you an example. So I try and do [00:35:00] the elliptical. I get up at, like, 5:30, 5:45 every morning and I try and do like 30 minutes on the elliptical, and then I usually try and get to the gym like three to five days a week. Normally, I think of those times as times to listen to podcasts.
[00:35:14] It's like, okay, I've got 30 minutes, I can get through a one-hour podcast at 2x speed. And I get like, okay, that's 45 minutes I can get to. And so everything in my life is like, how do I consume the most amount of information so that I can synthesize that information for our own business and for the podcast.
[00:35:29] And every once in a while it's like, no, just listen to music for 30 minutes. So I'm managing burnout at this very, very granular level, like down to the 30-minute block, the grace of: you don't have to listen to a podcast for 30 minutes. But again, I actually find it invigorating. But every once in a while I can feel that burnout creeping in, and it's like, all right, I'm actually not gonna do any work this Saturday.
[00:35:55] Like, I'm just gonna take the day off and hang out with the family. But again, people need, [00:36:00] they need guidance to do that. Well, I think a lot of people who are already early adopters of this are probably the type of personality who are going to want to just build, and they're gonna want to keep working.
[00:36:13] Cathy McPhillips: Well, when you figure out how to solve that, please let me know.
[00:36:16] Paul Roetzer: How to shut the mind off? I know, it's genetic. I get it from my mom. I'm sick.
[00:36:21] Cathy McPhillips: Okay.
[00:36:21] Cathy McPhillips: Number nine. What roles and skill sets are most at risk for obsolescence, and how can leaders help team members in those roles successfully navigate the transition?
[00:36:31] Paul Roetzer: Yeah, this is, this is a really hard one to answer. I'll try and answer first with what I think is safe-ish. So I think emotional intelligence is really important. I think interpersonal communication is gonna be absolutely critical. 'Cause I think more and more that's what jobs are gonna be.
[00:36:59] You know, if [00:37:00] you're a, a wealth manager or a, a consultant or a journalist like we talked about this week, like, you're gonna have more time to be with other people, to talk to other people, to engage at a human level. And so that ability to do that is gonna be really important. So I think we're gonna look for that.
[00:37:19] Critical thinking is essential, but so is being able to work with the AI that's also very good at critical thinking. So I often boil it down to knowing what questions to ask and what to do with the answers, and then knowing how to talk and work with, and learn from, the AI at a very high level.
[00:37:39] That's basically what all knowledge work is gonna come down to. And so what do you need? What skills do you need to work well with AI? Well, you probably need to be a problem solver, because you have to be able to identify problems for it to solve. And so you have to have that context of what does a problem look like?
[00:37:57] And, you know, what questions [00:38:00] do you ask about something? If something's not working in your company, or you're building a strategy, or you want to produce an analytics report, it often comes down to: what are the right questions we would be asking here? So if I'm thinking about our own company, and I'm gonna say, like, what are the AI-generated reports I want to see every morning when I, you know, open up my computer?
[00:38:18] I would actually start with, well, what are the questions I want answered every morning? Are we on track to achieve our goals? You know, which resources are providing the greatest return on investment? Where are we falling short? Like, there's these standard questions I would want. But to ask those questions is a skill.
[00:38:36] Like, it's understanding the context of business. So I think those are some of the things. Writing, you know, I know the AI's really good at writing, but I actually think writing is a fundamental skill of all workers, all knowledge workers, professionals, and leaders. Because writing is thinking, and if you can form strong outlines, and you can tell stories, and you can critically assess [00:39:00] sources and information and facts, like, that's critical.
[00:39:03] Even if the AI's assisting, you still need that. So I would say writing is still a fundamentally important skill. And then the roles, the safety of different roles is largely dependent upon which companies get funded by venture capital firms because there's no role that's actually safe. Like it really is more a matter of when is someone gonna justify going after the labor market for a specific role.
[00:39:28] And what I mean by that is, like, in the United States there's about $11 trillion in wages annually. Somewhere between $4 and $6 trillion of that is knowledge workers, digital workers, people who use computers for a living. So in essence, what's happening is the AI labs and the software companies themselves are looking at that $4 to $6 trillion and saying, which portion of that do we go after?
[00:39:49] So if lawyers account for, I'm just gonna make up a number, but say $400 billion in annual wages, that's a pretty nice total addressable market to go after [00:40:00] if you're Harvey and you want to build an AI that, yeah, helps attorneys, but also maybe eliminates the need for some attorneys. You can do accounting, you can do consulting, you can do marketing, you can do sales, you can do customer service.
[00:40:14] And so that's in essence what's happening. That's the future of the economy: over the next five to 10 years, it's just gonna be picking off roles. Now, it can't really eliminate the need for humans, but it can reduce the number of humans needed. And I think that's the thing people have to come to grasp. Like, these companies aren't trying to eliminate humans
[00:40:31] totally. Well, some of them actually are, but most of them are not trying to eliminate humans. But the byproduct of that effort is that fewer humans are needed to do the same amount of work.
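The market math Paul walks through is easy to reproduce. A back-of-envelope sketch using the figures from the conversation ($11 trillion in total US wages, $4 to $6 trillion of knowledge work, and the explicitly made-up $400 billion for lawyers); the 5% capture rate is an added illustrative assumption:

```python
# All figures in trillions of USD, taken from the conversation (the $0.4T
# lawyer figure is explicitly a made-up illustration, not a real statistic).
total_us_wages = 11.0
knowledge_work_low, knowledge_work_high = 4.0, 6.0
lawyer_wages = 0.4

# Share of total wages that is addressable knowledge work
share_low = knowledge_work_low / total_us_wages
share_high = knowledge_work_high / total_us_wages
print(f"Knowledge work: {share_low:.0%}-{share_high:.0%} of US wages")

# If an AI product captured even 5% of one role's wage pool as revenue:
capture_rate = 0.05  # assumed, purely illustrative
print(f"5% of lawyer wages = ${lawyer_wages * capture_rate * 1000:.0f}B/year")
```

Which is the point being made: even a small slice of one role's wage pool is a venture-scale market.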
[00:40:43] Cathy McPhillips: Yeah. And you've talked before about, you know, working with your accountant, working with your lawyer. My dad's my accountant.
[00:40:47] So, like, I took in my tax stuff and I did a lot of it on my own. And he was like, Hey, what are you trying to do? You're trying to fire me? And I was like, No, but I'm also very curious about this sort of stuff.
[00:40:56] Paul Roetzer: Yeah.
[00:40:56] Cathy McPhillips: But I can go into him and say like, I actually saved you a few hours. But you have to finish it.
[00:41:00] Paul Roetzer: I, yeah, I've done it with accountants, architects, like, doctors. I have a custom GPT for all of them. Like, I want to come educated to those conversations. I have equated it, Cathy, to, like, how, you know, 10, 15 years ago, when you were buying a car, you were just very dependent upon the salesperson. You just didn't have a ton of information about the car-buying process.
[00:41:22] And now, if you go buy a car, you need to have done all the research, run all the numbers, know the fair market value of that car, know what percentage you can get off on the thing. Like, if you don't walk into buying a car with that information, you've done a disservice to yourself.
[00:41:36] Cathy McPhillips: for sure.
[00:41:36] Paul Roetzer: And I kind of feel that way about anything I hire an advisor or a professional for. I'm not trying to replace them, I just want to come from a more educated standpoint. I wanna know what questions to ask them. And yes, that might reduce the billable hours, it might reduce the amount of time that we have to spend together, but the value creation is still tremendous.
[00:41:56] And I understand that's maybe disruptive to those fields, but I think that's just the [00:42:00] reality of how it's all gonna work in the future.
[00:42:03] Cathy McPhillips: Number 10. For business leaders modernizing analytics, should they continue investing in traditional BI platforms or shift toward AI first reporting systems? And what does the right architecture look like moving forward?
[00:42:16] And how should leaders decide?
[00:42:17] Paul Roetzer: I'd love the answer to this one. That's actually something I'm thinking about myself. so I'll tell you what I am currently considering for us. so if you, if you think about running our company, there's probably like four or five platforms that hold the vast majority of the analytics data I would want to have access to at any given time that I would wanna be able to ask questions of at any given time.
[00:42:43] So an example would be, like, HubSpot is our CRM, and I may want to say, like, we just launched the AI for Financial Services course series. So I might wanna ask, how many customers in our database of, like, 150,000 people work in the financial services industry? Historically, I would have to go into the [00:43:00] CRM,
[00:43:01] I'd build a list, I'd have to pick the properties, I would have to, like, make sure all these things happen. Versus, can I just talk to it? Can I just ask this question in HubSpot? The answer right now is, to my knowledge, no. Like, HubSpot has yet to enable that level of intelligence in the platform.
[00:43:19] If I'm wrong, HubSpot, please reach out and tell me that it exists, 'cause I would love to use it. But, you know, imagine the same thing for our accounting data, for our learning management system data, like, all these core things that are essential to running our business. Right now, I have to go to those platforms.
[00:43:37] You could have a single BI tool, like, you could build it into dashboards, and you can visualize it and things like that. And we have that too. But in theory, I have to go to these different places, seek out this information, and then I have no intelligence. All I have is data. And so I'm thinking about saying, okay,
[00:43:52] back to this sort of first principles: what are the questions I want answered about our data every moment of every day? Like, what would I need to know? And then, what is the best [00:44:00] way to architect that? Is it to talk to our financial reporting system, our CRM system, our learning management system, and say, what AI are you building into your platform that we can use to come in and talk to our data and get insights and recommendations out of it, not just visuals and charts?
[00:44:16] So one option is to rely on each of those individual platforms to build intelligence into their platform. The alternative is to connect all of them to, like, a Claude, and then I just go into Claude and I talk to it. Or, you know, in Gemini, let's say, if they would enable this kind of thing. And then I could just schedule an automated report saying, Hey, I wanna know what's happening with our P&L.
[00:44:39] I wanna know what's happening with our top customers. Like, whatever, here's the questions I have, go generate the report. Then, once I say, okay, this report's great, can you actually run this every night at midnight and send it to me in an email? So imagine that level of reporting. Now I never actually have to go into those platforms for the majority of the use cases.
[00:44:56] It's just showing up, it's surfacing itself to me in [00:45:00] an almost ambient way. Like, here's what you need to know, here's the three, you know, key facts or highlights from yesterday's data, and here's three actions you might want to take: we think you should reach out to Cathy about this,
[00:45:09] you should reach out to, you know, Jess about that, and talk to Tracy about this. Like, that to me is the vision for what it should look like. I'm not clear yet how exactly we get there.
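The scheduled-report workflow described here could be wired up with any model API plus a scheduler like cron. A sketch of the standing-questions half; the question list comes from the conversation, while the source names and function shape are hypothetical:

```python
import datetime

# The standing questions to answer from the connected data sources --
# the first-principles list Paul describes wanting each morning.
STANDING_QUESTIONS = [
    "Are we on track to achieve our goals?",
    "Which resources are providing the greatest return on investment?",
    "Where are we falling short?",
]

def build_nightly_request(data_summaries: dict[str, str]) -> str:
    """Assemble the prompt a model would answer each night at midnight.

    `data_summaries` maps a source name (e.g. "CRM", "Accounting", "LMS")
    to an exported summary of that day's data; how you export it depends
    entirely on what each platform allows.
    """
    today = datetime.date.today().isoformat()
    sections = "\n\n".join(f"## {name}\n{text}" for name, text in data_summaries.items())
    questions = "\n".join(f"- {q}" for q in STANDING_QUESTIONS)
    return (
        f"Daily report request for {today}.\n"
        f"Answer these questions from the data below, then list the top "
        f"three highlights and three recommended actions:\n{questions}\n\n{sections}"
    )

# A scheduler (cron: `0 0 * * *`) would call this nightly, send the prompt
# to the model of your choice, and email the response.
```

The hard part, which no platform in the discussion has solved yet, is the data-access side: getting live, trustworthy exports out of each system so the model is answering from real numbers rather than guessing.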
[00:45:21] Cathy McPhillips: Okay.
[00:45:22] Cathy McPhillips: Number 11. As AI capabilities expand rapidly, how should leaders think about what to build internally versus what to buy from vendors or partner on?
[00:45:31] And what is the decision framework?
[00:45:33] Paul Roetzer: So the general practice I grew up with in business was, like, if it's core to your business, build it. If it's ancillary to your business, then, you know, let somebody else build it and maintain it. So what I mean by that is, if you are Amazon, at a very, very high level, you're not gonna build your business on Shopify.
[00:45:58] Like, you're Amazon. You're [00:46:00] gonna build your own e-commerce engine. You're gonna manage that e-commerce engine because it's absolutely, you know, fundamental to the business you're in. But I would say, when we think about gen AI, it's getting harder and harder to build your own things.
[00:46:19] Like, if you're a software company, trying to build your own models, even though they're fundamental to your business, is next to impossible. Like, you can't compete with these bigger companies. And the general idea is that the next version of the smartest model is just gonna obsolete whatever fine-tuned model you created to do a specific thing. They just keep proving time and time again that the bigger the model, the more general it is, the better it's gonna be at everything.
[00:46:45] And so I think that more and more you just need to assume you're not going to be the one building the intelligence, that you're gonna find the partners to build with. But then [00:47:00] the complexity becomes, what are they building on? So if you go find a software product that enables AI in marketing or sales or customer success, the challenge for me is, do you go through the software vendor or do you go direct?
[00:47:12] So, like, I'll just share a personal example: the AI agents on your website, like customer service. You know, do you use the one that's kind of built into your current tech stack, or do you go work with a company that specializes in building on top of the best models to create the most advanced customer service agent with the fewest hallucinations and the highest satisfaction rating, whatever?
[00:47:38] And so, like, those are the kinds of things we have to start thinking through. You know, what is the thing we're trying to solve? What is the potential value of it? What are the risks related to it? What are the options we have? And then, what do we actually go and do, and what do we build ourselves?
[00:47:54] So yeah, this is a tricky one, and I think it's shifting. I think the traditional [00:48:00] idea of build what's core, you know, and buy everything else, I don't know how true that remains, and I'd have to actually probably sit down and think about individual examples within our own business. Like, we didn't build our own learning management system, and that's core to our business.
[00:48:13] But, we looked at a bunch of vendors and we found a vendor with an AI roadmap that we believed in, that we thought was gonna give us the best chance to move quickly and start iterating. And then that would enable us to improve that platform, by customizing it. And so, you know, I could have stopped and spent a million dollars and tried to build our own learning management system, but it would've taken us three years and it probably wouldn't have been what we needed.
[00:48:39] So, yeah, I don't know. There's no, there's no real right answer for this. I don't think. It just depends on each unique situation.
[00:48:45] Cathy McPhillips: This will be a question we'll ask again in like,
[00:48:47] Paul Roetzer: I'm sure
[00:48:47] Cathy McPhillips: Two months or 12 months or whatever.
[00:48:49] Paul Roetzer: My answer might change if you ask me next week.
[00:48:52] Cathy McPhillips: Number 12. Holding companies are building billion-dollar AI operating systems, and the Big Four are investing billions in AI consulting [00:49:00] divisions. For independent, AI-forward agencies,
[00:49:03] what is the lasting competitive advantage? Is it trust? Is it agility, the ability to actually build and ship while others are still writing decks?
[00:49:11] Paul Roetzer: Hmm. Yeah. It's not just the Big Four that are building AI consulting practices. Anthropic and OpenAI and Google are gonna come after that business too,
[00:49:19] which is weird, because they have alliances with these companies and then they're building, like, massive competitors to the companies. It's a very weird dynamic right now. Very weird marketplace. So, the AI for agencies question: you know, we just did our AI for Agencies Summit. Was that this month? It was two weeks ago.
[00:49:38] Cathy McPhillips: Yep.
[00:49:39] Paul Roetzer: So, man, time moves fast. You know, it's a very difficult environment right now for agencies. There's pricing pressure. You know, I think brands are aware that a lot of the tactical work they used to rely on agencies to do, they can do themselves, you know, with a chatbot in a tenth of the time and at a tenth of the cost.
[00:49:57] So some of that gets commoditized. Some of the [00:50:00] creative work gets commoditized. Some of the strategy work gets commoditized. I think the agencies that stay on the leading edge of this are the ones that are constantly, you know, evaluating it, rethinking their services, doing impact assessments, looking out, you know, 12 to 18 months at how the industries they service are gonna change,
[00:50:17] how the organizations they work with are gonna change, how consumer behavior is gonna change. I think you just have to stay out on that edge and see around the corner. I don't know any other way around this. I don't know that there's any service I could guide an agency to offer today that I wouldn't have some
[00:50:33] level of confidence is gonna be obsoleted in 18 months. Like, it's very hard. And then I also think it's hard to time the market, right? One example there would be change management. Like, every enterprise in the world is going to need AI change management guidance, whether it comes from internal people or outside agencies.
[00:50:54] And that means not just, like, Hey, we bought licenses to ChatGPT and we gave them to our [00:51:00] 30,000 people. But what does that mean? How do we personalize use cases? How do we deal with the fact that 50% of our employees hate AI and don't want to use it? How do we make sure they're not just using it as an answer engine, and they're actually using it to do real, high-value strategic work?
[00:51:15] Like, there's so much more that comes with true AI adoption and the scaling of AI. And if you're a firm today that offers those services and you're experts in change management, the unfortunate reality is most companies don't realize yet that they need that. So you're, like, ahead of the curve. And you gotta figure out a way to offer the services they know they need today while they grow into the services they actually need and don't know it yet.
[00:51:42] So I don't know that there is an answer to what is a lasting competitive advantage. I don't know that such a thing exists at the moment, other than staying on the edge and figuring out what the next competitive advantage is. I think there's gonna be these, like, almost S-curves of competitive advantage.
[00:51:58] It's gonna be like, oh, you, you're it, [00:52:00] you're like the leader in X when it comes to this as an agency. And then it just like plummets, like falls off a cliff. 'cause like now everybody does it. Or ChatGPT just enabled the thing you were doing out of the box. Or like Claude introduces cowork and now it just does the thing I whatever.
[00:52:14] Like it's gonna be constantly this like, all right, well that gave us an edge for 12 months. Now what's the next edge we have? So it's a very difficult environment, which also means there's tremendous opportunities. A lot of agencies are gonna struggle and fail in this environment. They're not designed by it, they don't have the right leaders in place who understand the moment.
They don't have the situational awareness of how fast this is all moving. And if you're in an agency that does, you have a tremendous opportunity to get out ahead of everybody else and figure this stuff out as you're going.
[00:52:41] Cathy McPhillips: Absolutely.
[00:52:43] Cathy McPhillips: Number 13, some slower AI adopters on my team struggle to discern what looks like AI and what doesn't.
[00:52:49] Any tips or resources to help, or is this just something that improves over time with experience?
[00:52:55] Paul Roetzer: I don't know if I'm gonna interpret this one the right way, but the way I'm interpreting it, Cathy, and maybe you have [00:53:00] a different interpretation, is if they're using AI to help them with something and they output it, and someone else looks at it and it's like, we just used AI to do that.
Like, that's how I'm interpreting this. I've certainly had plenty of instances of this where you get something and you're like, this is literally just copied and pasted from ChatGPT. Like, I can tell from the format; I don't even have to read this thing.
[00:53:17] Cathy McPhillips: Yep.
[00:53:18] Paul Roetzer: And so then the question becomes, okay, like, let's have a meeting and I want you to walk me through this.
Like, I'm not reading this 25-page document that I know you didn't even read. Let's sit down and I'm gonna actually just grill you with questions about it. And maybe I'm gonna take your output and say, Hey, one of my employees just gave me this 25-page creative brief. I'm not sure that they did any critical thinking about it.
Gimme 10 questions I can ask them about it that will prove to me they actually know what they created. Like, something like that I could totally see happening. I could see that happening in HR examples, where you're interviewing job candidates. You need to get to the point where you're saying, did they actually think about this?
Did they apply critical [00:54:00] thinking to this? Did they verify the outputs of it? And do they have confidence in the material that they've presented to me? And so you can usually figure that out pretty quickly. You don't have to take 30 minutes to do this. But that's how I deal with it. If you're handing something in or you're publishing something publicly, you better have the confidence that it is accurate.
That you actually understand the topic. And that if you were put on a stage and asked questions about it for 10 minutes, you could actually confidently answer those questions. So do the work. Like, there's no substitute for actually understanding a topic.
[00:54:38] Cathy McPhillips: Agreed.
[00:54:39] Cathy McPhillips: Number 14. I am new to AI and trying to use it more in my work. During the Super Bowl, which was just a few weeks ago,
[00:54:47] Cathy McPhillips: AI also had a big presence. Claude's commercial stood out, especially its message that ads are coming to AI, but not to Claude. As ads enter AI platforms, how might that impact day-to-day workflows? What should we be paying attention to as this transition [00:55:00] unfolds?
[00:55:00] Paul Roetzer: I don't know that the ads are gonna impact the day-to-day workflows much at all.
If you have the paid versions of ChatGPT, supposedly you won't see them. The free version and the Go version, which is the $8 a month plan, will have ads. The Teams, Enterprise, and Pro plans won't have ads. As of right now, Google says that Gemini won't have ads. Claude obviously is staking out the position that they won't, though that doesn't mean they won't eventually experiment with them.
[00:55:29] Ironically, on the way in today, I was, here's how it goes: if I have 15 minutes, I listen to a podcast. OpenAI's podcast, that's actually what it's called, the OpenAI Podcast. Episode 13 is The Thinking Behind Ads in ChatGPT, and it's actually with one of the executives that I think leads that team, one of the leads on ads, Assad aan.
[00:55:50] So you can go listen to that and hear their perspective on ads. I don't think, largely, they're going to change much of anything for business users. I think they'll [00:56:00] largely be excluded from what you do for a while. Maybe they eventually figure out a way to elegantly integrate them, but I think there's gonna be a lot of experimentation before any of that would ever happen.
[00:56:10] Cathy McPhillips: Yep.
[00:56:11] Paul Roetzer: And then the only thing to pay attention to would be, like, if your brand wants in. Right now you have to be a major brand to be getting in the pilot for running these ads on ChatGPT. But I think if it's proven that large language models and AI assistants are a viable place to run ads, the question becomes more on the brand marketing side: should you be there?
[00:56:30] Cathy McPhillips: Yeah. Is it only at episode 13?
[00:56:34] Paul Roetzer: They just launched that podcast last year. Yeah.
[00:56:37] Cathy McPhillips: Wow. We're at 200
[00:56:38] Paul Roetzer: Almost. I know. Yeah. A little ahead of the curve.
[00:56:41] Cathy McPhillips: Okay.
[00:56:42] Cathy McPhillips: Number 15. If you could give every business leader one AI-powered superpower starting tomorrow, what would it be and why?
[00:56:49] Paul Roetzer: I think it would be the situational awareness thing I mentioned earlier.
[00:56:51] Way too many CEOs have no real concept of what's happening. And this is at every size [00:57:00] company I talk to: publicly traded, venture backed, private equity owned, small business, middle market. Like, the true concept of how advanced these models already are, how many use cases there are across organizations that don't even require you to connect any data.
[00:57:19] Like, so many times I hear there's a slowdown because we're waiting for IT, or it's concerns about data leakage and things like that. And I think if the leaders truly understood the moment, where we are and where we're going in the very near future and how disruptive it's all gonna be, they would be moving way faster than they're moving.
[00:57:39] And so awareness and understanding of AI at a deep level is the superpower I would give them, because I think everything else comes from there. If you knew what some of the leaders know, you would be acting in a way more urgent way. You would be establishing a vision, sharing that vision with your team.[00:58:00]
[00:58:00] You would be providing the right tools and personalized training to them. You would be standing up centers of excellence across departments. You would be empowering the leaders of each department to identify and personalize use cases. You'd be doing all of that. But first you have to understand the moment.
[00:58:17] And again, I can speak with a very, very high level of confidence because I'm in the room with these executives every week, running workshops and doing speaking engagements with hundreds of them. And I'm very confident that what I'm saying is true. Like, there is a very low level of understanding of this technology, and how far along it really is, at the highest levels of almost every organization I talk to.
[00:58:45] Cathy McPhillips: Okay. Thank you. And speaking of awareness and understanding, our State of AI for Business survey is in the field right now. And if you're a marketer and you take the survey, please share it with your other departments and across your organization. We want a real big, holistic view of [00:59:00] this for this year's report.
[00:59:01] We have big goals on the number of people we need to take the survey. It's SmarterX.AI/survey, so if you've got a few minutes, take that and please share it with the rest of your company. We'd love to get a lot of responses on that.
[00:59:17] Paul Roetzer: Yeah. Last year we had, what, 1,800, I think?
[00:59:19] Cathy McPhillips: Almost 1,900 last year.
[00:59:21] We're trying to like almost triple it this year.
[00:59:24] Paul Roetzer: Nice. I'm usually the one saying triple it, and you're usually the one saying don't say it out loud, don't set those goals. All right. Well, thanks Cathy. It was great. Thanks everyone for, you know, attending the classes, for asking great questions.
[00:59:38] And just a reminder: March 12th. Yeah, for our AI Mastery members, episode 200.
[00:59:44] Cathy McPhillips: Yes.
[00:59:45] Paul Roetzer: Yeah. So March 2nd, if you're an AI Academy Mastery member, you are invited to attend a live recording of episode 200. So on March 2nd, we're gonna do the live recording. Like, Mike and I are gonna show up to our usual thing.
[00:59:58] And then we're gonna just take questions [01:00:00] from the audience. After that, we'll turn off the recording and we'll just hang around and take more questions. So it's an experiment. We've never done one like this, 200 episodes in, so it should be fine.
[01:00:11] Cathy McPhillips: So the episode that drops on March 3rd, we're gonna be recording it on March 2nd in the morning.
[01:00:16] So if you're an AI Mastery member, you can log into the learning management system and sign up for that. And we'll see you at 9:00 AM on Monday for the recording.
[01:00:23] Paul Roetzer: Yeah, should be fun. Alright, thanks Cathy. Thanks everyone for joining us.
[01:00:26] Cathy McPhillips: Thanks everyone.
[01:00:28] Paul Roetzer: Thanks for listening to AI Answers. To keep learning,
[01:00:31] visit SmarterX.ai, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.