Let’s be honest: no one went to business school for this. We are entering a world where you have to manage humans and autonomous AI agents on the same org chart, and there is no manual for that kind of orchestration.
In this episode of AI Answers, Paul Roetzer and Cathy McPhillips cover why the agency billable hour is gone, the rising need for an "AI Output Verification" manager, and the strange reality that even the engineers building these models don't completely understand how they work. Listen or watch below, and scroll down for show notes and the transcript.
Over the last few years, our free Intro to AI and Scaling AI classes have welcomed more than 40,000 professionals, sparking hundreds of real-world, tough, and practical questions from marketers, leaders, and learners alike.
AI Answers is a biweekly bonus series that curates and answers real questions from attendees of our live events. Each episode focuses on the key concerns, challenges, and curiosities facing professionals and teams trying to understand and apply AI in their organizations.
In this episode, we address 14 of the top questions from our January 15th Intro to AI class, covering everything from tooling decisions to team training to long-term strategy. Paul answers each question in real time—unscripted and unfiltered—just like we do live.
Whether you're just getting started or scaling fast, these are answers that can benefit you and your team.
00:00:00 — Intro
00:03:38 — Question #1: AI Leverage for Marketing Agencies
00:07:44 — Question #2: The "Alien" Nature of LLMs
00:10:06 — Question #3: Responsible AI Mistakes to Avoid
00:13:07 — Question #4: Evaluating AI Platforms
00:16:32 — Question #5: Platform Consolidation
00:18:32 — Question #6: Building Internal Systems vs. Third-Party Tools
00:20:09 — Question #7: Data Privacy Concerns
00:23:09 — Question #8: Signaling Trust & Authenticity
00:25:47 — Question #9: Reinventing Workflows & Org Charts
00:30:50 — Question #10: How to Start Building AI Assistants
00:33:34 — Question #11: What You Should Never Automate
00:36:12 — Question #12: Scaling AI Too Fast
00:38:36 — Question #13: New Leadership Skills
00:41:42 — Question #14: AI Output Verification
00:45:29 — Bonus: AI Book Recommendations
This episode is brought to you by Google Cloud:
Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow. Google Cloud offers a powerful, fully integrated and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner.
Learn more about Google Cloud here: https://cloud.google.com
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: No one has gone to business school for that. There is no leader who has been trained to live in an environment where you have these AI agents that are capable of doing human tasks, and you have to not only envision your organizational structure with agents and humans, but then you have to manage the orchestration of all that, the risks of it, the limitations of it.
[00:00:22] Welcome to AI Answers, a special Q&A series from The Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI.
[00:00:42] But we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing, whether you're just starting your AI journey or already putting it to work in your organization.
[00:01:00] These are the practical insights, use cases, and strategies you need to grow smarter. Let's explore AI together.
[00:01:12] Welcome to episode 192 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Cathy McPhillips, Chief Marketing Officer at SmarterX. Hi, Cathy.
[00:01:22] Cathy McPhillips: Hello.
[00:01:23] Paul Roetzer: How are you today? I'm actually in the office. Cathy's at the home office. We are recording this on Tuesday, January 20th, which, anybody in the Midwest, I assume is dealing with the cold weather.
[00:01:34] It's like 15 below zero in Cleveland today, so I did not wanna leave the house this morning. My kids are at home. Everybody's at home in northeast Ohio. So, all right, today's episode is AI Answers, presented by Google Cloud. This is our series based on questions from our monthly Intro to AI and Scaling AI classes,
[00:01:53] along with some of our virtual events. And I think we have a question today that came in as a LinkedIn direct message I got, [00:02:00] beyond what we do in our intro class. So special thanks to Google Cloud for sponsoring this series. As part of our AI literacy project, we have an amazing partnership with the Google Cloud marketing team.
[00:02:09] In addition to sponsoring this AI Answers podcast series, Google Cloud is our partner for the monthly Intro to AI and Five Essential Steps to Scaling AI classes, which we do free every month, in part because of that partnership. They're also sponsoring a collection of AI blueprints, some of which we are gonna be releasing.
[00:02:27] Is that in February, Cathy? You're gonna probably talk about that.
[00:02:29] Cathy McPhillips: I am.
[00:02:29] Paul Roetzer: We have a big AI for Departments Week coming up in February. We're gonna launch three new blueprints and then our Marketing AI Industry Council. So you can learn more about Google Cloud at cloud.google.com. We have some other notes about other events that are coming up, so a bunch of free educational events that we've got through SmarterX over the next, like, 30, 45 days.
[00:02:50] Cathy, you'll touch on those at the end. So I guess we'll just kind of dive right in, Cathy, and you can give the overview of how this works. This is not a replacement for our regular weekly episode with Mike and I; that continues on. This is a special series that we do, and this is the 12th, Cathy, I believe. This is the 12th
[00:03:08] AI Answers. So, all right. I'll turn it over to Cathy. She'll give us a little rundown, and then she will guide us through the questions.
[00:03:13] Cathy McPhillips: Great. So, as Paul mentioned, Intro and Scaling AI happen once a month-ish. And then we take the questions that were not answered, or the questions that were answered that were just really good questions,
[00:03:23] and we bring them to you in podcast form. Claire and our team help us synthesize all of these so we can get them into a usable format for today. So, Paul, I know you squeezed this in between a few meetings, so I wanna rapid-fire some of these questions.
[00:03:35] Paul Roetzer: Okay.
[00:03:36] Cathy McPhillips: So let's get going.
[00:03:37] Paul Roetzer: Sounds good.
[00:03:37] Cathy McPhillips: Okay. Number one.
[00:03:38] Cathy McPhillips: For marketing agencies specifically, where do you see AI creating the most leverage in the next 12 to 24 months?
[00:03:45] And what roles, services, or ways of working will agencies need to rethink first?
[00:03:50] Paul Roetzer: Yeah, so if you're new to the podcast or aren't familiar with my background: I owned a marketing agency for 16 years. My agency was HubSpot's first partner back in 2007, [00:04:00] their first outside partner. That became kind of the origin of their partner ecosystem.
[00:04:04] So I'm very familiar with the marketing agency world. I sold my agency in 2021, but, you know, I've certainly stayed close to it. We have our AI for Agency Summit that's actually coming up on February 12th. So this is a thing; even though I'm not running an agency anymore, I think a lot about it. And I still have a lot of friends in that agency space.
[00:04:21] So, I don't know, a couple of things really come to mind to me from a service perspective, where I see potential value creation and where there's leverage on pricing, 'cause you can't just do what you've always done with billable hours. So if it's an agency, or, you know, if you're a brand person and you're thinking from the perspective of 'I work with agencies,' the traditional things that agencies got paid for just aren't viable as the future.
[00:04:47] So I really think about things like change management. So when we talk about AI literacy and transformation, it requires significant change management, and so I see agencies being able to play a significant role in that. I [00:05:00] think that helping drive adoption at a personalized level within teams,
[00:05:06] so like, if we're talking about marketing agencies specifically, in marketing departments, being able to go in and provide some education and training and consulting around adoption at a personal or team level. Like, what are the use cases and technology these companies should be using? And then I do think that there's gonna be
[00:05:21] a lot of opportunity to do agent and app development at scale. So we've talked a little bit about this on the podcast recently, how the barrier to build apps and, like, minimum viable products of ideas is just gone. Like, I've touched on Lovable a few times on recent podcast episodes. And so I think back to my agency days and the ability to, like, have an idea for a client, an innovation, a go-to-market idea, and then be able to just build a concept of it on the fly.
[00:05:50] So I do think that a lot of organizations are gonna really struggle with how to build and integrate AI agents and then AI-powered apps. [00:06:00] And so I think a lot of agencies have an opportunity to evolve and start offering more of those services, and you don't need to be a technical agency to do it.
[00:06:07] So, yeah, I think the future is still a little bit murky as to what exactly agencies look like in three years. But I think change management, AI agent building and orchestration, integration, like, those are just fundamental things that are gonna be needed by every company. And there's very few brands that we've interacted with that have a plan for how they're gonna do that.
[00:06:31] They're still really early, and that's usually a good sign for a service company to get in there and build some value-added services around it.
[00:06:40] Cathy McPhillips: Absolutely. You know, I've been working on the AI for Agency Summit marketing, and I'm working on an email going out this week, and I was just thinking about, like, I worked at two agencies.
[00:06:48] I had my own agency, by definition. My daughter works for an agency, and so it's, like, been very top of mind. And I was a trusted partner of my clients. Like, [00:07:00] it's probably very hard for them to say, like, we don't need you, because they love working with you. They trust you. They trust all the things you're bringing to the table.
[00:07:09] So, like, what else can we be doing to help those brands? It's not just, like, oh, we don't need you, you're doing this stuff that anyone can do. It's like, we actually really like working with you.
[00:07:20] Paul Roetzer: Yeah. But if you are only, like, 'I create landing pages' or 'I write emails,' like, if you're getting paid to write blog posts, if you're getting paid to do these very tactical things and you're not seen, as Cathy's saying, as an advisor, like a true consultant,
[00:07:34] then you could be in trouble. But if you are seen as a problem solver and someone who helps drive innovation, you're gonna have tremendous opportunities to keep going.
[00:07:42] Cathy McPhillips: Yep. Okay.
[00:07:44] Cathy McPhillips: Number two. If only a handful of major labs created these systems, how is it possible they don't fully understand how they work?
[00:07:51] And what should that mean for leaders applying or adopting AI today?
[00:07:56] Paul Roetzer: A little context, I guess, on this question. So the concept here is, like, how do they not [00:08:00] understand how they work? This is something I think I said during the intro class; it's probably where this question is coming from. So the way a language model works, which is, like, the fundamental underlying architecture within ChatGPT and Google Gemini and Anthropic's Claude, is the thing that's basically making it all possible.
[00:08:14] The technological capabilities, the engineers, the researchers who build and train these models don't fully know why they're able to learn and do things. They just know that when they give them data and provide examples of what good looks like, they just learn.
[00:08:34] And I think the example I might have mentioned on intro was, it's kind of like gravity. Like, we know it's a thing, we know how it works, but we don't really know why gravity exists. And so I think that's kind of how language models can be viewed. We fundamentally kind of get it. Like, we know what causes them to learn, but we don't know why it works that they learn. And so I think that's part of,
[00:08:57] you know, just kind of the [00:09:00] nuance of this situation we find ourselves in. It's a new kind of, I use the phrase 'alien technology,' because we don't really understand it. Now, that doesn't mean that we can't adopt it in businesses and figure out ways to apply it and figure out ways to, you know, drive efficiency and productivity and innovation and growth.
[00:09:18] But as leaders, just understand that this technology is still pretty new. We're still trying to understand the fundamentals of it. And it doesn't really change anything, but it enables you to then, I guess, unpack the limitations of it and the weaknesses of it. So if you do have a little bit deeper understanding of the nuances of how they work and how they're trained, you can start to realize, like, okay, so I kind of get now why they hallucinate, or why they might not be as factual as, like, traditional software, or why an AI agent isn't gonna just do the 10 steps that I'm telling it.
[00:09:52] It's not like that. It's kind of got some ability to figure out its own path. Like, that knowledge helps you as a [00:10:00] business leader better plan to use these tools in a responsible way.
[00:10:04] Cathy McPhillips: Yep. Excellent.
[00:10:06] Cathy McPhillips: Number three. Why does responsible AI matter more now than ever, even more than a year ago? What are one to two responsible AI mistakes you're seeing organizations make right now?
[00:10:16] And how can we self-correct while we're early on?
[00:10:20] Paul Roetzer: The responsible AI approach matters now because they're getting more powerful, they're getting smarter, they're getting more generally capable. And so in terms of, like, mistakes people might be making, it could be planning as though we have models from 2023, 2024 that didn't have reasoning capabilities, didn't have more autonomy in terms of agent capabilities.
[00:10:40] Like, these things are getting pretty sophisticated in terms of what they're capable of doing. And if, as a business or as a leader, you don't understand the implications of that, you can make missteps in your business strategy, your budgeting, your staffing structure. So all of those things start to come into play.
[00:10:58] So to build a [00:11:00] responsible approach to this, like a human-centered approach, you have to account for where the technology is and where it's going in the very near term. And a really tangible example of this is how long these AI agents can perform work without having to have humans involved. So there was a recent report, like last week, from Anthropic that said they've used Opus 4.5 for upwards of three to four plus hours where it's working on its own to build software, and the human doesn't have to be involved.
[00:11:33] Now, that's specifically in software and AI research, but we're starting to see that play out in other forms of knowledge work. And so to be responsible about the use of this, you have to understand that these things are getting better and better at what we call long-horizon tasks, which starts to creep into what all of us do.
[00:11:53] I think I used the example of a marketing campaign recently, on either the podcast or the intro class, where it's like, [00:12:00] hey, we're gonna launch this product on February, whatever, 15th. Let's go build a marketing program for it. You know, first build the plan, I'll approve it, and then, like, I want you to do the things.
[00:12:11] And so let's say it comes back and says, okay, we're gonna build a website page, we're gonna build a form, we're gonna write a three-part email nurturing sequence, we're gonna, you know, define the persona, we're gonna do this. It's like, that sounds great. Go do it. And in theory, there's no reason Claude couldn't come up with the first drafts of all of that stuff.
[00:12:28] And so if you're gonna be responsible about this, you have to accept that. And you have to be able to explain to the people on your staff who are currently doing that work, like, hey, we're gonna start shifting some things. So it all starts with understanding what's possible, and then from there you can be responsible about how you're gonna integrate it in a safe way into your organization.
[00:12:49] Cathy McPhillips: Yeah. And I think, you know, even from a year ago, as adoption grows in companies and enterprises, just training and onboarding and education is such a [00:13:00] critical piece. You might understand it, but do the people that you're imploring to use AI, do they understand it?
[00:13:05] Paul Roetzer: Right?
[00:13:07] Cathy McPhillips: Number four. When people ask which models or platforms they should be using beyond ChatGPT (this is a long question), how do you recommend they evaluate options without chasing every new release? So here's my question to you, since we've talked about this a lot lately. So much of it is, like, a personal preference, like what your company will approve or will pay for. So what are some guardrails, I guess, when you're considering excluding tools rather than deciding which ones?
[00:13:34] How do you decide which ones not to think about?
[00:13:38] Paul Roetzer: Yeah, I mean, part of this is probably just the guidance of your organization, whatever the IT and legal departments have decided, the operations team, like, what they're gonna give you access to. In terms of, like, chasing things, again, we'll assume this is within a business environment where some platform is being provided, but then you have all these other interesting things going on that you could be experimenting with.
[00:13:59] So [00:14:00] my general guidance on this is to focus on getting really good at a platform. There might be times when that platform isn't gonna be the right fit anymore. So, I don't know, let's say you're using Microsoft Copilot internally and you realize it has limitations, like, you know that your personal ChatGPT account can do more than what your Copilot is doing.
[00:14:22] Or Gemini, or Claude. So when you have specific use cases that fall outside of the capabilities of the platform or model that you're commonly using, that would be a good reason to go and explore something else. Go see if Claude can help you with something that's been, you know, not possible with your current platform.
[00:14:40] But when you start to think about all the other exciting technologies in image, voice, and video generation, app development, like, there's so many tools, so many exciting things to build, I think you just gotta be realistic about whether or not that's part of your role and your own career path.
[00:14:57] There's a bunch of tools I would [00:15:00] love to be testing every day that I do not have time to test. And so even our Gen AI app review series that we do as part of AI Academy, you know, our team is doing that. It's a lot of Claire and Mike right now, and they're getting to experiment with these tools that I would love to have time to do, but I don't.
[00:15:16] And so I have to kind of just watch the 15, 20 minute review that they do of it, because I can't go experiment with it. But at the same time, I am creating enormous value by focusing my usage on ChatGPT and Gemini, integrating them into everything I do and into my workflows. And so I'm okay with that.
[00:15:35] Like, it's enough for me to just get really, really good and spend, like, 90% of my time in those two platforms and not worry about the fact that I'm not getting to test everything else, because that's not the key value I create in the organization. The role I'm playing is not to be the one experimenting.
[00:15:53] My job is to maximize the use of the platforms that are gonna account for 80 to 90% of usage and value [00:16:00] creation within our company.
[00:16:01] Cathy McPhillips: Yep. So I'm down this Claude rabbit hole right now. I'm having a good time.
[00:16:05] Paul Roetzer: Are you? Are you using this? I know Mike's telling me every day new things he's doing with Claude, and it's like, I don't even know.
[00:16:10] I don't even understand what you're doing. And I mean, I obviously live this stuff, and there's some things Mike's trying with it and I'm like, man, you gotta show me that. I don't really even comprehend what you just said you're doing.
[00:16:19] Cathy McPhillips: Right. Yeah. Jeremy and I talked about that this morning, he and Mike. We talked about
[00:16:23] Claude Code, and it's like, oh, what an opportunity for us. So I'm excited to go test it out. Yeah. But I'm like, run it by Tracy first before we connect anything.
[00:16:30] Paul Roetzer: Yes, for sure. Definitely.
[00:16:32] Cathy McPhillips: Okay.
[00:16:32] Cathy McPhillips: Number five. Looking ahead, do you expect AI platforms to consolidate or will most professionals end up working across multiple tools and how should people think about what they're paying for as models keep changing?
[00:16:43] I know you just kind of addressed a little bit of that.
[00:16:46] Paul Roetzer: Yeah, I don't know about the consolidation. I mean, it seems pretty apparent that Google, Microsoft, OpenAI, and Anthropic to a degree are going to remain [00:17:00] major players. And, you know, obviously the dominant one right now, in corporates at least, is probably Microsoft and ChatGPT. In SMBs, I'm guessing Google probably has a bigger market share.
[00:17:12] I actually don't know exactly what the market share is, but our own data would tell us that that's pretty logical. I don't know how the consolidation would work. I think you're just gonna have these three to five major providers. Two of them are probably gonna be dominant in terms of the market share.
[00:17:29] It'll probably look like the cloud world, I guess, where you have AWS, Google Cloud, and Microsoft Azure. And I think you're probably gonna have something like that, where there's the two major players and then there's a third that, you know, maybe keeps coming up and taking some market share.
[00:17:45] And then there's gonna be, you know, a collection of long-tail platforms that have specific uses. Like, you know, I think of, like, a Harvey in the legal industry. But Harvey doesn't build their own models; they're built on top of, you know, probably OpenAI's APIs. [00:18:00] So I don't know about consolidation.
[00:18:02] I do think that most people will probably have their core platform. And let's say it's an 80/20 thing, where, like, 80% of your usage is gonna be in your primary platform, be it Copilot, Gemini, ChatGPT, and then 20% of your usage is gonna fall into a collection of other tools. So that's certainly how it is for me.
[00:18:20] Like, 80% easily is ChatGPT and Gemini, but then I dabble in Lovable and some of these other applications that maybe make up 10, 20% of my other usage of AI tools.
[00:18:31] Cathy McPhillips: Okay.
[00:18:32] Cathy McPhillips: Number six. From a governance and security standpoint, what are your thoughts on organizations building or hosting tailored AI systems on their own infrastructure instead of relying on third-party platforms?
[00:18:45] Paul Roetzer: I think in some industries and organizations, it's essential that they're building more internal systems that are more walled off, that they can control better. From a safety, risk, and compliance standpoint, I could see that being essential. [00:19:00] So I think that that's just always gonna continue to be the play. I think for
[00:19:05] a lot of others, like, especially small businesses, being able to just get up and running, you know, instantly on a platform like Gemini or ChatGPT, just outta the box, is a really intriguing thing. And then getting access to all the updates. What I've seen oftentimes with organizations who do choose to build through, say, the APIs of, like, an OpenAI or a Google, is the capabilities often lag behind what you can go get
[00:19:30] with the outta-the-box product. And that's kind of inherent, 'cause they're gonna build it on a previous-generation model, which is gonna be outdated by the time they get the APIs set up. They're gonna put in restrictions on it, so it limits some of the usage, the things they can do with it. And so that's just a trade-off.
[00:19:45] When you want to reduce risk and make it safer to use internally, it often comes with a reduction of the features and capabilities of the platform you're gonna have access to. It just seems like that's just how this is gonna play out, much like [00:20:00] any other technology over the last 20 years has played out.
[00:20:02] As it gets more controlled by IT, there's just gonna be more restrictions.
[00:20:07] Cathy McPhillips: Yep.
[00:20:09] Cathy McPhillips: Number seven. Many clients worry that sharing proprietary or unpublished content with AI means it effectively becomes public. Is that a valid concern today, and how should organizations address it? I know we get asked this every single class, every single time we do this. Has anything changed?
[00:20:26] Paul Roetzer: No, not really. I always, you know, tell people, check with your attorneys and, you know, make sure the terms of use that you've agreed to protect you from this. But if you're a business account with any of the major platforms, the AI companies, they're going to build in that they're not gonna train on your data and things like that.
[00:20:44] Again, you can't make a blanket statement and say, yes, it's safe, don't worry about it. Go look at the tools that you're using and see what you're agreeing to, and then make sure you're staying up on any changes to those. If you are using a free [00:21:00] product, just assume whatever you put in could be used in training data. But again, it's not like, if you upload some information,
[00:21:08] that exact thing gets thrown into a knowledge base that ends up being able to be searched by your competitors a year from now. That's not how this works. It's just training data that goes in, and it learns all kinds of things; it doesn't recall a specific document you gave it, per se. So, yeah, I mean, I think people's concerns around this are probably overblown in most cases, but that does not mean you shouldn't take a
[00:21:35] precautionary approach to putting in sensitive information. I do know that, like, I've spent some time with law firms specifically, you know, on the IP side, where anything related to patents, there's just no way they would ever put stuff like that in any AI. So there's always caveats to this.
[00:21:54] So I would say, overall, we probably don't need to be as [00:22:00] cautious as we are. Yeah, but we should still do our homework and make sure we and our legal teams are confident that whatever we're putting in is okay to be put in.
[00:22:11] Cathy McPhillips: And there's different degrees too. You know, you've talked before about you've put a lot of company data in some of these tools.
[00:22:15] Yes. Because the output and what you're receiving is greater than the risk for you. But then we talk about, like, we'd never put customer data in there. Correct. Even if it, even if it says, oh, we won't train on this. We're not sharing any of our customer data in any of this.
[00:22:29] Paul Roetzer: Yes. Yeah. And I mean, I guess unless you had, like, a proprietary internal model that isn't connected to the cloud, and it's, like, staying on your server and things like that, then that might be a different story.
[00:22:40] Or, you know, I guess, if you think about, like, HubSpot has AI that has access to customer data, and it's, like, baked into our CRM. Like, again, there's always these sorts of caveats and footnotes to, like, every decision that's made. But I think that's where your generative AI policies become so critical, is that everyone on your [00:23:00] team is clear on whatever your organization's policies are, and they know to follow those.
[00:23:07] Cathy McPhillips: Okay.
[00:23:09] Cathy McPhillips: Number eight. As AI reshapes products and decisions, and trust and data become foundational, how should organizations actively signal trust in both their data practices and AI-driven outputs?
[00:23:20] Paul Roetzer: Transparency. I mean, this is a pretty straightforward one. I think it's: have, you know, your generative AI policies, your AI principles, clearly documented. Make sure they're infused into the training programs within your company,
[00:23:33] that they're not just words, you know, on a screen, that they're actually lived within the organization. And then figure out which elements of that need to be shared publicly and that you need to be transparent about. So, you know, I'm not a big proponent of, like, every single post you put up or email you send needs to say, you know, 'I used AI in these three ways to do this thing.'
[00:23:55] Like, I think we've moved past that in most cases. But the [00:24:00] example I always give is, like, if authenticity matters, then you might need to disclose whether AI was used or not. So for my exec AI newsletter that I do every Sunday, I write that with zero AI usage, and I do put at the bottom, 'this was a hundred percent written by me,' because I think, especially for the editorial part, that's what people are signing up for.
[00:24:22] They want to hear from me, not my AI. And so I don't use it in the editorial writing. Same with my LinkedIn posts. Now, I don't tag every LinkedIn post with 'a hundred percent written by me,' but I'm clear in saying that, like, I write all my LinkedIn posts.
[00:24:37] Now, I'm a writer by trade, so I'm also not saying that every CEO or every leader should follow how I do it.
[00:24:46] It's what I do for a living, so I'm comfortable writing and I enjoy writing. Some people don't. And so if you're using AI in different ways, like, it's a very personal thing. It's a subjective thing. There's no, like, true hard-and-fast rules as [00:25:00] to what should and should not be AI-assisted. But everybody's gotta figure that out for themselves, and my main thing is, if people expect authenticity and expect it to be your voice, you gotta be really careful about how much you're using AI in that process.
[00:25:16] Cathy McPhillips: I wrote an email last week for MAICON, and someone was like, was that AI? And I was like, excuse me. No, that was me. That was me sitting down watching TV, and all of a sudden I was like, oh my gosh, I've got an idea. A great idea.
[00:25:27] Paul Roetzer: Yeah. But that's the joy of, like, you and I enjoy that creative process of writing.
[00:25:32] We like coming up with the idea and, you know, being clever and writing three versions of it and things like that. Some people, it's not their thing. Right? I get it. So, again, there's no right or wrong answer necessarily.
[00:25:46] Cathy McPhillips: Okay.
[00:25:47] Cathy McPhillips: Number nine. As AI becomes embedded across platforms, how do you see workflows changing inside marketing departments and agencies, from creative development to client handoffs, approvals, and feedback loops?
[00:26:01] Paul Roetzer: Completely reinvented. Like, the more time I spend on this, and the more we do this internally, I just think that workflows are gonna be fundamentally transformed.
[00:26:14] I think I shared this recently on a podcast episode, maybe when I was talking about the org chart app that I was building in Lovable. As I'm building it, I'm actually thinking about each role in our company, you know, as we plan to hire more people. And I'm thinking about the workflows those people perform, and I'm trying to imagine, like, 12 months from now, where's the human's role in those workflows and what's the AI's role?
[00:26:42] And so as an example, like, we don't have SDRs on staff. So we don't have somebody who, you know, let's say, looks at companies that come to our Intro to AI class and then does outreach to them. Like, that's not a fundamental role we have hired for. A lot of, like, SaaS companies will have SDRs that are going through the [00:27:00] leads and trying to qualify 'em and then hand them off to the sales team to close the deals.
[00:27:05] And so I'm looking at this thinking, I don't know that we're ever gonna hire an SDR. Like, I don't know that the workflow of an SDR is something that's gonna be a human role. Or if we do, it might be one instead of five down the road. Like, you might have an SDR, but that person's job may be more AI or agent orchestration than it is doing the actual outreach.
[00:27:28] It's, like, managing the SDR bots, in essence, that are automating all of this work. And so we're gonna go through the process of developing the SDR workflows ourselves with humans, and my main goal is that we can then probably automate it, so we don't have to hire a bunch of SDRs. I don't think that's a role that is gonna be extremely valuable within our organization.
[00:27:50] Something similar we're doing with customer service, where, you know, traditionally you could just have humans doing that. But the reality is that most people just want a quick answer [00:28:00] to something, and that is pretty predictable. It's gonna be one of, you know, a long tail of 300 things they're gonna look for.
[00:28:07] And honestly, like, an AI agent's just better at getting them the answers to those things
[00:28:11] Cathy McPhillips: faster.
[00:28:12] Paul Roetzer: Yeah. And we wanna free our humans up to actually get on a call, jump on a Zoom, like, face-to-face, solve something. We don't want them doing all these, like, mundane things that aren't fun for anybody, and where they're not gonna get the answer as fast. So when I think about the workflow that goes into customer support, it's like, okay,
[00:28:28] where is the AI agent gonna fit in, to where we don't need those things? And so, truly, with every workflow we go through, I think over the next 12 months, across every department in our organization, we will have these conversations. It's like, okay, where's AI at now that we can fit it in?
[00:28:47] Where is it gonna be in six months, so that we don't wanna hire for this role 'cause we think AI is gonna solve it? And so, yeah, I truly believe, and I know I'm answering specifically for marketing departments and agencies, I guess, 'cause that's the [00:29:00] question, but this answer is universal:
[00:29:02] every department and every organization should be going through this process of analyzing workflows. And then the other thing, Cathy, is we're just analyzing what we think is an optimal workflow. Like, okay, here's the 12 steps we go through to do that thing. Well, what if that's not the right way to do it?
[00:29:18] What if there's a more efficient, more innovative way to solve the problem? And so we're using AI to actually say, hey, here's our workflow. How would you make this workflow better? And then, where do AI and human fit within a more optimized, more innovative workflow to solve the problem or create the value for the end user or the stakeholder?
[00:29:39] And I think people who can do what I just explained within a company, at a department level or at a, you know, horizontal ops level, people who have the knowledge of AI's capabilities, have the business sense of what a workflow looks like, or are able to work with department leaders to define those, those are insanely valuable employees.
[00:29:59] So, like, if what I [00:30:00] just explained sounds like you and something you wanna do, you've got some really good job security for the next, like, five to seven years, because everybody's gonna need to do this, and most companies are gonna lag dramatically behind in terms of figuring this out.
[00:30:13] Cathy McPhillips: Yeah. It's also interesting. I feel like, you know, with the client handoffs, approvals, feedback loops, AI is just surfacing that, like, some things have just been wrong or not efficient for a very long time.
[00:30:24] Paul Roetzer: Yeah.
[00:30:24] Cathy McPhillips: You know?
[00:30:25] Paul Roetzer: Yeah. And in some corporations, inefficiency is not only accepted but sort of, like, rewarded. Like, I mean, let's be honest, there's some bigger corporations where, like, slow-moving is kind of the comfort zone for most people, and they don't want things to move faster or be that much more efficient. And I just don't think that's gonna fly moving forward.
[00:30:50] Cathy McPhillips: Okay. Number 10. For people who've learned plenty of AI theory but want to actually build assistants, GPTs, or systems, what's the fastest and most effective way to get hands-on?
[00:31:00] Paul Roetzer: I would go into your favorite AI platform and ask it how to build one. Like, I mean, if you're an AI Academy member, which is our online education platform,
[00:31:12] I have a course on how to build a GPT. It's basically how to build an AI assistant, so it, like, walks you through it. So that's available. But you could go and search for this; say, how do I build a GPT? How do I build an AI assistant in Google Gemini? Or just go straight into the Gemini app or ChatGPT and say, hey, here's what I do.
[00:31:30] I would love to figure out how to build some GPT to help me be more effective. And it'll guide you. You can literally say, like, okay, I really like the idea for this GPT. Can you write a system prompt for me that I can use to train the GPT? It'll do it. So, to kind of shortcut it, you could use our JobsGPT.
[00:31:50] We'll put a link in the show notes for that. And say, hey, my role is this. I'm a partner in a law firm, or I'm an HR executive, or I'm a head of [00:32:00] operations, whatever it is. How could I be using AI? And JobsGPT will actually, like, lay out a bunch of ways you could be using it and give you some rationale as to which ones might be most valuable.
[00:32:09] And then you could say to it, okay, help me prioritize, like, three GPTs I could build that are gonna create the most efficiency for me based on these tasks you just identified. And it'll go through and do that. So, like, you can just talk to JobsGPT about it, and it'll help you.
[00:32:25] Cathy McPhillips: The first one I built, I actually had two monitors and two instances of ChatGPT opened up.
[00:32:30] And in one, I'm saying, what's the prompt? What questions? You know, it was, like, basically giving me all the answers to just cut and paste into the other. And I edited it, 'cause a lot of it just didn't make sense for me, but I'm like, okay, let's give this a spin and see how it goes. Went back in, edited it, made it better.
There's a lot of just trial and error with that first one.
[00:32:50] Paul Roetzer: And one of the things I've found really helpful, and I've mentioned this on some recent episodes, is when I give my prompts and I'm working on something like this, I will say specifically to ChatGPT or Gemini, let's [00:33:00] do this step by step together.
[00:33:02] And then what it does is, it doesn't just, like, poof, vomit, like, 5,000 words, and, like, here's your thing. It's like, no, I wanna go through a process. I wanna make decisions. I want you to help me do it. So if you just say, let's do this step by step, the AI assistant will actually go,
[00:33:19] okay, sounds good. Step one, let's do this. What do you think? That sounds great. Okay, step two. And you can just go back and forth with it. The best way to use any of your favorite AI assistants as an advisor and a consultant is just, like, tell it to do it step by step with you.
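If you want to try this same pattern outside the chat interface, here's a minimal sketch of the "step by step together" prompt using the OpenAI Python SDK. The model name, prompt wording, and use case are illustrative assumptions, not specifics from the episode:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Seed the conversation with the "step by step together" instruction Paul
# describes, so the model proposes one step at a time instead of dumping
# a full 5,000-word draft in a single pass.
messages = [
    {
        "role": "user",
        "content": (
            "I want to build a GPT that helps me draft weekly client status "
            "reports. Let's do this step by step together: propose step one, "
            "then stop and wait for my feedback before continuing."
        ),
    }
]

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=messages,
)
print(response.choices[0].message.content)
```

Each follow-up turn appends the assistant's reply and your feedback to `messages` and calls the API again, which mirrors the back-and-forth Paul describes.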
[00:33:34] Cathy McPhillips: Yep. Number 11. What's a decision organizations think can be automated but absolutely shouldn't be, once AI scales?
[00:33:43] Paul Roetzer: Wow. That's a good one. I've seen a lot around HR, where, you know, AI's getting integrated into the HR process of reviewing resumes, prioritizing candidates, things like that. That feels like where we really need the human in the loop. [00:34:00] I do think AI can be used to streamline a lot of things in HR and the talent side of businesses, but it should only be to free up humans to spend more time with humans, to, like, figure out who the right humans are to be part of an organization.
[00:34:11] So I feel like it's scaling, you know, our ability to do HR better, but we shouldn't remove the human. I think customer success is similar. I explained our customer support concepts earlier, but all that automation we're looking to build in is to free up the human, so when someone says, 'I want to talk to a human,' we have humans available to do that.
[00:34:33] So everything we're doing with AI is to create a more human organization, where, when the human touch or authenticity matters, that's what we're here for. And some decisions can't be made by AI; like, the AI can simulate empathy, but it's not actually empathetic. Like, we need humans with empathy, and we need humans with creativity, and things like that.
[00:34:54] That's the stuff you can't just scale by, you know, spending another 20 bucks a [00:35:00] month, or buying another few licenses, or doing some AI training. Like, that's not gonna solve that. So, yeah, I mean, I think for those core decisions that are fundamental to the organization, the human needs to be in the loop, if, you know, certainly if not the final decision maker.
[00:35:17] But I do think that more and more things that right now, today, we think, oh, AI can't decide that, it can't just take those actions without us, I think a lot of those barriers are gonna come down for some, like, early adopters and innovators who are gonna get more and more comfortable with AI's ability to, you know, make some decisions that are not
[00:35:36] critical. There's always that analogy; I think Jeff Bezos always had, like, the one-way door, two-way door problem at Amazon. It's like, if it's a one-way door thing, like, we make this decision and there's no turning back, that's gotta be humans. If it's a two-way door decision, like, hey, we make the decision, it's not right,
[00:35:52] we can just walk back out the door and, like, go make a different decision and come back in. I think a lot of companies will maybe look at it in that way and say, all right, if it's the two-way door thing and we can backtrack on this and we can fix it, then AI maybe plays a greater role. If this is, like, fundamental to the organization, there is no turning back.
[00:36:10] You are not relying on AI to make that decision.
[00:36:12] Cathy McPhillips: Yeah. Okay. Number 12. What's one early sign that an organization is scaling AI faster than its ability to govern it?
[00:36:21] Paul Roetzer: Yeah, I'm just trying to think, like, in our own organization, if I've run into this or not.
Yeah, I think it's more instinctual for us at this point. Like, now, obviously, we're pretty much at the frontiers of this. Like, we're seeing and experimenting with technology faster than most other organizations. And so in some cases there's a level of risk we're willing to take on, because we're trying to kind of stay out on the frontiers and see this.
[00:36:52] But, yeah, I don't know. I think it could be a people thing. It could be that people have too much fear and [00:37:00] anxiety, and if you were to, like, poll your people on how they feel about AI and the sentiment is not overall good, then maybe you're just moving faster than your people, you know, permit. And it's not a human-centered way to do it
[00:37:12] if, like, it feels like you're leaving people behind, or they're telling you they feel like they're being left behind, or they're not clear exactly what's going on. I would say it's probably that; it's probably more qualitative. It's, like, being in tune with what's going on in the organization, how people are feeling about it.
[00:37:30] Are they understanding why we're doing it? Do they understand the technology we're using? Oh, this would actually be a good quantitative one: utilization rates of the AI tech you're giving them. So let's say you bought 300 ChatGPT licenses and 20% of your staff are weekly active users. You're probably moving too fast.
[00:37:48] Like, they don't get it. They're not actively using the technology you're giving them to do their job better. So, yeah, actually, I guess as I'm talking and thinking out loud here, utilization rate of the tech you [00:38:00] give them is maybe the greatest indicator. And so if you're continuing to move and move, and you still ain't got, like, 20, 25% of the staff who are even trying it, or maybe it's daily active users you should be monitoring, then you got a problem, and you gotta, you know, rescale.
[00:38:14] So, yeah, I think it's probably a mix of qualitative, where you survey them and find out how they feel about it and what they're thinking about it, and then there's the quantitative: monitoring utilization of the technology that you're giving them.
[00:38:26] Cathy McPhillips: And the bottom line of both of those things is education.
[00:38:28] Paul Roetzer: Yes, for sure.
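To make Paul's quantitative signal concrete, here's a back-of-the-envelope sketch of the utilization check he describes. The numbers and the 25% threshold are illustrative, drawn loosely from his example rather than from any formal benchmark:

```python
# Hypothetical figures from the example in the episode: 300 ChatGPT licenses,
# with weekly active users pulled from your platform's admin usage report.
licenses_purchased = 300
weekly_active_users = 60

utilization = weekly_active_users / licenses_purchased
print(f"Weekly active utilization: {utilization:.0%}")  # -> 20%

# Rough rule of thumb from the episode: if only ~20-25% of staff are even
# trying the tools, the rollout is likely outpacing training and adoption.
if utilization < 0.25:
    print("Warning: adoption may be lagging the pace of the rollout.")
```

Daily active users work the same way; the point is to pick one utilization metric and track it against the pace of the rollout.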
[00:38:36] Cathy McPhillips: Number 13. As AI becomes more embedded, what new skills do you think will matter most for leaders, not just practitioners? And where do leaders need to get uncomfortable?
[00:38:43] Paul Roetzer: Orchestration of all the AI technology. This is one I'm starting to feel myself already. I saw a Business Insider article.
It was one of the consulting firms. The CEO was like, yeah, we have [00:39:00] 50,000 employees and 20,000 more AI agents, or something like that. And I was like, what? Like, how? How does that even work? Like, what are you talking about? But I do think that there's this challenge as leaders where we have to start orchestrating humans and AI together.
[00:39:19] And no one has gone to business school for that. Like, there is no leader who has been trained to live in an environment where you have these AI agents that are capable of doing human tasks at varying levels, varying levels of autonomy, and you have to not only envision your organizational structure with agents and humans,
[00:39:38] but then you have to manage the orchestration of all that, the risks of it, the limitations of it. That is, like, an entirely new skill and vision needed by leadership that nobody really has, and there's no other way to do it but be uncomfortable. I'm in the middle of this myself, and I'm trying to figure out what this looks like, [00:40:00] and how we'll integrate these elements, and when do we start putting AI agents on the org chart versus just
[00:40:06] embedding them in parts of workflows. Like, I'm modeling this right now. I have variations of how this could look. And I think for most leaders, it's knowing to even be thinking about these things and asking these questions, and then developing systems to manage it. So let's say you're like that example, you know, the consulting firm, and, you know, maybe you have a hundred GPTs or agents that are in use around the company.
[00:40:31] How are you tracking that? Like, where is an agent tracked? Who's using it? Because it's not all gonna live in your ChatGPT license. Like, you might have a Lovable agent that's building apps for your marketing team or your sales team. It's like, okay, where's the visibility for that? These things don't all talk to each other.
[00:40:49] How do we unify all this information about all these different AI tools that are being used, and who's testing Claude on their computers, and things like that? So, yeah, there's all kinds of [00:41:00] operational issues around this. But, you know, I think that's a big one, the technology side. And then the human management is just a whole other element that leaders have to be familiar with,
[00:41:09] because no matter how much the technology makes possible, there's a whole bunch of humans who don't love this stuff and have some fear, certainly anxiety, and you can't underestimate the friction people can cause to technological change. And so as leaders, we have to figure out how to navigate this, and that's uncomfortable and unclear.
[00:41:33] So, again, I'm going through this myself. I talk to leaders every day who are trying to figure these things out, and there's no books written or blueprints to turn to for how to do it.
[00:41:42] Cathy McPhillips: Yep. Okay. Number 14. You've talked about the idea of an AI output verification manager. How credible is this as an emerging role, and could verification itself become a scalable business model?
[00:41:56] Paul Roetzer: For research firms and media firms?
[00:41:58] It is today. [00:42:00] Like, I mean, either it is a skill or a role of an existing person. So, like, your editors, for example, research assistants: they're doing this. Like, they're already verifying these things. And I do think that, I guess on the brand side, like, if you're publishing stuff, you should be doing it too.
[00:42:21] But yeah, those are the ones: again, media, research, and, you know, marketing teams that are publishing content. I think it's essential. And so, even if it's not a role where, like, we put a job up for this person, it has to be part of the responsibilities of someone who is in the workflow chain that publishes the final pieces of things.
[00:42:42] So another one, I guess, would be, like, law firms. I've seen plenty of instances where law firms have used AI to create briefings and, you know, documents they submit to judges, where they didn't verify the information themselves, the citations. So anywhere you have to verify [00:43:00] data, statistics, names, anything like that, any kind of facts
[00:43:05] that are going out publicly or being used internally to make decisions, someone has to be verifying, if AI has been used in the process. So at minimum, it's a responsibility of anyone in that workflow. And I could certainly see, in some of those other instances I talked about, like, you know, research firms and media firms, where it might just be a role.
[00:43:24] It's someone who gets really good at working with AI models and knows the ins and outs and the hallucinations and things to watch for. And if there's enough, you know, capacity needed to justify a full-time role, I could see it maybe being that at a firm like the ones I mentioned.
[00:43:42] Cathy McPhillips: Do you think it would be easier to take someone internally who has the institutional knowledge and the experience to become that person?
Or do you think someone who actually knows how to use the models is a better starting point?
[00:43:53] Paul Roetzer: I think if you already have those people on staff, certainly it could be. But, you know, for us, we're unique. Like, sometimes I [00:44:00] have to get outside of our bubble at SmarterX, because we hire a lot of communications people, you know, people trained in journalism.
[00:44:08] So for us it's a natural thing, and we're just like, of course you're gonna verify facts; like, who would ever publish if you didn't? But then you realize a lot of people don't know that these models hallucinate the way they do, going back to an earlier question about, you know, how much we really need to understand how they work and how the models are built and things like that.
[00:44:25] So if you don't understand the limitations and the hallucinations, you might not even know to have this person. But if you have editors internally, if you have people who would check citations regardless of where they came from, then that person's just evolving their role and doing these things. But if you don't have that person, and you plan to start creating more content, doing more podcasts, more webinars, more research reports, more, you know, downloads as part of your marketing funnel, because AI can all of a sudden help you create all that stuff,
[00:44:56] and you haven't solved for verification, then [00:45:00] yes, you need someone to do that, and you might have to go hire for it.
[00:45:03] Cathy McPhillips: Yeah. Still, even with your newsletter every week, I cut and paste, you know, proper nouns and quotes into something to make sure that it's actually correct.
[00:45:12] Paul Roetzer: Yep.
[00:45:12] Cathy McPhillips: Not that I don't trust you. But I mean, I've clicked.
[00:45:16] Paul Roetzer: It's just what we do.
[00:45:18] Cathy McPhillips: It's what we do. Verify every fact. Yann LeCun: I've checked his name 5,000 times, but I still do it.
[00:45:22] Paul Roetzer: Is there two Ns in the first or the last name?
Cathy McPhillips: Exactly.
[00:45:26] Paul Roetzer: I know. I did the same thing.
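As a toy illustration of the manual proper-noun check Cathy describes, here's a small sketch that pulls likely names out of a draft so a human can verify each one before publishing. It's a naive heuristic, not anything used by the SmarterX team, and it doesn't replace human fact-checking:

```python
import re

# Capitalized multi-word phrases are a rough proxy for names worth checking
# (e.g., "Yann LeCun"). A human still verifies each one against a source.
draft = (
    "Yann LeCun discussed open models, and Paul Roetzer covered agent "
    "orchestration at a recent event."
)

pattern = r"\b[A-Z][a-zA-Z]+(?:\s+[A-Z][a-zA-Z]+)+\b"
for name in sorted(set(re.findall(pattern, draft))):
    print(f"Verify: {name}")
```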
[00:45:27] Cathy McPhillips: Okay. So there were 14 questions.
[00:45:29] Cathy McPhillips: I've got one bonus question. Okay. So, our friend Amy Martin texted me, and I'm like, who's texting me at 6:00 AM? And it was Amy, and I texted her back at, like, 6:02. So she is looking for a good book to dive into, preferably AI or anything on advanced technology. So, any current recommendations, or anything like OG books that you would just recommend to this audience?
[00:45:49] Paul Roetzer: Yeah, so I've got that whole stack of books, you know, out on the shelf in the office. I don't know, some of my favorites, I guess, that come to mind would be, like, Genius Makers [00:46:00] is one I always recommend. Just for context, it's from, like, 2020, 2021, but I always loved it. The Algorithmic Leader by Mike Walsh, who, you know, was a keynote at MAICON in 2024, is a real classic. It's pre-gen AI.
[00:46:14] Both of those are pre-gen AI, but they're really good. The AI-Driven Leader, is that it? Yeah, The AI-Driven Leader by Geoff Woods, who was a keynote this year, is, like, one of our current favorites. That's actually the book club book for Academy, right? Yep. Did we announce that yet, or did I just preemptively announce it?
[00:46:31] Cathy McPhillips: Well, per usual, you just announced it on the podcast before anyone else knew about it.
[00:46:34] Paul Roetzer: So, we have AI Academy; we're gonna launch a book club, and that's gonna be the first book. So yes, I guess I just preannounced it right now.
[00:46:42] Cathy McPhillips: Guess we're, guess we're reluctant.
[00:46:42] Paul Roetzer: Yeah. Yeah, we're doing that. Just kidding. Um, what's the one by Karen Hao?
Empire of AI. Empire of AI? Yeah. If you're more interested in not the downside of all of this, but the reality of all of this, and, like, the concern around consolidation of [00:47:00] power, and a few labs basically deciding the future of humanity, and who the people behind those labs are, Empire of AI is a really good read.
[00:47:07] It was a New York Times bestseller. So, yeah, those are, I don't know, a few that kind of just jump top of mind.
Cathy McPhillips: Co-Intelligence, always.
Paul Roetzer: Co-Intelligence. Yeah, that's a good one.
[00:47:17] Cathy McPhillips: Ethan Mollick.
[00:47:18] Paul Roetzer: Yep. I mean, I'm reading some different books right now. I actually have a book that's not AI right now, which is weird.
[00:47:25] I haven't read a non-AI book in, like, five years. But, yeah, those are some good ones.
[00:47:31] Cathy McPhillips: Okay, so I'm going to close with: we have five upcoming free virtual educational classes, webinars, and events coming up. So I'm just gonna run through this list really quick. We'll include the links in the show notes; if you go to artificialintelligenceshow.com and click on show notes,
[00:47:46] you'll see them in episode 192's post. So: January 22nd, Five Essential Steps to Scaling AI; if you're listening to this on Thursday, it actually is at noon Eastern today. January [00:48:00] 27th, the 2026 Marketing Talent AI Impact Report that Mike put together; there's a webinar corresponding with that, and you can go look at the report the same day. The 29th, we have
[00:48:11] How AI-Native Research Is Fundamentally Changing Business Decisions with a great company, Readingminds.ai. Our next intro class is February 10th. Our AI for Agency Summit is February 12th. Again, all of those are free. And then we have an AI for Departments series coming up in later February, but we'll talk about that next time.
[00:48:30] Paul Roetzer: On the webinar front, there's SmarterX.ai/webinars. Like Cathy said, we'll put links to all of these in the show notes, but you can go on SmarterX.ai and get access to all this information as well. All right, Cathy, that was good. The questions are always my favorite part of doing these classes. So, like Cathy said, we have a Scaling AI class coming up on Thursday the 22nd.
[00:48:53] We will do another AI Answers special episode next week, answering questions from that [00:49:00] class that we don't get to during the live session. So thank you again to Google Cloud for partnering with us on this series, and we will be back with episode 193, our regular weekly episode with me and Mike, next week.
[00:49:13] Cathy McPhillips: Sounds great. Thanks, Paul.
[00:49:14] Paul Roetzer: Thank you. Thanks for listening to AI Answers. To keep learning, visit SmarterX.ai, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it. That's it for now.
[00:49:33] Continue exploring and keep asking great questions about ai.