Think you’re asking the right questions about AI?
In this episode of The Artificial Intelligence Show, Paul Roetzer and Cathy McPhillips tackle questions from our audience about AI adoption, from reimagining business models to managing risk in regulated industries.
With candid insights, real-world use cases, and a few unexpected laughs, this “AI Answers” session reveals where companies are getting stuck, how to move past resistance, and the most critical AI skills professionals need to help shape their future.
Listen or watch below, and find the show notes and transcript further down.
Over the last few years, our free Intro to AI and Scaling AI classes have welcomed more than 40,000 professionals, sparking hundreds of real-world, tough, and practical questions from marketers, leaders, and learners alike.
AI Answers is a biweekly bonus series that curates and answers real questions from attendees of our live events. Each episode focuses on the key concerns, challenges, and curiosities facing professionals and teams trying to understand and apply AI in their organizations.
In this episode, we address 15 of the most important questions from our September 24th Scaling AI class, covering everything from tooling decisions to team training to long-term strategy. Paul answers each question in real time—unscripted and unfiltered—just like we do live.
Whether you're just getting started or scaling fast, these are answers that can benefit you and your team.
00:00:00 — Intro
00:06:48 — Question #1: How have you seen AI get introduced to a financial services firm as they are highly regulated?
00:09:10 — Question #2: What guidance would you give leaders who want to fundamentally reimagine business models for the next decade?
00:15:08 — Question #3: How do your five steps for scaling AI apply when an organization has one person leading company-wide adoption?
00:19:28 — Question #4: How do you actually convince leadership to commit the resources and build true AI enablement across the business?
00:22:49 — Question #5: If a company isn’t actively using AI agents yet, do they still need to consider policies and guardrails around them?
00:26:22 — Question #6: For independents or loosely connected teams, is it even possible, or advisable, to share a single enterprise AI account?
00:29:59 — Question #7: If a company doesn’t have an AI Council but leadership wants a vision for each department, where can someone start learning what AI can realistically do in each function?
00:33:14 — Question #8: What are your best practices for training newer AI users?
00:35:06 — Question #9: How do you drive stronger engagement in AI enablement trainings when individual contributors already feel too busy with their day-to-day work to spend time learning AI?
00:36:16 — Question #10: What is the best way to handle a situation where AI got something wrong?
00:40:41 — Question #11: For new and early-career professionals, what essential skills or habits are most critical for proactively shaping the future with AI, rather than just reacting to it?
00:47:08 — Question #12: How should marketers weigh the legal and reputational risks of AI-generated content when companies can't always claim ownership?
00:49:50 — Question #13: Relative to all the expectations around AI, where have you seen it fall the shortest in practice?
00:52:06 — Question #14: A lot of people are learning how to prompt AI more effectively, but how do you also train and guide it to be used ethically in the workplace?
00:54:56 — Question #15: Of the five essential steps to scaling AI, which step is the most challenging for organizations? What do you see leading organizations do differently?
This episode is brought to you by Google Cloud:
Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow. Google Cloud offers a powerful, fully integrated and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner.
Learn more about Google Cloud here: https://cloud.google.com
This week’s episode is also brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types.
For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: Getting value out of AI right now comes down to asking good questions. If you know the questions to ask of a chatbot, you can get a tremendous amount of value with $20 a month. Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute.
[00:00:25] Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI, but we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing, whether you're just starting your AI journey or already putting it to work in your organization.
[00:00:53] These are the practical insights, use cases, and strategies you need to grow smarter. Let's explore AI [00:01:00] together.
[00:01:04] Welcome to episode 171 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Cathy McPhillips, Chief Marketing Officer at SmarterX. Welcome back, Cathy.
[00:01:14] Cathy McPhillips: Thank you.
[00:01:15] Paul Roetzer: We do this a couple times a month. If you're new to the show, we do these AI Answers episodes a couple times each month.
[00:01:21] This is in addition to our weekly episodes that drop on Tuesdays. AI Answers is presented by Google Cloud. This is a series based on questions from our monthly Intro to AI and Scaling AI classes that Cathy hosts with me. So if you're not familiar with those, each month we do a free Intro to AI. We have now done 51, Cathy, does that sound right?
[00:01:41] 52. 52. All right. I'm losing track. So we started that in fall of 2021, a year before ChatGPT, we started teaching an Intro to AI class. We've had 45,000-ish register for that class through the years. And then the Scaling AI class, the five essential steps to scaling AI, we [00:02:00] started doing in... 2024, sound right, Cathy?
[00:02:03] Yes. Mid 2024. And we just did number 11. All right. See, I think it might
[00:02:10] Cathy McPhillips: be, I think 52, Intro is our next one.
[00:02:12] Paul Roetzer: Okay. All right. There we go.
[00:02:13] Cathy McPhillips: They all blend together after a little while.
[00:02:15] Paul Roetzer: They really do. So the format for those is I present, and I kind of update it each month, but we present, you know, roughly the same class each month.
[00:02:22] It's part of our AI literacy project, just to provide free education and training for people to get them, you know, introduced to artificial intelligence and help scale it responsibly within organizations. And so when we do these, Intro to AI will normally get 1,200 to 1,500 registrants.
[00:02:38] Scaling AI is usually in the 600 to 800 range. And so we get great attendance on these and we get tons of incredible questions, and there's often more questions than we can get to in the live class. So what we then do is Claire and Cathy go through and curate questions that are left over, and we turn those into this AI Answers podcast [00:03:00] series.
[00:03:00] So, as I mentioned, Google Cloud is our presenting sponsor for this. We're grateful for Google Cloud and our partnership with them. We have an amazing relationship with the marketing team over at Google Cloud and are doing a ton of interesting things together. In addition to this series, they are also our presenting partner for the Intro to AI and Five Essential Steps to Scaling AI classes.
[00:03:20] We are teaming up on a series of AI blueprints that'll start coming out this fall, and then our Marketing AI Industry Council. So you can learn more about Google Cloud at cloud.google.com. And I have mentioned this on numerous recent podcast episodes, as well as in the Intro to AI classes: I would check out their AI Boost Bites.
[00:03:39] So this is a new series of short training videos that they designed to help build AI skills and capabilities in 10 minutes or less. There's a few dozen, I think, that are now available. We'll put the link in the show notes. You can go check those out. But again, it's a great quick way to learn from the Google Cloud team.
[00:03:55] They do an awesome job with those Boost Bites. And then this episode is also brought to us by [00:04:00] MAICON 2025. This is Cathy's life right now. Cathy leads all the marketing efforts behind the MAICON event. She's done an incredible job. The team's done an incredible job. We are on track. We're actually, I told her I'm gonna raise the goal, but we are already at goal for number of tickets sold.
[00:04:16] So we are continuing to push forward and keep raising that goal. But everything's looking great. We're looking at 1,500-plus in Cleveland, October 14th to the 16th, dozens of breakout and main stage sessions, incredible speakers. There's four optional workshops the day prior, on October 14th.
[00:04:33] So that's a workshop day and opening party that night. You can check out the full agenda at MAICON.ai. That is MAICON.ai. And also, in episode 168, I did a full breakdown of the main stage sessions that were just announced. So you can use POD100 for $100 off your MAICON ticket. Still have a couple weeks to go.
[00:04:54] We'd love to see you in Cleveland, with myself, Mike, Cathy, and the entire SmarterX [00:05:00] Marketing AI Institute team. Alright, Cathy, I'll turn it over to you. If you have anything to add on MAICON, go for it. If not, let's roll into some questions.
[00:05:08] Cathy McPhillips: So, October in Cleveland is glorious.
[00:05:10] Paul Roetzer: Mm.
[00:05:10] Cathy McPhillips: It's also going to... our opening party is at Hofbräuhaus.
[00:05:14] So Oktoberfest is the opening party for MAICON. And I'm not wearing...
[00:05:21] Paul Roetzer: ...the outfit they bought.
[00:05:22] Cathy McPhillips: We're just deciding if Paul's gonna wear Lederhosen.
[00:05:25] Paul Roetzer: I am... they literally just put this in their internal chat last week, or the beginning of this week, that the outfits have been secured, and my response was, I am not wearing that.
[00:05:33] Whatever you bought, do not think I am showing up in it. Come on, you're gonna need to use some Nano Banana to put me into that stuff. You're not gonna see that in real life.
[00:05:46] Cathy McPhillips: I mean, say less. We'll be working on that, Claire and I can work on that. Some, some promotions today.
[00:05:50] Paul Roetzer: I feel like I just unfortunately gave an idea to marketing, too.
[00:05:55] Cathy McPhillips: Okay.
[00:05:55] Paul Roetzer: Oh yeah,
[00:05:56] Cathy McPhillips: let's jump into this. So this is from our September 24th Scaling [00:06:00] AI class, and in Scaling AI the questions are just kinda more strategic, a little deeper, a little big-picture thinking. And I was going through these last night and I was like, dang, these are really good questions. Right. So if you haven't joined us for the Five Essential Steps to Scaling AI class,
[00:06:14] our next one is November 14th. We'll include a link in the show notes so you can register for that. And I encourage you, if you haven't come in a while, to come back. Pardon?
[00:06:22] Paul Roetzer: Yeah. Just as a reminder here, the format for these is, during the lives, I don't know what questions are coming, and so we actually follow the same format here. Cathy and Claire curate these.
[00:06:30] I have not looked at these in advance, so if there's some I can't answer really well, I will be honest and say, yeah, I don't know, here's some resources. Maybe
[00:06:39] Cathy McPhillips: I tried to not include those.
[00:06:41] Paul Roetzer: Okay. Good.
[00:06:42] Cathy McPhillips: Or position them in a way that I knew you could.
[00:06:44] Paul Roetzer: Okay. All right. We'll do our best. I'll, I'll be curious
[00:06:46] Cathy McPhillips: you.
[00:06:46] Paul Roetzer: Yeah.
[00:06:47] Cathy McPhillips: Okay. Let's get started.
[00:06:48] Cathy McPhillips: Number one, how have you seen AI get introduced to a financial services firm, as they are highly regulated?
[00:06:55] Paul Roetzer: I would say... so this applies to any highly regulated industry. You have to work closely with IT and [00:07:00] legal and procurement. You have to know the barriers ahead of time.
[00:07:04] Then what I always advise people to do in these instances is understand the risks, understand the concerns of IT and legal and procurement, and then steer into those. So find use cases where those risks become low or nonexistent. So if they're worried about, say, for example, customer data getting leaked into models and, you know, ending up in training runs of future models, things like that, or privacy, or, you know, overall regulations for an industry,
[00:07:34] find use cases where that doesn't apply. And so a great example, you know, that I think is just super tangible, is a podcast. So let's say you have a podcast. There's, I don't know, we have like 15 use cases every week for AI in the podcast. That is all publicly available information.
[00:07:52] There's nothing we're doing or saying on the podcast that would run into any concerns around these risks that would prevent us from using AI. It's just [00:08:00] a publicly available transcript. So we find ways to infuse AI into workflows and campaigns where the risks aren't there. Then you can spend more time trying to solve for the bigger picture and how to accommodate, you know, the regulations and the risks and the concerns that might come with the bigger uses. But don't allow that to prevent you from moving forward with low- to no-risk use cases.
[00:08:27] And again, this is regardless of industry. Financial services, healthcare is another great example, government, education, like, there are so many endless use cases. And I would say we have JobsGPT, we can put the link in, but it's just SmarterX.ai, and then you click on Tools. JobsGPT is a great one.
[00:08:45] You can go in there and say, Hey, I'm a financial advisor, or I, you know, work at a bank and here's my role. How can I use AI in a low-risk way? Just talk to the AI assistants about these things and it'll help you find use cases that are gonna be safe for [00:09:00] you to use. And even help you build the business case and justification for it, so you can convince the people you need to convince that it is a safe use of AI.
[00:09:08] Cathy McPhillips: Absolutely. Okay.
[00:09:10] Cathy McPhillips: Number two, what guidance would you give leaders who want to use AI, not just to optimize today's operations, but to fundamentally reimagine business models and customer engagement for the next decade?
[00:09:20] Paul Roetzer: I really like this one. So of the four workshops that we're doing at MAICON, mine is on AI innovation.
[00:09:27] So I think that so many companies right now are still so focused on the efficiency and productivity side. It's the obvious thing that AI enables, but the organizations that reimagine what's possible think about growth and innovation: finding new markets, new product ideas, new ways to engage with customers, as, you know, the listener's asking here. And again, I think the underutilized part of these AI assistants today is the reasoning capabilities,
[00:09:57] the ability to do deeper thinking about stuff [00:10:00] like this. These are the exact kind of questions I would go have with GPT-5, you know, the thinking version of it, or Gemini 2.5 Pro. I think soon we'll have Gemini 3, like probably in October we're probably gonna get the latest model.
[00:10:14] I would use those reasoning models. So again, if you're not familiar, a traditional chat model was, you know, what we got with ChatGPT in the early days. It was a prompt in and an instant response, so basically like information retrieval and prediction. It would just kind of respond to you right away. With reasoning models,
[00:10:32] it takes its time to think. It goes through a chain of thought, it builds a plan, and it more deeply considers the actual intent of your question or your prompt. And so I would go in and say, Hey, I have this kind of business. Here's the challenges we're facing. I want to think of innovative new ways to use AI to help us grow this business and differentiate with customers. How can I do it? Just talk to the AI about these things.
[00:10:52] How can I do it? Like just talk to the AI about these things. What I often say to people is, imagine you have a [00:11:00] highly accomplished consultant, business consultant sitting right there. How would you phrase the question to them? If you could have an expert in your industry that could say, you know, you could say, how could I reimagine this?
[00:11:11] What can I do different? Talk to the AI like that, and you will probably get all kinds of inspiration, to help you reimagine what's possible in your business.
[00:11:22] Cathy McPhillips: Yeah. When it comes to the customer engagement side of things, I would say, you know, what does helping you reimagine business models and using AI in certain places allow you to do from a customer engagement standpoint?
[00:11:32] What doors does that open? What time does that give you back to be focused more on the customer engagement side of things? And I'm excited to see where AI can help us more with customer engagement in the future. Because right now, the way that we're using it is, I mean, some ways on our website and through our chat and things like that.
[00:11:47] And that also, it's just like, okay, I have this time back, now I can go engage. But are there going to be opportunities coming up that could enhance that even more?
[00:11:54] Paul Roetzer: Yeah. I mean, even with, you know, we were just yesterday as a team looking at a new capability [00:12:00] in the new learning management system that we're gonna be rolling out for AI Academy here soon.
[00:12:04] And it, it has a more intelligent chatbot built into it that's like trained on the courses and the content. And so if you think about previously, you would've had to have reached out to customer support and said, Hey, I need some help building a learning journey. I'm not sure what to take next. and now it looks like we're gonna be able to have that kind of capability right in the system.
[00:12:26] And to Cathy's point, the way we always think about strategy and business is: what's more intelligent, what's more human? There was a tagline I created for our conference back in 2019, more intelligent, more human, and we think about everything through that lens. So if the chatbot within the learning management system becomes more intelligent and actually enables real-time interaction 24/7, where, you know, our learners can interact at any moment, then it frees up the humans to do more personalized outreach, to connect with our learners, to spend more time in person with them, things like that.
[00:12:57] And so we're always trying to find that balance. And I [00:13:00] think that's the kind of stuff this enables. And if you think about the future of your business and the strategy and reimagining what's possible, think about it through that more intelligent, more human lens. Every time you do something that's more intelligent, it's gonna free you up to do the more human stuff.
[00:13:14] And that's like Cathy does a really good job of this with our marketing. And on the customer success side, she spends a lot of time with one-to-one human connection. And that's made possible because we automate a lot of the low impact stuff. Yeah.
[00:13:29] Cathy McPhillips: But it does make me nervous. Like if someone I know really well comes on our website and they get a chat bot who doesn't know the nuance and the history of that person, are they gonna be like, what the heck?
[00:13:39] I've been talking to you for four years and I get this. Like, I am a little nervous with some of those things. Yeah. But I think we also just need to try it and see and just kind of, you know, enhance it from there and figure out how to make it better.
[00:13:52] Paul Roetzer: Well, I think you need to make the off ramp to a human very easy.
[00:13:56] Correct. Like if you want a human. But I do think that there's, [00:14:00] you know, HubSpot's had a lot of data recently on this because they're doing a great job of reimagining customer success and customer service. And it's like, I don't know, something like 80 to 90% of inquiries or requests are easily resolved through a chatbot.
[00:14:15] Yes. And then there's gonna be those instances where someone still wants to talk to a human. I mean, I have that. You probably have that too, Cathy. A lot of times... like, I think recently about AT&T in my personal life. Their chatbot's gotten pretty good. I remember back when I started my first business in 2005, I was like, if I never have to talk to AT&T again, my life will be complete.
[00:14:33] It was just a horrific experience as a business owner to have to deal with AT&T in 2005. And now, honestly, it's pretty smooth. I can just go in, I can talk to the chatbot, I get most of what I need, and if I need a human, you just click a button and you bounce over. Tesla, I've had a very similar experience; they have incredible customer service through Tesla's app.
[00:14:52] Like, so I think it can be done really well where it's low friction. I think at the end of the day, that's what we're trying to solve for is like, what is the low friction way for [00:15:00] people to get what they need? And then how do they get the more human side when they want it? Correct.
[00:15:05] Cathy McPhillips: Okay, I'm gonna move on.
[00:15:06] So I could talk about this all day.
[00:15:07] Paul Roetzer: Yeah.
[00:15:08] Cathy McPhillips: Number three. you've outlined five essential steps for scaling AI in organizations, but what happens when a company takes a more advanced route? Say they've actually built their own AI in-house with multiple models and one person is responsible for figuring out how the entire organization should adopt it.
[00:15:23] Do those five principles change in that situation? And if so, how?
[00:15:27] Paul Roetzer: Yeah, I think so. Basically, the way I explain the five steps we go through, which is, you know, building an internal academy, creating an AI council, responsible AI principles and generative AI policies, which is sort of the third step, and AI impact assessments, where you're looking at not only the current impact but, you know, the next 12 to 18 months, how it changes, you know, your talent, your tech structure, your strategies, all those things.
[00:15:48] And then building an AI roadmap, what we always say is different organizations are gonna be at different stages of transformation. If you've already done all of those things, awesome. Then you're optimizing the [00:16:00] transformation process from there on out. You're constantly doing AI impact assessments.
[00:16:04] That's not a "you do it and now you're done, what's next" thing. Impact assessments are an ongoing thing. As new models come out, as new capabilities come out, you have to go back and do another impact assessment. Okay, what does this mean to our talent? What about agents? Like, we just had Claude Sonnet 4.5 come out yesterday, so we're recording this on September 30th.
[00:16:23] By the way, just for context of what's happening, it came out yesterday. They claim it can do up to like 30 hours of coding on its own. They're basically claiming it can build a Slack-like platform on its own over 30 hours. That changes things. If that's true, you now have to step back and say, okay, if we are in the business of, say, building software, or...
[00:16:50] If we weren't in the business of building software, can we be now like, can, can we build our own software? So impact assessments are an ongoing thing. Generative AI policies are [00:17:00] an ongoing thing. As agents become more capable, you have to revisit what are our policies around agent use. So I don't know that those five steps become obsoleted.
[00:17:09] Once you've done them, you're always going through them, and that's why I even talk about them, they're not sequential. Like, you're often doing all five things simultaneously and then you're constantly improving them. The roadmap keeps changing as things become possible. Yeah, I don't know. I mean, great.
[00:17:25] If you're further along, that's awesome, and you may be doing whole new things. Like, in the Scaling AI course series through AI Academy, I had a whole module on building an AGI Horizons team. Maybe you're spending your time thinking, well, what happens when we get to artificial general intelligence?
[00:17:41] What does our company look like then? And maybe that's more of your, you know, brain cycles are going to, what happens in 2027 when maybe these things are as good or better than all of our human workers? Like now what do we do? So there's no, and there's no limit of like, stuff that needs to be solved [00:18:00] for.
[00:18:00] I think if you've solved for all of this first phase, great. And you keep, you know, iterating on 'em, and you start moving on to the next stuff.
[00:18:07] Cathy McPhillips: And I do think, though, there is value in, okay, you're advanced, but your team isn't, so take some courses together. So you are learning together. So you are also reframing what you've been building by going through something that has, like, a plan attached to it.
[00:18:23] So you know what your team is learning along the way, so you can kind of just do this all together.
[00:18:26] Paul Roetzer: And the, yeah, and the whole, it's a great point, like the change management side of this. Like I mean, if there's an organization that has solved all of this and figured out the change management side and has like upskilled all their people and they're ready to go, like, awesome.
[00:18:39] And I would love to hear from you, and we'd love to do an AI transformation story on you on the podcast. It is hard to find those companies. The vast majority of companies are kind of, like, well into the pilot phase, starting the scaling phase sometimes, but really have not fully thought about the change management side.
[00:18:58] They're trying to deal with, what does it mean to [00:19:00] our people over the next year? Is it gonna affect our staffing levels? How are we changing our recruiting process? They're just starting to ask the right questions, and the vast majority of them have no idea what to do with agents and don't even know what reasoning models are.
[00:19:14] Like, that's the reality of most companies. So if you're in the boat where you're past all that, amazing. You are probably in the top, what, 0.1% of companies in the world right now.
[00:19:28] Cathy McPhillips: Number four, we hear a lot of executives say that AI is a top priority, but in practice it often ends up as a side project handled by a few people on the margins.
[00:19:37] From both an organizational and economic standpoint, how do you actually convince leadership to commit the resources and build true AI enablement across the business?
[00:19:45] Paul Roetzer: Yeah, it's a really good question. I mean, it all comes down to education and training, like awareness and understanding. So, you know, every year we do this State of Marketing AI report.
[00:19:55] We specifically are asking, like, marketers, but there's a lot of business leaders in there as well. [00:20:00] And then, I always think about marketers as usually kind of the leading edge of this within companies, so it's a pretty good data set to look at. And what we find year over year is lack of education and training is the number one barrier, followed very closely by lack of awareness and understanding.
[00:20:16] So what I have found is, if you're in an organization where they just don't fully buy in yet, maybe they're just treating this as a technology problem and they're not thinking holistically about reimagining business models and doing impact assessments on teams and tech stacks and stuff like that.
[00:20:31] You, you have to find the lever to pull to get them there. And I don't know if that's like, you gotta, you, you know your organization better than us, obviously. Is it the CEO that needs to be convinced? Is there say a CIO who just doesn't get it and they're thinking only from a technology perspective? You, you have to know who will actually move the needle.
[00:20:53] Who do you need to get bought in at the C-suite level so that this becomes a much higher priority and [00:21:00] is thought of more holistically than just a technology problem? This is truly transformational; it's an operating system for the business. For some people it might be: show them what the competitors are doing.
[00:21:10] For some it might be show the latest research on the impact of AI agents. You have to know what it is that actually drives change in an organization. sometimes what we've seen work really well is you just get a team. It could be the marketing team, the customer success team, the ops team, have them do something that has a true business impact, and then take that use case or, or case study in like three to five slides and say, Hey, here's what we were doing before we were spending 200 hours a month on this thing.
[00:21:40] We integrated this tool and we're now spending 50 hours a month instead of 200. We've taken the other 150 and launched these two new campaigns, which generated a million dollars in revenue. Boom. Okay, let's do this 10 more times. So make it as simple as possible for them to say yes and to buy into the bigger picture.
[00:21:59] Cathy McPhillips: One example [00:22:00] is yesterday we were looking at what Noah's been doing with customer success and bringing things in through HubSpot versus emails and that quick little change, like if you didn't know that was going on. You would've been like, oh my gosh, why haven't we been doing this forever and now what else can we be doing with this?
[00:22:16] Paul Roetzer: Yep. It's huge. Yeah. And I will say, like the other thing is, you know, again, you have to get the full buy-in, but sharing successes internally, even if you just have a bunch of pilots going on across departments and it's pretty, spread out. It's not like unified under one initiative, then get those people together and share, build those internal champions from sort of the bottom up.
[00:22:35] And people wanna be a part of things that are working. Like, they're gonna be inspired to say, okay, well let me see what I can do on the sales side, or, you know, on the product side.
[00:22:44] Cathy McPhillips: Well, that was the response. Tracy was like, okay, now do this.
[00:22:46] Paul Roetzer: Yeah, yeah, for sure.
[00:22:48] Cathy McPhillips: Okay,
[00:22:49] Cathy McPhillips: number five. There's a lot of talk right now about generative AI and AI agents.
[00:22:54] Are these really two different things? And if a company isn't actively using AI agents yet, do they still need [00:23:00] to consider policies and guardrails around them?
[00:23:02] Paul Roetzer: Yeah, so I mean, generative AI is the ability for AI to create things. So text, images, video, code. AI agents is the ability for an AI system to take actions on your behalf.
[00:23:15] So imagine it being able to go through a 10-step process. The most tangible example for people, if you haven't tried it yet, is in ChatGPT or Google Gemini: go in and run a deep research project. Deep research, what is it? It's powered by AI agents. Basically, it'll go and build a plan. So you say, I wanna research, you know, my 10 competitors.
[00:23:34] Here's a list of the competitors. Go out and look at their pricing models, product updates, any current marketing messaging. Just give it a project like that. If you don't know what to do, go into ChatGPT and say, I want to test deep research. Here's what my company does. Here's what I do.
[00:23:49] Like, write me a prompt so I can test the capabilities of deep research. But when you do it, what happens is it then builds a plan. It takes that plan and executes it. So it goes [00:24:00] out to websites, it reads them, it processes them, it synthesizes the information, it goes through a chain of thought, and then it creates a research report for you.
[00:24:08] So that's an AI agent that is doing something. It's taking actions, it's building a plan. So that's the main difference. Now, the agents have varying degrees of autonomy. What I mean by that is sometimes you have to say, Hey, here's the 10 steps. Just execute these 10 steps. That's you in a deterministic way saying like, okay, this is what I want you to do.
[00:24:29] Now you're gonna go do it. That's very little autonomy in that case. The other is, Hey, I just want this achieved. I don't know how you're gonna do it, but like you go figure it out. And now it's like further degrees of autonomy where the human's less and less involved in the process. So they are very different things.
[00:24:45] I will say AI agents are going to just be built into the software you use. So. Right now you have to kind of go find them, you know, whether it's through Microsoft Copilot or Salesforce or HubSpot or whatever, like you're kind of more involved in [00:25:00] the process of identifying the need for an agent, giving it parameters, guardrails, building it.
[00:25:05] What's gonna happen, though, is it's just gonna be embedded within the software. So a good example of this would be in ChatGPT right now. I don't know if this is at all pricing tiers, but they have an agent mode. So if you just go into ChatGPT and you click on, I think it's Tools, you'll see agent mode pop up. That's building an agent right into your experience.
[00:25:23] I think I gave the example on the podcast about this. I used agent mode to look for a new front door for my house. I just took a picture of the door and I was like, can you go find doors like this? I need a new door. I don't know how to explain what the style is. And it figured out what the style was, and it went and, like, looked around, and then it brought it back to me and it's like, okay, here you go.
[00:25:44] Here are some places. So yeah, that's kind of the distinction. I do think it's important to start building agents into the generative AI policies, though, because people are gonna have access to these things, whether, you know, you go seek them out or they just are embedded into the tools you're using.
[00:25:59] Cathy McPhillips: I just did [00:26:00] that with... we had to take a tree down.
[00:26:02] Cathy McPhillips: So my husband and I were like, looking like, what could we do back here? And I took a picture and I was like, what could we do back here? It was pretty cool.
[00:26:09] Paul Roetzer: Yeah. It, it is awesome.
[00:26:10] Cathy McPhillips: And then I said, it said pickleball court. He's like, no, it didn't.
[00:26:15] Paul Roetzer: It's, that's getting personalized to you now. Now it knows your interests.
[00:26:18] Cathy McPhillips: Not like it didn't say that.
[00:26:22] Cathy McPhillips: Number six, for independents or loosely connected teams, is it even possible or advisable to share a single enterprise AI account? What should we know before going that route?
[00:26:32] Paul Roetzer: for independents or loosely connected teams?
[00:26:36] Paul Roetzer: like for 20 bucks a month, everybody just shares the same account
[00:26:38] Cathy McPhillips: Or just like a team of, like, agency owners that are within the same, you know, cohort or something.
[00:26:45] Could they jump in and share an account or is it by email address?
[00:26:48] Paul Roetzer: Yeah, I mean, generally, I've heard of this at a college level, of, like, students sharing a single account, 'cause, you know, they're college kids, 20 bucks a month is hard to do on their own. So I've [00:27:00] heard of those instances.
[00:27:01] I don't know, I mean, I would just make the business case to everybody to have their own license and to set it up under a team or enterprise account. They're making it easier to collaborate. Like ChatGPT in particular, they just announced that you can now share projects. So you used to be able to share GPTs, like you could build a custom GPT and share 'em with people on your team.
[00:27:19] They now let you share entire project folders. That's new as of, like, three days ago, I think. So they're making it easier to collaborate. Google still has some ways to go on making Gemini more functional like this. As of right now, you still can't share Gems when you build them, which is absurd.
[00:27:36] But you can't, so hopefully Google fixes that this fall. But I would definitely encourage people to get individual licenses. You definitely can share a team account in, I'm trying to think, in ChatGPT you don't need the same domain, it goes by email only.
[00:27:57] Yeah. So like if you had a group of people [00:28:00] that were working together, like say, you know, five independent contractors and you guys collaborate on stuff, you could in theory create a single account that you're paying $20 per user per month for, and then have a joint access to that with different email addresses.
[00:28:16] So that's how it historically worked. I don't know that they've made any changes to that where you have to verify your email domain. Like Asana, for example. We use Asana, and in that one you have to have the same email domain to be part of the enterprise, and then you can invite outside guests.
[00:28:32] That's not how ChatGPT historically has worked.
[00:28:35] Cathy McPhillips: So are there any things, so say it's five independent contractors, is there anything that they should think about if they were going to go and do a team one together as far as privacy for each other?
[00:28:45] Paul Roetzer: Well, I think you'd have to just be familiar with how it works and how the privacy, you know, stuff is set up and how the terms are set up.
[00:28:51] The one thing I would consider there is, right now, the memory of these things and the personalization happen at an individual [00:29:00] level. So let's say we have 15 people on a team license for ChatGPT. Currently, to my knowledge, it doesn't learn across the system and then carry that into one unified, I'm gonna, like, humanize this, one brain, where everything that me, Cathy, Mike, Tracy have done is remembered at a centralized hub.
[00:29:22] It's, it's personalized to each individual user within that team license. So it starts to learn me. Whatever it learns about me and the company doesn't transfer to Cathy's experience in ChatGPT. I would imagine that changes and it starts to centralize the knowledge of all the individual users into the memory of the one overarching account because it can learn a lot of context from all the ways that the individuals use it.
[00:29:48] But that, I haven't heard that as part of the roadmap, but it just seems like an inevitable thing that would happen.
[00:29:54] Cathy McPhillips: And if that does happen, we will talk about it on the podcast. Yeah. You and Mike can break that down.
[00:29:58] Paul Roetzer: Yeah.
[00:29:59] Cathy McPhillips: Number [00:30:00] seven, if a company doesn't have an AI council, but leadership wants a vision for each department, say inventory management or HR, where can someone start learning what AI can realistically do for each function?
[00:30:11] Paul Roetzer: Yeah. I mean, literally, I would just go into ChatGPT or Gemini and have those conversations. So, and Jobs...
[00:30:17] Cathy McPhillips: GPT.
[00:30:17] Paul Roetzer: Yeah, JobsGPT is a really, you know, quick way to do it. So again, JobsGPT, we've mentioned it a couple times, but in essence it's a ChatGPT custom GPT, so it has the capabilities of ChatGPT, but I basically trained it on an exposure key, an AI exposure key, that says, okay, as these models get smarter, these are the known things they're gonna improve at.
[00:30:39] How will that change certain jobs? So you can just put a job title in or a job description, or you can ask for kind of examples of where they, these jobs may go in the future, but in essence you can just go in and say, okay, you know, we don't have an AI council. It goes back to, I was saying earlier, just talk to it like a, a consultant.
[00:30:55] We don't have an AI council, so no one's guiding us on this. I work in inventory management. [00:31:00] There's three of us who are really excited about AI, but we're not sure what to do. We have ChatGPT licenses. How can we get started? Help us find maybe three to five use cases that are the most immediate value we could create.
[00:31:13] What should we do? So, I mean, literally, getting value out of AI right now comes down to asking good questions. If you know the questions to ask of a chatbot, you can get a tremendous amount of value with $20 a month. Yeah, but I think most people just aren't there yet. It's almost like the old adage where someone would ask you a question and you'd be like, well, did you Google it?
[00:31:39] Like I, that's kind of how I approach stuff now and not in a condescending way at all, but like, when someone comes up to me, when I'm, you know, say doing a speaking engagement and they ask me a question like this, like, oh, I'm an HR leader. I'm, I'm just not sure where to start. I know it's being used. And I'll say like, well, have you talked to ChatGPT about it?
[00:31:56] Like, have you, have you asked the question you're asking me directly to [00:32:00] ChatGPT? and give it the context, like provide the background that you're providing to me and it's gonna give you a really good place to start. And then you're the domain expert so you can figure out which pieces of its response are valuable.
[00:32:12] But it's often just the best place to get going.
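For anyone who wants to take this consultant-style approach beyond the ChatGPT or Gemini interface, here is a minimal sketch of the same idea using the OpenAI Python SDK. It is an illustration only, not something from the episode: the model name and the example role, situation, and constraints in the prompt are placeholders you would swap for your own.

```python
# Minimal sketch (illustration, not from the episode): asking a model a
# consultant-style question with real context, the way Paul describes
# talking to ChatGPT. Assumes the OpenAI Python SDK and an API key in the
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

# The background you would give a human consultant goes into the prompt.
context = (
    "Role: I lead inventory management at a mid-sized distributor.\n"
    "Situation: We have no AI council; three of us have ChatGPT licenses.\n"
    "Constraint: We can only use publicly available or non-sensitive data.\n"
)

question = (
    "Acting as an experienced operations consultant, suggest three to five "
    "low-risk AI use cases we could pilot in the next 90 days, and outline "
    "the business case we could bring to leadership for each one."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever capable model you have access to
    messages=[
        {"role": "system", "content": "You are a pragmatic business consultant."},
        {"role": "user", "content": context + "\n" + question},
    ],
)

print(response.choices[0].message.content)
```

The point is the framing rather than the code: the more role, situation, and constraint you provide, the closer the answer gets to what a domain expert sitting next to you would say.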
[00:32:15] Cathy McPhillips: So last Friday we were at the Ohio Aerospace Institute doing a day with middle school girls, and we were showing them JobsGPT. So they were coming up and saying, I'm interested in fashion design, I'm interested in math, or something like that. And someone came up and she said, I want cosmetology.
[00:32:32] I was like, okay, this will be fun. So I throw it in there and it was like, obviously can't cut someone's hair. I mean, who knows? Maybe someday it can. Yeah. But all these different things. And I said, what do you like doing? And she's like, I love color. And I was like, wonder if that could help you with that?
[00:32:45] And she was just, like... her mind was just blown. Yeah, that's cool. Just learning about it, and not, like, doing it. She's like, I like this part of it, but there's this whole science thing that I might not really know about, and I don't wanna do the business side, but it can help me with some of that, because I really wanna focus on being with the [00:33:00] people.
[00:33:00] And she was just, I don't know, it was just so interesting. Every single one of these young girls was just blown away by, like, the opportunity of what was out there.
[00:33:08] Paul Roetzer: Yeah, it's pretty cool. That's awesome. Yeah, it's cool. I'm, I'm so glad that the team did that. Yeah. It was a great event.
[00:33:14] Cathy McPhillips: Number eight, what are your best practices for training newer AI users, especially around managing expectations, getting quality outputs, and staying safe?
[00:33:23] I've noticed firsthand how much difference proper training makes in adoption.
[00:33:27] Paul Roetzer: So the way we approach this, and this is whether we're running workshops for corporations or we're just providing guidance, is you have to personalize the first few use cases. So the greatest way to get immediate value and get people bought in and seeing the full potential, if you're gonna provide co-pilot licenses or ChatGPT licenses, or Gemini or whatever, whatever you're providing to them is either run an interactive workshop where you help them find those first three to five use cases.
[00:33:54] Or have the people leading the change management within your organization build GPTs [00:34:00] for them. So, you know, if you're gonna introduce it into sales, for example, build a GPT that helps them do something that every person on that sales team has to do every day. And it's like, oh, this is gonna save me three hours a week.
[00:34:12] Like, this is fantastic. Or give them a series of prompts, like, as a sales professional or a customer success professional, here are five prompts you may find extremely valuable to help you right away with efficiency and productivity. So we're just huge proponents of, you have to personalize the integration of this stuff so that it's so obvious right away.
[00:34:31] And do this at the C-suite level too. I'm a huge believer: if your CEO isn't fully bought in, build those first couple prompts for them. Build a GPT for them that actually helps them right away. It changes everything once they see for themselves how it can help them. And the people who are resistant, 'cause they have some preconceived idea about AI and maybe they think it's abstract, or they're just fearful for their job and they don't really get it.
[00:34:58] As soon as you [00:35:00] personalize something for them, the light bulb goes off for everybody.
[00:35:03] Cathy McPhillips: Yeah. That kind of goes into this next question.
[00:35:06] Cathy McPhillips: Number nine. How do you drive stronger engagement in AI enablement trainings when individual contributors already feel too busy with their day-to-day work to spend time learning AI?
[00:35:15] Paul Roetzer: Yeah. Find out what it is that's keeping them too busy and then solve it for them with a prompt or a custom GPT. Yeah. Say, Hey, I know you're spending, you know, five hours a week doing this thing. I built something that'll get it down to like 30 minutes for you, and when you're ready, I can build three more of 'em for other things you're doing.
[00:35:33] So yeah, again, it's the personalization, making it easy for people to say yes. We always talk about, I always end my course with, you know, be curious, explore AI. It's this whole idea of, like, drive that curiosity by showing them something, you know, make them curious to learn more and find the next use case that is helpful to them.
[00:35:53] And sometimes it can be in their personal life, you know, show 'em how to use it to help their kids with homework, stuff [00:36:00] like that. Like study mode, or guided learning. I've been talking a lot about that on the podcast a couple times, you know, for ChatGPT and Google Gemini. It's fantastic for helping students.
[00:36:09] and once you see that, it's like, oh, I could probably do this at work. Like, I could take a same approach. So
[00:36:15] Cathy McPhillips: absolutely.
[00:36:16] Cathy McPhillips: Number 10, what is the best way to handle a situation where AI got something wrong?
[00:36:22] Paul Roetzer: Well, assuming you caught it before you published it or sent it. If it got out with something wrong, that's a different story.
[00:36:31] I mean, they get stuff wrong. They, hallucination is like the technical term for what happens. They, they will make stuff up. They will, you know, use data that didn't exist. They will cite a source that wasn't there. They will misspell a name sometimes, like they're gonna make mistakes. And that's why you have to have the human in the loop.
[00:36:48] This is the whole idea of like the AI verification gap. Like the human has to verify these outputs and the, and the higher risk, higher profile, the output, the more important it becomes that a human is in the loop. [00:37:00] So we've, you know, probably all heard the stories of lawyers using these to create legal briefs that they submit to judges and the judge finds that there was something wrong in there and you got a major problem.
[00:37:09] Like the humans have to be in the loop. And if it has to do with like business analytics data, customer data, if it's a communication that's going out, like you absolutely have to have the human heavily in the loop in those processes. At the end of the day, like in our responsible AI principles that we have published and that we share with people, we say like, the human owns the output.
[00:37:31] Just because the AI is capable of doing these things doesn't remove the agency from the human, the responsibility to own the output, the accuracy, and the quality of that output. So I think that that's just something that needs to be taught to people that you still are in charge of making sure the output is correct.
[00:37:51] It's not work slop, as we talked about on the podcast this past week. What was it, episode 171? I think we were talking about work slop. [00:38:00] 170, actually.
[00:38:01] Cathy McPhillips: 170, yeah.
[00:38:02] Paul Roetzer: so yeah, you don't, we don't wanna be handing stuff in that, that you didn't put the actual time into.
[00:38:07] Cathy McPhillips: So what if it got something wrong about your company?
[00:38:11] Is there anything you can do to mitigate that for the future? Or can you retrain the model to have the facts correct?
[00:38:17] Paul Roetzer: Yeah, so I mean, this happens to me. I've seen this numerous times. I've even had it with GPT-5. So the most recent models, you know, I'll ask it something I actually had, there was one, it was yesterday.
[00:38:27] Oh, it was a funny one, like, personal stuff. So my family and I, so I have a 13-year-old and a 12-year-old and my wife, we have Mario Kart World races every night. And so I always lose, I'm always in fourth place. And so I was having a conversation with Google Gemini about what is the best character and car match, so I can actually start winning.
[00:38:48] And it came back and it started recommending stuff for, like, Mario Kart 8. And I said, no, I said, Mario Kart World, you're not giving me the right information. You don't choose the wheels [00:39:00] anymore, that was in Mario Kart 8. And it goes, oh, I'm sorry, you're right, Mario Kart World, and actually went and searched the web and updated its knowledge base to be current.
[00:39:09] and so, you know, it's a personal example, but that happens in business all the time where I'll be like, no, no, no, you're wrong. You, you can do the thing I'm asking you to do. Like, it may say, well, I can't generate images. Like, yes you can. So, and then like, I think sometimes that then lives in its memory and then it remembers that change.
[00:39:26] So yeah, I don't know... if you're using just a standard chat, you can tell it, like, Hey, remember this for next time, and it'll, you know, remember that. The other thing is, if you're using GPTs or Gems, you can upload updated information into its knowledge base. You can say, Hey, anytime I'm asking you about this stuff, refer to the knowledge base PDF when I ask you this.
[00:39:46] And then it'll, you know, reference that anytime it's doing an output.
[00:39:49] Cathy McPhillips: And can you train that, like for other people? Can it keep that in the knowledge base for others?
[00:39:54] Paul Roetzer: Yeah. So this, this again goes into the, the team and enterprise accounts, [00:40:00] whether they function as a single hub of knowledge or not. I don't, now that you're asking that, I don't know that you can upload a knowledge base into the team account that universally is, is referred to for all users.
[00:40:15] Cathy McPhillips: Unless there was like a GPT that was built and that was a knowledge base.
[00:40:17] Paul Roetzer: Correct. That's how we do it internally. So, like, the Co-CEO GPT that I built has a knowledge base that's trained on company data. The system prompt has revenue goals, it has all this stuff, and then I shared that Co-CEO GPT internally with the team.
[00:40:32] So when they use it, it has that same knowledge base about our information. So that's a way to control it now.
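For readers wondering what that shared-knowledge pattern looks like outside the ChatGPT custom GPT builder, here is a minimal sketch of the same idea using the OpenAI Python SDK: one shared system prompt and one shared knowledge file that every teammate's calls are grounded in. The file name, the model name, and the example question are placeholders, and this is an approximation of the concept, not how the Co-CEO GPT itself is built.

```python
# Minimal sketch (an approximation of the shared-knowledge idea, not the
# actual Co-CEO GPT): every teammate's questions run against the same
# system prompt and the same knowledge file. Assumes the OpenAI Python SDK,
# OPENAI_API_KEY in the environment, and a placeholder knowledge file.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# One knowledge file the whole team shares. In ChatGPT itself, this role is
# played by the custom GPT's knowledge upload and system prompt.
knowledge = Path("company_knowledge.md").read_text()

SYSTEM_PROMPT = (
    "You are an internal advisor for our company. Ground every answer in the "
    "company knowledge below. If the knowledge does not cover the question, "
    "say so instead of guessing.\n\n--- COMPANY KNOWLEDGE ---\n" + knowledge
)

def ask(question: str) -> str:
    """Send one question against the shared system prompt and knowledge."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("How does this quarter's pipeline compare to our revenue goals?"))
```

Because the system prompt and knowledge file live in one place, every user gets the same grounding, which is the closest code-level analogue to sharing a single custom GPT with the whole team.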
[00:40:39] Cathy McPhillips: Yep.
[00:40:41] Cathy McPhillips: Number 11, for interns and early career professionals entering the workforce now, which specific skills or habits are most critical to develop in order to stay ahead of the AI curve and actively shape the future rather than simply reacting to it?
[00:40:55] Paul Roetzer: That is a great question, especially given all the stuff we've been talking about on the podcast lately about the [00:41:00] challenges for early-career professionals. And, you know, the unemployment and underemployment rates are, you know, not great right now for people in the 22-to-25 range. So, I don't know. I think, and these are kind of softer skills, but curiosity, imagination, critical thinking are just fundamental.
[00:41:19] The ability to work well with the AI, to ask smart questions, to, you know, not just do a single prompt but have follow-up prompts and really work with it. Intrinsic motivation. Those are the kinds of things that I just think have always been important, and they remain really critical at that early stage.
[00:41:38] I don't know that there's any... I mean, I can tell you the advice I've been giving to family and friends who are in college: come out with any business major you want, I don't care what it is, economics, accounting, finance, whatever. If you wanna be in the business realm, any of them are fine.
[00:41:57] Like, you just need to solve problems. You need to learn how to [00:42:00] work through hard things. Take entrepreneurship classes if you can, because I think entrepreneurship is gonna be, like, the golden age here. It's gonna be way easier to create businesses. And then take as many complementary AI courses as possible.
[00:42:14] Like you basically want to come out with a liberal arts background where you have a diversity of skills and knowledge, and then you know how to work with AI to augment your capabilities, not replace the need for you to do the work. So like challenge yourself to do the hard work, go through these hard things.
[00:42:33] My son's interested in coding. It's like, great, if you wanna do computer science and coding, do it. I don't care that Agent 3 can code or Sonnet 4.5 can do the coding. In eight years, is he gonna be actually writing code? I don't know. But for eight years he's gonna learn how to do really hard, repetitive things, and that transfers into anything.
[00:42:53] So, yeah, it's hard to say, like, how you should think about education and changing majors and [00:43:00] stuff. And my general guidance is just do these things well, and then work on the human skills, like the personal communication and...
[00:43:08] Cathy McPhillips: Knowing coding is amazing, and knowing how to do all that.
[00:43:11] But take a communications class.
[00:43:14] Paul Roetzer: Yes. Learn how to talk to people. You know, take a class where you have to do presentations. Yeah. Debates. All of it's good. I am a believer, though, that liberal arts degrees will continually take on greater value. And I would really challenge parents now, challenge students: get diversity in it.
[00:43:35] Take a psychology class, a sociology class, you know, really spread it out and learn a lot of different backgrounds because I think people that are well-rounded in that way are just gonna be able to interact with AI way better and consider the more human aspect of, of this as AI starts to really be everywhere within society.
[00:43:55] Cathy McPhillips: Yeah, talk a little bit more about the entrepreneur side of things, because you had mentioned [00:44:00] previously that, you know, that opens up a whole world of opportunity from a job standpoint. If there are layoffs across enterprises and things like that, small businesses could be thriving soon.
[00:44:10] Paul Roetzer: It is.
[00:44:10] I mean, starting a business and running a business is very hard. And, you know, I always just go back to when I first started my agency in 2005, and I had no idea what I was doing. I was 27. I came out of, you know, a liberal arts college, journalism school. I had a business minor, so I'd taken business classes, but I had no idea how to start a business.
[00:44:32] I didn't know how to, you know, manage it. So I went to a local organization in Cleveland called SCORE, and they matched me up with a retired executive from a manufacturing company. And, you know, it was wonderful that he was volunteering his time to do this, but he had zero context into what I was trying to do.
[00:44:53] He told me the idea was terrible. And I'm like, okay, well, where do I go now? Like, that was it, that was my one shot. I [00:45:00] found this organization in town that was supposed to help me as an entrepreneur, and now I don't know what to do. If I had had ChatGPT in 2005, I could have figured out in 48 hours what it probably took me four years to learn.
[00:45:13] Like the learning curve is almost non-existent. It's like, hey, I wanna build a professional service firm. I want to do the pricing model differently than it's traditionally done, because I think charging by the hour is absurd and obsolete. How should I do it? What should I think about? How should I build the financial model for the business?
[00:45:30] I would've just spent 48 hours asking all the questions that I had back in 2005 when I had no one to ask those questions to. I could ask accounting questions, finance questions, legal questions, business strategy questions. Like you have on-demand intelligence, in many ways at a PhD level, at an expert level,
[00:45:47] to talk to about anything. Yeah. And now, if you have any semblance of coding ability, you can code apps, you can build things. Like I was talking to my son last night; he's in seventh grade and they do an entrepreneurship [00:46:00] challenge where they have to build a business. And we were talking, and I was like, hey, we could build an app. You can actually code an app in seventh grade with some coding classes.
[00:46:08] But I said you could build a working prototype of an app through these tools. And so the walls have come down to create things and bring them to market. Now, there has to be a market of people to buy what you're gonna create, but I do think that with entrepreneurship, there's gonna be no excuse to not be able to start a business if that's what you wanna do.
[00:46:31] Cathy McPhillips: Yes. Because you could do all that stuff in 48 hours, but you need to have grit. Tenacity. Yeah. Curiosity. Maybe be a little crazy, to be able to do that, right? And do it well.
[00:46:41] Paul Roetzer: Yeah. And it takes a while to make money as an entrepreneur. I mean, I was paying myself the same thing for like the first eight years of my agency.
[00:46:53] So yeah, it doesn't solve the fact that you still have to figure out how to make the money, but the actual building of the business and [00:47:00] moving through that learning curve is so dramatically accelerated from when I started my first business back in the day. Yeah.
[00:47:08] Cathy McPhillips: Okay.
[00:47:08] Cathy McPhillips: Number 12. If companies can't always claim ownership of AI generated video or images, how should marketers think about the risks?
[00:47:15] Not just legally, but in terms of brand trust and reputation?
[00:47:20] Paul Roetzer: So the copyright stuff is just gonna get so fascinating. We will talk about it on the podcast next week, but with Sora 2, the forthcoming video model, which honestly may be out by the time you hear this podcast, it appears OpenAI's approach is to steal everything and allow users to create anything.
[00:47:42] Take Disney characters as an example. If you wanna create Disney characters in Sora 2, you'll be able to. They're going to basically say, if you want to stop it from happening, Disney, give us a call. They're basically gonna say, we don't care about copyrights. We are gonna allow our users to [00:48:00] create whatever they want, and then you can go sue the user if you want for creating the output, but we're gonna enable it to happen.
[00:48:08] So I think we are about to go through a really, really weird phase of copyright law where the AI labs are brazenly going to say, we don't care, and they're going to just challenge everybody to stop them. They will have the support of the current US administration; the, you know, Trump administration will support their lack of caring about copyright.
[00:48:34] And so that's gonna change a lot, I think. In the near term, though, this comes down more to the moral compass. I would say, like in our Responsible AI Principles, we say when legal precedent lags behind, which it always will with AI, your brand and your company have to have a moral compass.
[00:48:54] And that moral compass has to play a role in deciding how you will use this technology, because they're going to enable you to [00:49:00] do anything with it. You have to decide what you're gonna allow your people to do with it. And then from a copyright perspective, if you want to own the copyright, currently under US law, AI can't be the one that created it.
[00:49:13] So if your logo is a hundred percent AI generated, you have zero protection. So if you create a logo and people know it was AI generated, they can take it and put it on a hat and start selling it online, and you can't do anything about it, and they're gonna force you to prove it wasn't AI generated, basically.
[00:49:31] So, I don't know. Again, I think this is a really important discussion and a conversation you need to be having internally with legal. It needs to be built into your generative AI policies. It has to be factored into your Responsible AI Principles, because there aren't gonna be really clear answers to this one for a while.
[00:49:47] Cathy McPhillips: Yeah.
[00:49:50] Cathy McPhillips: Number 13, relative to all of the expectations around AI, where have you seen it fall the shortest in practice? Are there particular tasks or use cases that consistently expose its limitations?
[00:50:01] Paul Roetzer: Honestly, I see people's understanding of AI and the change management commitment being a far bigger issue than the limitations of the AI itself.
[00:50:11] And the reason I say that is, again, if I go back to the reasoning models and deep research as a very tangible example of this, within ChatGPT and Gemini. I constantly poll, anytime I do public speaking, and I've done this in rooms with a thousand-plus people, I've done it in rooms with 300 executives from individual brands.
[00:50:30] I say, who has done a deep research project in ChatGPT or Gemini? It is consistently less than 5% of the room. And this is as of, you know, 30 days ago; I did this again. If you haven't done that, you have no idea the capabilities of AI today. It is so far beyond just standard chat, where you give a prompt and get an output.
[00:50:53] And yet we've had reasoning models for 12 months now. They came out September of last year; it was the first o1 reasoning model from [00:51:00] OpenAI. And yet most organizations, the vast majority of organizations, have no idea the capability even exists. And so it is not the limitations of AI that are the problem.
[00:51:11] It is the limitations of our understanding of what it's capable of doing that is often the problem. Yeah. And then the operationalization of that understanding. So once you know what deep research can do, how do you actually operationalize that into your organization? That, by far, is the bigger issue than
[00:51:28] the AI itself hallucinating sometimes and things like that.
[00:51:32] Cathy McPhillips: So if you're coming to MAICON, meet Katie Robbert from Trust Insights. This is one of her passion projects, and she loves talking about this. Go up to her, ask her if you can see a picture of her dogs, and you'll grab her attention. You can ask her all your questions about change management.
[00:51:46] She's so good at it.
[00:51:47] Paul Roetzer: Yeah, and it's like a cheat code right now for competitive advantage. The companies that actually take a change management approach to this can just benefit from the tech that's sitting there to be used for 20 bucks a [00:52:00] month per user. Like, yep, it's crazy, the opportunity that's still in front of us.
[00:52:06] Cathy McPhillips: Number 14, a lot of people are learning how to prompt AI more effectively, but how do you also train and guide it to be used ethically in the workplace? What practices or guardrails should organizations put in place so that prompting aligns with their values and policies?
[00:52:20] Paul Roetzer: This definitely goes back to a couple questions ago about the moral compass.
[00:52:24] You know, I do think the Responsible AI Principles are essential, and not just documenting them, but actually teaching them and ensuring that people follow them from day one, that it is part of the onboarding process that they're taught these things. Because then, when people run into these gray areas, like, oh, I can now generate any copyrighted, trademarked thing I want with these tools.
[00:52:49] Awesome. I'm gonna start using 'em for social, and I'm gonna start putting, you know, GIFs into my emails that are using these well-known characters. If you train people right and [00:53:00] you do a true change management process, that's not even a debate. They're gonna know not to do that. But if you don't teach it, then there's no way.
[00:53:07] So it starts with documenting it, having generative AI policies, responsible AI principles, and then teaching it and making sure you live it as an organization. It has to be built into the culture. There's no other way to do it. I mean, you can't just document them. I think I've told this story before: I've sat in meetings where we've asked. There was this one big brand in particular, with like 70-ish executives in the room, and Mike and I were running a workshop for them, and we said, you know, are there generative AI policies?
[00:53:37] And the CIO said, yeah. And everybody else in the room looked at the CIO like, where? Right? Who owns them? Have we told anybody they exist? And so you realize that, yeah, sometimes organizations do this, but they don't live it. And that's the key. Like anything else, if you go back to any kind of transformation, any kind of culture initiative, it has to be infused into the organization.
[00:53:59] It can't [00:54:00] just be words on, you know, a screen.
[00:54:02] Cathy McPhillips: Is there any value, because the Responsible AI Principles are under Creative Commons, could they download that, tweak it however they wanted to, and upload it as part of the knowledge base? Would there be any value to that if there's like a shared company GPT?
[00:54:14] Paul Roetzer: Yeah, definitely. And we can put the link in the show notes. So the Responsible AI Manifesto I wrote in January 2023, we released under a ShareAlike Creative Commons license, which means you're welcome to take it and edit it and do whatever you want with it, but when you re-share it, you just share it under the same license, basically.
[00:54:32] So it's like open sourcing responsible AI principles, in essence. So yes, you can take those and tweak them however you want, and then once you have them, you could, in theory, you know, put them into a knowledge base and tell the AI to always reference this. I did that when I was creating the AI Fundamentals, Piloting AI, and Scaling AI courses.
[00:54:50] I trained it on stuff like that. So yeah, definitely a step you could take.
[00:54:55] Cathy McPhillips: Okay. Last question.
[00:54:56] Cathy McPhillips: Number 15, of the five essential steps to scaling AI, which step is the most challenging for organizations, and what do you see leading organizations do differently?
[00:55:05] Paul Roetzer: Yeah. I mean, my instinct immediately on this one is the AI impact assessments, because I don't know anybody that's doing them.
[00:55:12] And the reason I say that is because to do the AI impact assessments well, you have to have a vision for where the AI capabilities are going, which means using that AI exposure key I reference. So I teach a whole course on this in our Scaling AI course series, on how to do AI impact assessments, and walk through different examples of it.
[00:55:31] But in essence, you have to understand what the labs are working on, you know, persuasion and personalization and AI agent capabilities and all these things. And then you have to kind of project out how far along they're gonna be in like the next 12 to 18 months with those different things. And then from there, now you can actually assess jobs and hiring plans and how campaigns can be improved and workflows and
[00:55:52] which problems you can better solve. So once you do the AI impact assessments, you get a far greater picture. And I really focus on the [00:56:00] talent side in particular, because we're all trying to think, what happens if we become more efficient and we don't need as many people? Like, what are we gonna do? How do we retain those people and upskill them
[00:56:10] to remain relevant in the organization? You have to be honest with yourself, and the only way to do that is to be proactive about looking at where the technology is going and how it's gonna start to affect us. So one real tangible example would be future-proofing hires. Say, you know, in our world, a customer success manager is something that we're actively looking at hiring.
[00:56:31] If you have a job description of what that person would do, and then you put it into JobsGPT and say, okay, assess this over the next 12 months, like, how is this job gonna change? You may realize, man, 30% of what this person would do today, they're probably not gonna be doing in 12 months. Like, an AI agent's gonna do that.
[00:56:52] What would we do with that 30% capacity? How would we make sure this is still a full-time hire? So future-proofing [00:57:00] hires is one really critical way, and you can't do that until you understand what the models are gonna be capable of in the next six to 12 months. So it's hard, because it's an inexact science and there are very few benchmarks to look at of people actually doing this.
[00:57:14] So, that's that.
[00:57:16] Cathy McPhillips: Is there value in, like, us putting our jobs on a regular basis through JobsGPT to see what opportunities we're missing that we're not thinking of? Or just so we can stay one step ahead?
[00:57:27] Paul Roetzer: Yeah. I mean, there were two main use cases when I built JobsGPT.
[00:57:32] One was to prioritize use cases for yourself, right? So, I'm a chief marketing officer, how can I be using AI today? And it'll help you do that. It breaks the job into tasks and then recommends ways you can use it. And then the second piece was kind of looking at the impact: as AI becomes more capable, how's it gonna change my job as a chief marketing officer?
[00:57:52] Yeah, I think having regular check-ins, it could be part of an AI Council, part of their domain of things that they're doing, where maybe they're [00:58:00] taking the top 10 jobs in an organization and every three to six months running an updated assessment of how it's impacting these roles. Yeah, and
[00:58:08] Cathy McPhillips: just seeing if that exposure grows.
[00:58:10] Paul Roetzer: Yeah, for sure.
[00:58:11] Cathy McPhillips: Yeah. All right.
[00:58:13] Paul Roetzer: Kudos to the listeners. These are great questions. Claire, I know, curates a lot of this stuff and makes sure we try to, you know, address different questions each time. But these are all really good. I'm impressed.
[00:58:25] Cathy McPhillips: Yeah. So the next Scaling AI: The Essential Steps class is November 14th.
[00:58:29] So we are done with Intro and Scaling before MAICON. So we're gonna have a little bit of a breather before, well,
[00:58:36] Paul Roetzer: for agency Summit.
[00:58:38] Cathy McPhillips: Yeah. AI for Agency Summit is November 20th. So if you'd like more information, we'll drop that link in the show notes. Also,
[00:58:44] Paul Roetzer: I was totally thinking the other day, it's like, oh, okay.
[00:58:46] Once we get through MAICON. 'Cause my whole year has been, once we get through the AI Academy launch, I can like breathe again. Story of my life. And then we did, and it's like, wait a second, MAICON's in three months. Okay, once we get through MAICON, I can take a couple days off and I'll breathe again. And then I see that meeting on my schedule for the [00:59:00] AI for Agency Summit agenda.
[00:59:01] I'm like, oh my gosh, what are we, oh, can't wait for the holidays. All right. Well, thanks, Cathy.
Cathy McPhillips: I need, I need three days till New Year's.
[00:59:12] Paul Roetzer: I didn't need to know that. All right, well, thanks, Cathy, for co-hosting with me. Thanks, Claire, for helping us put this all together. And thanks, everyone, for attending the classes, asking great questions, and tuning in for another edition of AI Answers.
[00:59:25] And thanks to Google Cloud again for partnering with us on this series. And that is all. We'll be back with the regular weekly episode, episode 172, next Tuesday.
[00:59:36] Cathy McPhillips: Thanks everyone.
[00:59:38] Paul Roetzer: Thanks for listening to AI Answers. To keep learning, visit SmarterX.ai, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey.
[00:59:51] And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions [01:00:00] about AI.