48 Min Read

[The AI Show Episode 204]: AI Answers - What Should Stay Human, AI Pricing vs. Labor Cost, Leapfrogging Digitalisation, Getting Legal On Board & Do Reasoning Models Actually Reason?

Billable hours are in the past, human imperfection is the future of creativity, and agent swarms could be a part of your marketing team by year's end.

In this AI Answers episode, Paul Roetzer and Cathy McPhillips tackle 16 questions from our Intro to AI class attendees and YouTube commenters, from career transitions into AI without coding skills and whether reasoning models actually reason, to the most logical (and messiest) pricing model for AI.

Listen or watch below. Show notes and transcript follow.

Listen Now

Watch the Video

What Is AI Answers?

Over the last few years, our free Intro to AI and Scaling AI classes have welcomed more than 40,000 professionals, sparking hundreds of real-world, tough, and practical questions from marketers, leaders, and learners alike.

AI Answers is a biweekly bonus series that curates and answers real questions from attendees of our live events. Each episode focuses on the key concerns, challenges, and curiosities facing professionals and teams trying to understand and apply AI in their organizations.

In this episode, we address 16 of the top questions from our March 12 Intro to AI class AND a handful of listener questions from YouTube and Spotify, covering everything from tooling decisions to team training to long-term strategy. Paul answers each question in real time, unscripted and unfiltered, just like we do live.

Whether you're just getting started or scaling fast, these are answers that can benefit you and your team.

Timestamps

00:00:00 — Intro

00:05:05 — How do you transition into AI without a coding background?

00:06:03 — What are the best AI skills to learn while job searching?

00:08:56 — Should consultants bill for time spent experimenting with AI?

00:11:44 — How do we make sure AI productivity isn't quietly weakening our thinking?

00:14:17 — What's the best reframe for creatives who see AI as a threat?

00:19:04 — How do you thoughtfully approach and organize an AI free-for-all at your company?

00:20:45 — How do you personalize AI training at the enterprise level?

00:23:41 — How do you get legal stakeholders to support AI adoption instead of blocking it?

00:28:06 — How will AI adoption pick up in traditional industries like manufacturing?

00:31:24 — Can companies behind on digitalization leapfrog ahead with AI?

00:34:33 — Will AI companies eventually price their products based on the labor they replace?

00:37:55 — What is a swarm of AI agents and how will they impact work in the future?

00:43:34 — Do reasoning models actually reason or just predict the next word?

00:46:54 — Should the FCC regulate AI companies as they do monopolies to preserve diversity of thought?

00:49:34 — If AI can solve advanced math, why can't it solve technological unemployment?

00:52:40 — How do we make sure AI gives us time back instead of just more work?

 

Links Mentioned


This episode is brought to you by Google Cloud:

Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow. Google Cloud offers a powerful, fully integrated and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner.

Learn more about Google Cloud here: https://cloud.google.com/


This week’s episode is also brought to you by our 2026 State of AI Report.

This year, we’re going beyond marketing-specific research to uncover how AI is being adopted and utilized across the organization, and we need your help to create the most comprehensive report yet.

It’s a quick seven-minute lift. In return, you’ll get the full report for free when it drops, plus a chance to win or extend a 12-month SmarterX AI Mastery Membership. Go to smarterx.ai/survey to share your input.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: The imperfections are probably what ends up making human creativity so special in the future. And the stories behind how they learned their craft and what goes into their craft, and AI's just not gonna have those stories. Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show.

[00:00:18] I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI, but we never have enough time to get to all

[00:00:35] Of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing. Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases,

[00:00:53] Cathy McPhillips: and strategies you need to grow smarter.

[00:00:56] Let's explore AI together.[00:01:00]

[00:01:02] Paul Roetzer: Welcome to episode 204 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Cathy McPhillips, Chief Marketing Officer at SmarterX. Hello, Cathy.

[00:01:12] Cathy McPhillips: Hey, Paul.

[00:01:13] Paul Roetzer: Mike did the AI Answers with me last time, so welcome back to the AI Answers.

[00:01:18] Cathy McPhillips: Was very sad to miss that opportunity.

[00:01:21] Paul Roetzer: I know I heard from people, it's like, where's Cathy?

[00:01:23] She does AI Answers. So if you haven't figured it out by now, this is a special edition. This is our AI Answers series, and this is the 16th episode of it. So if you are expecting Mike, our regular weekly co-host: Cathy is my co-host on the AI Answers series, which we do roughly biweekly.

[00:01:42] AI Answers is presented by Google Cloud. It's a series based on questions that we receive from our monthly Intro to AI and Scaling AI classes, along with some of our virtual events, like webinars and summits. So if you're not familiar with Intro to AI and [00:02:00] Scaling AI, we'll put links to both in the show notes.

[00:02:03] They're both free monthly classes that I teach live with Cathy. Cathy moderates the Q&A during those, thus my co-host for the actual podcast. Because what we do is we take questions we couldn't get to during the regular live episode, and we record them as a podcast. So today's questions are from Intro to AI 56, which we held on March 12th.

[00:02:29] So again, these are free classes. You can register at any time. The landing page is always live. You can register for the next one, and then we do on-demand access for about seven days afterwards, I think, Cathy, and then we just do it again. So, special thanks to Google Cloud for sponsoring this podcast series.

[00:02:46] As part of our AI literacy project that we launched in January of 2025, we have an amazing partnership with Google Cloud and their marketing team. In addition to sponsoring this AI Answers podcast series, they're our partner for the [00:03:00] monthly intro to AI and five essential steps to scaling AI classes that I mentioned.

[00:03:04] As well as a collection of AI blueprints. In February, we launched the AI for Marketing, Sales, and Customer Success blueprints. So if you haven't checked those out, those are a great asset. We also have on-demand webinars for those, and that was in partnership with Google Cloud. And then we have AI for CMOs coming up, which is also part of our Google Cloud partnership.

[00:03:24] So you can check out Google Cloud at cloud.google.com and you can learn all about them. And then check the show notes for links to all the other resources that I have mentioned. Okay, so Cathy, you'll give us the rundown of how this works, but basically these are just unscripted responses to questions, much like we do live.

[00:03:40] I have not looked at these questions yet, so they're gonna be as new to me as they are to you. So Cathy, take it away, and cover anything I may have missed already.

[00:03:49] Cathy McPhillips: Yeah, these are good questions today. I was going through them. Claire helped us get them in this place. I went through and I was like, oh, that's really tough.

[00:03:55] And then, oh, that's really tough too. Paul's gonna have a great time today

[00:03:58] Paul Roetzer: because if you, if [00:04:00] you're a regular listener,

[00:04:01] Cathy McPhillips: what's,

[00:04:01] Paul Roetzer: I just recorded the weekly episode with Mike. So Mike and I were just on for, it ran, Cathy, like an hour and 40 minutes, the weekly did. And I cut in real time, I cut like three of the topics.

[00:04:12] I was like, Mike, we're just not gonna get to these. So I'm in the Google Doc, like, skip, skip, skip, we're running outta time. So I have been in podcast mode all Monday morning. We are recording this on Monday, March 16th, right after I got done recording the weekly. I think I'm good. I had to deal with some tax stuff right before I came here.

[00:04:29] Like I gotta get in the right mental space. So I'm

[00:04:31] Cathy McPhillips: Well, here we are.

[00:04:33] Paul Roetzer: I'm ready.

[00:04:34] Cathy McPhillips: Whether you like it or not, here we are.

[00:04:35] Paul Roetzer: We're doing it. I'm, we're doing it on Monday. Like just get all my good thoughts out before Monday.

[00:04:39] Cathy McPhillips: Yeah.

[00:04:39] Paul Roetzer: I gotta recharge for our annual meeting on, we're doing annual meeting

[00:04:42] Cathy McPhillips: on Friday.

[00:04:42] I'm really excited about that. I need, I have a lot of prep I need to do.

[00:04:45] Paul Roetzer: You do. I think you should see my list.

[00:04:47] Cathy McPhillips: Okay, we're going to get started, and here's something exciting you might not know: we have three questions from YouTube this week.

[00:04:55] Paul Roetzer: Really, I avoid YouTube comments and questions usually.

[00:04:58] Cathy McPhillips: Well, they're good questions. [00:05:00] That's why they're here. Claire pulled them out and is like, these are legit good questions. Okay, let's get started.

[00:05:04] Paul Roetzer: Let's do

[00:05:05] Question #1

[00:05:05] Cathy McPhillips: number one. How can someone with a marketing background transition into the AI space without a strong coding background? Are there roles in AI marketing, product management, or AI strategy?

[00:05:15] And I guess that applies for any industry.

[00:05:17] Paul Roetzer: Yeah, I mean, increasingly the coding background is just not necessary. Now, it's not like people who are expert coders don't have a role in what they're doing; they're just getting superpowers with these AI systems.

[00:05:33] But for people who aren't coders, yeah, there's a world of opportunities in marketing. If anything, it's enriching traditional marketing roles by enabling coding. Me and Cathy, who traditionally have no coding ability, we can now go into something like Lovable or Claude Code or Google Gemini, and we can do coding using natural language.

[00:05:52] So, yeah, I think there are tremendous opportunities to blend what's always been done by marketers and to enhance it now with the [00:06:00] coding capabilities of these different AI models.

[00:06:03] Question #2

[00:06:03] Cathy McPhillips: Great. Number two, we talk a lot about upskilling and reskilling in your current role, but if someone is currently unemployed or looking for a job, what are the best parts of AI to learn to become AI forward or a stronger candidate?

[00:06:17] Paul Roetzer: Prompting, you know, learning how to talk to the machine is critical. You know, learning how to treat it like a collaborator, a thought partner. So the first thing I would focus on is your prompting abilities and being able to use these tools to not only enhance the outputs you create and your creative, but your critical thinking.

[00:06:36] So that to me is the first step. And then, I guess prior to that is just the basic understanding, like a deep understanding of AI and what it's capable of, and the different features and apps that are within these different platforms. And then the other thing that would make you a stronger candidate, I think, is demonstrated competency, that you're actually using the tools. And if you're unemployed, you can gain this through, you know, [00:07:00] going and getting certificates to show that you're continuing your learning and education.

[00:07:03] But then building Gems in Google Gemini, or GPTs in ChatGPT, or developing skills in Claude. Like, do things, and it can be in your personal life: you know, a health tracker or a travel agent, a trip planner. Just do things with it. So when you're sitting there, you know, we ask this question in our interviews: how are you using AI personally and professionally?

[00:07:25] So, you know, the personal side is probably where you're gonna have to focus at the moment. You can talk about how you're using it, how you use guided learning in Google Gemini to help your kids with their homework, and how you've built, you know, these different tools.

[00:07:37] So I would say just build things, do things. The barriers have come down for all of us to actually create really interesting things and to demonstrate our abilities. Like, one that comes to mind for me right now is a wealth manager. As I referenced up front, I had to deal with tax stuff this morning.

[00:07:55] And so my mind is on the financial side. And as my life, you know, [00:08:00] as I get older, there are different complexities related to the number of businesses I own and how those businesses are structured, and my personal taxes and the business taxes, and, you know, investing for my kids' future, like getting ready for college in a few years.

[00:08:12] Like all these things, it's like, oh, I think I need to talk to somebody. It's becoming more complicated than I can handle. And I'm like, or I just need to build a Gem. It's a starting point. I'm not saying I don't need a professional advisor, but I think most of the questions I have are things that I could probably just talk to a Google Gem about.

[00:08:33] And so I'm like, that's kind of the example. Just build things, do things that provide value to you in your life and demonstrate your competencies.

[00:08:41] Cathy McPhillips: And the other thing is to maybe use it on something you know really well. So when you see the output, you can learn how to better prompt it, to say, oh, that's actually not the right answer,

[00:08:50] or it could have been a richer answer, and things like that.

[00:08:56] Question #3

[00:09:11] Cathy McPhillips: Okay, number three. For contractors or consultants who bill by the hour, how should they think about time spent experimenting with prompts or building AI workflows? It can be hard to justify billing for tinkering even if that experimentation ultimately replaces hours of manual work.

[00:09:12] Paul Roetzer: I just don't understand how billing by the hour is a thing anymore. It's, the value exchange is just broken. Like, you know, if I'm paying an advisor, it can be a legal advisor, HR advisor, consultant, whatever it is, I'm not paying for their hours. I'm paying for the output or the value that they create, and I know they can work more efficiently, and I want a fair value exchange.

[00:09:48] Like if, Cathy, you were functioning as a consultant to me, and I came to you with a high-value thing I wanted you to help me solve, and you solved it in 25 minutes because [00:10:00] you gave the right prompt and then you verified the output and put in your own expertise and context. And like an hour later, you email me and say, okay, I think I've solved it.

[00:10:10] Here's what I would recommend. Here's the output from it. It's eight pages. I've summarized it into these three points that answer the question you were looking for guidance on. Should I pay you for 25 minutes? No. I should pay you because you solved a problem for me. The trick comes in with people who have always just charged by the hour, and it's the only way they really know how to do it.

[00:10:32] But I just, I don't know, I don't even know how to answer this question, because I just don't think billable hours are a viable way for anyone to be charging for work anymore. Should you charge them for experimentation with your prompts and stuff? If it's specific to that project, maybe. But again,

[00:10:50] As the person hiring that contractor or consultant, I'm sort of expecting a level of AI literacy and competency when it comes to [00:11:00] prompting and building AI workflows. And it's part of why I'm hiring you; I assume you're using AI to sort of give you superpowers in these things. So, I don't know.

[00:11:09] But again, I wrote a book in 2012 where I said to eliminate billable hours; that was the title of chapter one. So I'm kind of like, I have some bias in the game, where I've always thought billable was an inefficient system for both parties. I understand in some cases it's just sort of essential, like there's no other way to do it.

[00:11:31] But I would just say, anywhere possible, just get rid of billable hours and charge for the reasonable value of the output you're creating or the problem you're solving. It's always gonna be a better scenario for everybody.

[00:11:44] Question #4

[00:11:44] Cathy McPhillips: Yep. Number four. There is lots of talk about AI increasing productivity, but how do we make sure we're not quietly hiding things like weaker thinking, overreliance, and poor decision-making behind speed and efficiency?

[00:11:58] At what point does AI stop being a [00:12:00] benefit to a business and start damaging it?

[00:12:03] Paul Roetzer: I think about this one a lot. I actually wrote a little bit, in a related way, in my Exec AI Insider newsletter this past Sunday, about this idea that increasing productivity should in theory be giving us back time.

[00:12:18] So I was more focusing on the time aspect of it, but there is this challenge where we become too reliant on it and we lose, especially, the training grounds for younger employees. Something else I've been thinking a lot about is how do we create job opportunities for entry-level and associate-level employees,

[00:12:37] but also how do we develop them to be the experts we've all become after doing things for decades in this industry, or, you know, years, depending on how long you've been at it. And that's, I think, the challenge: there's no solid organizational structure or change management process that I have seen yet, or heard of, within a company that is properly addressing [00:13:00]

[00:13:00] this. And the same challenge is faced in schools right now, where the shortcut is there for all of us. It's like, I gotta do this thing. Ah, I'll just give it a few prompts and I'll get to the output. And it's easy, especially at a younger age if you're not properly trained, to just think that that's enough.

[00:13:16] And I did the work and I created the output, and yeah, I checked it, I verified, everything seemed good. But you're not deeply thinking about the topic. You're not gaining confidence to present the topic or argue a position based on the topic or the output. And so yes, there is always an exchange, and we have to be very intentional as leaders to not let that slip, especially if you're in an environment where you're being pressured to reduce staff, because it's just gonna compound the work.

[00:13:42] Like if I came to you, Cathy, let's fast forward and say we had a team of 200 people and you're leading a team of 40 marketers. And I'm like, hey, Cathy, we don't really need 10 of them, or 20 of them, let's cut that staff. Well, that doesn't reduce your workload. If anything, it's now gonna create more for you to do.

[00:13:58] 'cause now the people who are left are just gonna be [00:14:00] using AI even more, and they're just gonna push all that work up to you. And it's like, that sounds horrible. So, I don't know. I think this is sort of a new frontier, and hopefully more companies are at least starting to think about this.

[00:14:12] But, like I said, I just haven't really seen elegant solutions to this yet.

[00:14:17] Cathy McPhillips: Okay.

[00:14:17] Question #5

[00:14:17] Cathy McPhillips: Number five is for people in creative industries, like fiction publishing or fine arts, who see AI as a threat to human creativity. What's the most useful reframe you've seen actually shift that mindset?

[00:14:30] Paul Roetzer: It's interesting. We actually talked about writers and AI on the weekly episode this week.

[00:14:36] So episode 203, if you haven't heard it yet: I touched on this, and I actually shared some of the insights, Cathy, from my AI for Writers keynote from 2025, where I was basically explaining that it's a choice. Like I think that, increasingly, writers, artists, musicians, we're all gonna have to come to [00:15:00]

[00:15:00] grips. And I say this as a writer, and my wife is a fine arts major, a painting major. We have to come to grips with the fact that AI is going to be able to create at a human expert level in all domains. It's gonna be somewhat indistinguishable on the surface. You won't necessarily know if a human did it or if the AI did it just by looking at it or hearing it.

And this was in relation to a New York Times article where they were comparing AI-written outputs versus human-written outputs, and letting people vote, like a taste test basically. And in essence, what came out is that people can't tell the difference, and they actually preferred the AI-written stuff over the human stuff.

[00:15:40] So there's more conversation on episode 203 about that. But I think the key message I had in that episode, and what I had in the keynote, was that we get the choice. Just because AI can do the thing doesn't mean you have to let it. And I do think that there will increasingly be human preference [00:16:00] towards content and creative that they know came from a human.

[00:16:03] Cathy McPhillips: Mm-hmm.

[00:16:04] Paul Roetzer: So while we might not be able to distinguish it on the surface, if I show you A and B, whether it's artwork, or A and B from a musical standpoint, or A and B from a text output, and then I tell you the story behind the human creator and the fact that AI did this with these three prompts, your emotional connection will transfer to the human one most times.

[00:16:26] Like, that's what you're gonna lean toward a preference for. You know, it's interesting, my son was in a talent show this weekend at his school, called Saint Showcase. And I was listening to the one kid, an exceptional piano player. He has played at Carnegie Hall, and so you're watching him play.

[00:16:48] And I actually found myself thinking of the beauty of that. Like, I would never wanna sit there and watch an AI play a piano. And the imperfections of someone at that age who's so exceptional at their [00:17:00] craft, I couldn't even notice them if they existed. I wouldn't even know if he played a key wrong.

[00:17:05] But just to watch that and know he is doing it on stage, and it's because of all this training. A hundred times out of a hundred, I will choose to listen to that, or watch these kids do what they're doing, versus watching AI do what it does, which is just gonna be perfect. So the imperfections are probably what ends up making human creativity so special in the future.

And the stories behind how they learned their craft and what goes into their craft, AI's just not gonna have those stories. So, I dunno, I'm very optimistic about, like, an explosion of human creativity, and in some ways human-plus-AI creativity. I do think there's a space for that.

I just think it has to be presented in an authentic way.

[00:17:47] Cathy McPhillips: And to that point, you might look at AI art because you're interested in what the developments are, but that's not going to replace you looking at other art. So it's not an either-or, in your case.

[00:17:59] Paul Roetzer: [00:18:00] No. And I just think there's gonna be a balance.

There's gonna be some people who love the AI stuff. It's kinda like, I don't know, in some ways I think music might be a good parallel. Like, I love live music. I love being at concerts, but I also love listening to live albums. I want the rawness of that live experience, like the Unplugged experience. I used to love that series on MTV, the Unplugged series.

[00:18:24] And it's like, I want that from the human creators. I want the authenticity and the rawness of live experiences that you're just not gonna get from the manufactured stuff. Not saying, you know, all other music isn't great, but even with the introduction of all the other elements of music over the last couple decades, there's still just something about that live performance that's different to me than a studio-produced album.

And I think, like, creatives start becoming that way. You accept that AI is helping create things, and that's fine, but there's always gonna be [00:19:00] people who just prefer the, you know, the untouched, I guess. I dunno.

[00:19:02] Cathy McPhillips: Yep.

[00:19:04] Question #6

[00:19:04] Cathy McPhillips: Number six. What if your organization doesn't have an AI approach and it's just a Wild West free-for-all, with everyone using whatever personal AI tool they want, however they want?

[00:19:14] How do you approach wrangling your team and moving forward?

[00:19:17] Paul Roetzer: So, funny enough, on episode 203 of the podcast, the AI Pulse we did was actually somewhat related to this, and it asked how AI is being used within the organization. The dominant answer was, we're just doing whatever we want.

[00:19:29] Now, it's an informal poll, like 70 people or something, but if I'm not mistaken, it was like 62% who said, we just use whatever we want to use. So I assume that's mostly small and mid-sized businesses. I can't imagine too many enterprises are like that. You wrangle your team by actually putting guardrails in place and giving them approved tools.

[00:19:51] Too often we see that first misstep, where no one has actually given a structured platform to people. They haven't, like, gone and [00:20:00] got Gemini licenses and said, okay, everyone has access to Gemini, and here are your standard use cases, and we've built the first five Gems for you, and they're personalized based on your role or your department or team or whatever.

[00:20:09] So you have to approach it from a change management perspective, and that has to come from the top. It's really hard to do that as the chief marketing officer or the head of sales if you don't have support from the C-suite to actually go about doing this, and if you don't work well with and align with IT and procurement and legal.

[00:20:26] There are just steps that have to be taken. But I think access to the technology and then proper training are probably the first two fundamental steps that every organization needs to take.

[00:20:38] Cathy McPhillips: Sure. I'm gonna stop you there because I know you could go on, but I think number seven's question will touch on a few other things.

[00:20:44] Alright. So you don't have to repeat yourself.

[00:20:45] Question #7

[00:20:46] Cathy McPhillips: Number seven: where have you seen personalized training done well? Is there any advice on how to rethink training at an enterprise level? Digital literacy requires curiosity and experimentation at the individual level, and making [00:21:00] personalized training scalable is difficult.

[00:21:02] Do you have any advice on that?

[00:21:04] Paul Roetzer: I can just talk to how we're thinking about it through our AI Academy, and how we're trying to help our partners and our clients work on AI literacy within their organizations through that. And I think the first step that we're really starting to guide people around is that they need to do a survey of their people.

[00:21:22] They need to figure out where people are at with their comprehension and competency with the AI tools, but also where they're at with their feelings about AI. The reality is that there's just gonna be people across teams and enterprises that don't want anything to do with it. And maybe they're a writer and they just hate it, or they think it threatens their future, or they're a graphic designer or video production person, or, you know, an expert who feels like AI just can't do what I do.

[00:21:46] Like, there's lots of different reasons why people wouldn't want the personalized training. So first you have to break down the human barrier of resistance to change, and the friction that can come from whatever the reason is that they're not interested in AI [00:22:00] or haven't already taken the initiative to do it.

[00:22:02] And then from there, what I often tell people is you have to help them realize the value through a use case that matters to them. And often that comes from a use case involving something they hate doing. So what I'll often teach people is: go in and find the parts of your job you don't enjoy.

[00:22:21] Like, every week, what's something you do where you just don't look forward to it? And then say, can we create a custom AI to help you with that thing? And so you can break down that barrier by finding something that creates value for them. And they don't even need to have a deep understanding of AI yet at that point.

[00:22:39] They just need to start opening their mind to it. But if you don't start with that survey and that sentiment analysis of where people are at on their journey, and maybe many of them haven't started for different reasons, then you're not gonna be able to personalize the training properly. Then, once you kind of break those barriers down, you can actually start to structure it. So we think, at least through Academy, [00:23:00] of fundamentals first: what's the horizontal information everybody needs to know, like the basics of AI.

[00:23:06] And then we'll get into, you know, identification of use cases and problems, so that you can personalize your AI knowledge to yourself. But then we create AI for industries, so we try and attack it by, like, what are the different industries you might be in. Then we do AI for departments, so you can then go into marketing and sales and customer success.

[00:23:23] We'll do AI by business types, AI by different roles. And so we're trying to basically create a collection of resources that allow people to personalize their learning, or allow admins to personalize learning, based on where people are and where they want to go in their career.

[00:23:39] Cathy McPhillips: Love it.

[00:23:41] Question #8

[00:23:41] Cathy McPhillips: Number eight.

[00:23:42] AI adoption moves fastest in areas that are already outsourced. It's essentially a vendor swap, but legal and privacy concerns are the primary hurdles. How do you approach legal stakeholders so they're seen as enablers of innovation rather than friction, or worse yet, a roadblock? How do we frame it so all groups [00:24:00] feel supported?

[00:24:00] Yeah,

[00:24:01] Paul Roetzer: We talked about the outsourcing thing on the last weekly episode. Not... what was this morning's? 203?

[00:24:09] Cathy McPhillips: 203

[00:24:09] Paul Roetzer: 202 or 201? I'm losing track with all the AI answers episodes.

[00:24:13] Cathy McPhillips: Oh, it was 201. You're right.

[00:24:15] Paul Roetzer: 201, okay. Yep. So I talked about this idea that outsourcing is the most obvious thing 'cause it doesn't impact your people.

[00:24:20] You've already proven that it's easy to, like, have someone else do the work. So having AI agents do the work that was previously outsourced is a very natural thing in terms of a vendor swap. I see these as almost separate issues, but with the legal and privacy concerns being primary hurdles, you know, we've always said you have to involve legal, IT, and procurement from day one.

[00:24:40] Like, even if you have the autonomy as a leader of a department or your organization to go do your thing and, like, go get, you know, licenses for your teams, you still want to involve those different areas. You wanna be aligned with them. You wanna understand: what are the areas where they're resistant to [00:25:00] infusing AI, and why are they resistant?

[00:25:02] You know, I think the more you work together and understand the perspective you're each coming from, the better you're going to be able to drive adoption and not run into those issues. So I often advise people like, do an audit upfront, like sit down, talk with legal, say where, where are you at with your understanding?

[00:25:17] Where are we at in terms of tool use and access to platforms? What are you seeing as your biggest concerns and what are the risks that the organization is watching for? And then how do we steer towards very low to no risk applications and use cases so that you can keep doing what you're doing at that macro level.

[00:25:35] And I think the more open you are, and you're not just, like, butting heads right away, you know, pushing back on each other, and you actually come to a point of understanding. I mean, this is how I just approach life in general: just understand each other, see where they're coming from.

[00:25:49] Something you might think is illogical might actually make a ton of sense if you understand it. Like, we shared the story today, and again, I'm saying today because I recorded episode 203 today; it came [00:26:00] out on Tuesday, 'cause you're listening to this maybe on Thursday. It was about McKinsey. They have this internal AI system that somebody hacked because they left APIs exposed, and this quote-unquote hacker got access to basically everything, like the inner workings of how McKinsey works.

[00:26:17] In essence, they accessed, through this AI model, all the system prompts, thousands of accounts. And so there's a reason why legal is hesitant. There's a reason why, especially in a bigger enterprise, they move slow. With lots of unknowns, there's lots of risks, and you have to accept that. We have to work together on things and not just, like, assume the other person's position is absurd.

[00:26:39] 'cause. There's usually something in the middle to be found.

[00:26:43] Cathy McPhillips: Yeah. My son's actually dealing with this right now. He's in IT, and he'll make a recommendation, and then he'll go to cybersecurity before he gives his recommendation to the group that's asking him. He's like, I don't know all that stuff.

[00:26:54] I mean, he's young. Oh, yeah. Which, you know, but also he's like, there are a lot of other implications I don't think about. [00:27:00]

[00:27:00] Paul Roetzer: Totally. Yeah. There's stuff on the cybersecurity side I don't want to think about until I have to deal with it. It's like taxes. I just don't want to even talk to anyone, but I know I have to.

[00:27:11] It's the reality of where we are.

[00:27:13] Cathy McPhillips: Yep. I'm gonna take a quick break and talk to you about our State of AI for Business Report. I believe you talked about it earlier this week on the podcast, but as a reminder, we run our annual State of AI reports, and this is our first year of doing one for business.

[00:27:25] Actually, we've done it for marketing the past few years, and it's an expansion of that report. So this year we're going beyond marketing-specific research to uncover how AI is being adopted and utilized across the entire organization. And to do that, we would like all of your responses. So we're looking for thousands of business professionals across all industries and functions.

[00:27:43] And we would love to have you be one of them. If you've already taken it, pass it on to your team. We'd love that. It takes about five to seven minutes to complete. And in return, you'll get a copy of the full report before it goes live, before it drops, and a chance to win or extend a 12-month SmarterX

[00:27:58] AI Academy [00:28:00] Mastery membership. So go to SmarterX.ai/survey to share your input. We thank you for that.

[00:28:06] Question #9

[00:28:06] Cathy McPhillips: And onto question number nine: how do you see AI adoption rates picking up in traditional industries like manufacturing? It seems like these organizations have a long way to go to realize the potential. And I don't know if I agree with that question, before you answer.

[00:28:19] I think manufacturing actually is doing a really good job. Do you have more insights?

[00:28:24] Paul Roetzer: I would say there's probably segments of manufacturing that are doing a really good job. The only parallel I can draw is back when I owned my marketing agency. So I had an agency for 16 years, sold it in 2021. There was a good portion of the time where manufacturing was our largest segment, the largest industry we worked in.

[00:28:41] It was like 25 to 30% of our revenue at one point, and there were pockets of manufacturing that were racing ahead in CRM adoption. So I'll put this in the context of, like, digital transformation and CRM integration and things like that. And so there were definitely some that were all about it, and we [00:29:00] were driving HubSpot adoption within these organizations.

[00:29:02] And then there were others we would go into, and their salespeople refused to stop working on yellow notepads or Excel workbooks. They would not enter information into a CRM system, and they would never log into the CRM system. So I just think that there are some industries that are naturally slower. But again, you can't make these industries into these homogenous groups.

[00:29:21] Like, there are always segments within verticals within these industries that are probably doing this really well. I think, you know, with any slow adoption, regardless of what the reason is, you have to create a sense of urgency at the highest level. If the CEO isn't bought in, and if the board isn't pushing for change, then everything's gonna happen in silos and pockets, and, like, the marketing team is gonna race ahead while everybody else is left behind.

[00:29:45] That's a common thing within AI adoption right now. So I think you have to find the trigger points that get them to move. You have to know what is the thing that drives the CEO to make decisions, to put prioritization on different, you know, change [00:30:00] management issues or growth initiatives, whatever it is.

[00:30:02] And that could be an executive briefing with someone they trust. It's like, get someone in from the outside who can sit there and say, here's what's going on. I do a lot of this. I'll go in and do, like, state of AI executive briefings for teams, and sometimes, I would say, the leadership is unsure.

[00:30:16] Like, you'll go into especially bigger enterprises that are slower moving, like, you know, financial institutions, healthcare organizations, manufacturing organizations, and you're gonna have these AI champions internally who are doing everything, and they're excited and they're moving fast. And then you're gonna have the rest of the people in HR and operations and different areas.

[00:30:34] It's just like legal, where they just don't see it. They don't get the application to what they're doing. Nobody's personalized use cases for them. Nobody's giving them the "this is moving really fast and it's gonna change our business" talk. And so sometimes that's what it takes. It can be a one-hour, two-hour executive meeting where you just have open conversations, like, questions being asked in a setting where they're not gonna be made to feel stupid that they don't know the answers.

[00:30:58] Like, it's okay to [00:31:00] just have honest conversations. But you gotta know what moves your management team. And I'm just increasingly of the belief that it has to be C-suite driven. Like, it's gotta come from the top. Those people have to be convinced of the need to prioritize AI transformation and the urgency with which they need to do it.

[00:31:19] Otherwise, they're just gonna lag behind regardless of what industry it's in.

[00:31:22] Cathy McPhillips: Yep.

[00:31:24] Question #10

[00:31:36] Cathy McPhillips: Number 10: my company is behind with the digitalization of HR processes, and now AI is here. While we still face this challenge, is this actually an opportunity for leapfrogging, and if so, how?

[00:31:37] Paul Roetzer: Yeah, I suppose it can be. I mean, we're definitely spending more time ourselves thinking about what the future organizational chart looks like.

[00:31:46] Anytime you deal with this kind of significant change, you know, the hiring process is different, the talent evaluation process is different, what the org chart looks like, what career paths look [00:32:00] like. Actually, just this weekend, I shared a little bit with the team internally on Friday about this kind of thought process I was going through.

[00:32:07] I'm not gonna get into that now, Cathy, but I think a lot about this, and I do think that there's an opportunity to actually kind of reimagine what we all do, like what we're trained to do and the role we're all going to play in the future of work, and how, at the different stages, from associate to manager to director to VP to, you know, the C-suite, what those roles look like.

[00:32:35] My theory, again without getting into too many details here, is that increasingly I don't think the roles we have all grown up with are gonna look anything like they do now in three years. And I do think that there's a chance. I actually was debating if I could do this by our annual meeting together on Thursday with our company.[00:33:00]

[00:33:00] I think I have an idea of what it might look like instead, which I've been searching for for a few years. So I do think that there's a leapfrog opportunity, because the stuff I was working on, even as of this morning, is just a very different way of thinking about work and roles and career paths.

[00:33:24] And that would then translate over to our own HR processes: how we recruit people, what the interview process looks like, how we do assessments, how we guide people through their careers, how we accelerate their ability to gain experience and expertise when AI is doing a lot of the entry-level work for them that it didn't used to do.

[00:33:45] So, yeah, I don't have the answers I can share with you right now on, like, the how part of this, but I do think there's gonna be a way to do this in a pretty transformational way. And I think enterprises are gonna have to figure out how to find a middle [00:34:00] ground, because major change is never something that humans are huge fans of.

[00:34:06] But again, that's why I always say, for building an AI-native company, there's just never been a greater time. 'Cause you can just do things. Like, you can just say, hey, we're gonna approach it this way. Here's what the titles are gonna be. Here's what it means, and here's, you know, the skills and traits we're gonna look for.

[00:34:19] And we're gonna infuse this into training right now, and starting next week we're gonna do this. Like, you can do that stuff when you're young and kind of forming what the organization looks like. You can't just flip a switch and do that with the big companies.

[00:34:31] Cathy McPhillips: Yep.

[00:34:33] Question #11

[00:34:33] Cathy McPhillips: Number 11. This was mentioned in class, but I liked it a lot.

[00:34:37] If AI begins replacing large amounts of human labor due to cost advantages, should we expect AI labs to eventually price their products closer to the labor they replace rather than the marginal cost of the technology?

[00:34:48] Paul Roetzer: I said this on, I don't remember what episode it was, it's been in the last 30 days.

[00:34:54] One of those episodes, I talked about the idea of labor replacement cost. And I actually think it's one of the most logical pricing models. Well, I don't advocate for it, but I do think it is one of the most logical. I don't think legacy software companies will be able to do it in the near term, because it's a PR nightmare to position your products that way, as labor replacement.

[00:35:16] But I do think AI-native companies like Mechanize and others are a hundred percent going to do this. They're going to say, hey, you have five customer service people right now, and you're spending $800,000 a year, and they're resolving X number of tickets per day, and your response time is this. We can triple the number of tickets per day they can handle, or we can cut response time in half, at least.

[00:35:46] And we can do it with one agent for $250,000 a year. If that is true, if it actually works that way, there isn't a publicly traded CEO in the world that can [00:36:00] say, no thank you. They have a fiduciary responsibility to do that. That is very messy. Like, what that means economically, what it means to jobs, is extremely messy.
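Paul's back-of-envelope comparison can be sketched in a few lines. These are the hypothetical figures from the scenario above, not real pricing:

```python
# Hypothetical labor-replacement pricing comparison using the figures from
# the customer service example: five reps vs. one AI agent priced against
# the labor it replaces.
human_team_cost = 800_000  # $/year for five customer service people (example figure)
agent_cost = 250_000       # $/year for one AI agent (example figure)

savings = human_team_cost - agent_cost
savings_pct = 100 * savings / human_team_cost

print(f"Annual savings: ${savings:,} ({savings_pct:.1f}% of current labor cost)")
```

The point of the arithmetic is the fiduciary argument that follows: when the delta is this large and the output claims hold up, the decision is hard for a CFO to refuse.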

[00:36:13] But I do think that the best pricing model for AI is outcome-based and value-based. And this idea of metering it and treating it like a utility, and credit pricing, I think will have seen its day very quickly, because it's very abstract and it's impossible to budget for on the buyer's end, on the, you know, company end.

[00:36:39] I know that the cost of compute is dropping 10x every year, and I'm gonna get really pissed if the companies we're relying on aren't passing that savings on to us. And I'm just gonna go build my own versions of things, or I'm gonna go find an AI-native company that's willing to price it differently. So yeah, I think there's a lot of disruption coming to pricing.

[00:36:59] And I do think [00:37:00] labor replacement cost is actually the most logical way to talk to finance and HR. It's the thing that makes the most sense to them. You go talk to a CFO and say, yeah, we're gonna charge you credits. You might use a thousand, you might use 50,000, but if you cap it at a thousand, then your AI's gonna get shut off after seven days.

[00:37:19] Cathy McPhillips: It's like, what? Or you're paying to upgrade to a level that you don't want. Right.

[00:37:23] Paul Roetzer: You're just gonna constantly bump up that credit limit, and they'll be like, excuse me? No. So instead, if I say, hey, listen, you have unlimited for this level of outputs, it's basically doing the work of 10 marketers.

[00:37:35] It's gonna cost you $50,000 a month. And you say,

[00:37:40] Cathy McPhillips: okay,

[00:37:41] Paul Roetzer: Like, it's how they're gonna think. I don't know how that isn't the eventual way they do it. It just seems too obvious that they have to find a way to do it. It's just very messy, and it's not gonna look good.

[00:37:53] Cathy McPhillips: Definitely.

[00:37:55] Question #12

[00:37:55] Cathy McPhillips: Number 12. You touched on this last time, on AI Answers episode 202: what exactly is a swarm, and why does it matter for how AI systems will work in the future?

[00:38:04] Paul Roetzer: Yeah, swarm is just, like, an informal way to explain a bunch of agents working together. I think for me it probably came from an

[00:38:11] Ilya Sutskever article in the Atlantic years ago where he talked about swarms of agents, and I think it's just a term that's always stuck in my mind. I don't even know that it's, like, the fully accepted term in the industry. But in essence, it just means, like, I've said a symphony of agents, I've said a swarm of agents.

[00:38:27] It just means, like, let's say I have 10 different agents. And if we stay on, like, a marketing example: I've got my email agent that does the writing and segmentation of the email database. I've got my media buying agent that, you know, knows the go-to-market strategy and the goals, and it actually figures out what markets to put things in.

[00:38:44] I've got my creative agent that does all the creative outputs, and it's trained with a slightly different system prompt, and it's trained on the brand identity standards. I've got my strategy agent that oversees the whole thing. So, like you would have a marketing team, you basically have agents that are highly trained to do these specific functions, but then they [00:39:00] live in an environment where they all collaborate and work together.

[00:39:02] And so Cathy could say, all right, we wanna launch some new AI certification series. Here's access to all the previous game plans. Here's access to the data, so you can go figure out what worked and what didn't in those campaigns. Now go do your thing. I'll check back in tomorrow night. And Cathy hits go, and the swarm of agents starts working together, and they figure all this stuff out and they plan together.

[00:39:23] And maybe at some point they ping Cathy and say, can you approve this plan? Here are the steps we're gonna go through. Great, looks good. Okay, how about this creative? Do you like this direction? We'll create a hundred variations of this for all the different channels. Yep, that sounds good. It just does things, and then Cathy just orchestrates those things.

[00:39:42] And so that's what I mean by a swarm of agents. And I understand that sounds really weird. Yeah, but I think I said on the recent episode, like, I do believe that by the end of this year, there are going to be many instances of, like, early adopters, people who are out on the [00:40:00] frontiers, who are running their marketing, sales, and success teams in this way, where the human is largely just orchestrating these groups of agents working together.
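A swarm like the one described here could be sketched, very loosely, as a set of specialized agents plus an orchestration step. Everything below is illustrative: the roles, prompts, and stubbed `run()` stand in for real model calls.

```python
from dataclasses import dataclass

# Minimal sketch of a "swarm": specialized agents that each handle one
# marketing function, coordinated for human review. In a real system each
# run() would send the system prompt plus the task to a model; here it's a stub.

@dataclass
class Agent:
    role: str
    system_prompt: str  # what this agent is specialized to do

    def run(self, task: str) -> str:
        # Stub standing in for a model call.
        return f"[{self.role}] plan for: {task}"

def orchestrate(agents: list[Agent], campaign_brief: str) -> list[str]:
    """Fan the brief out to every agent and collect their plans for human
    review, the 'check back in tomorrow night' step from the example."""
    return [agent.run(campaign_brief) for agent in agents]

swarm = [
    Agent("email", "Write and segment email campaigns."),
    Agent("media-buying", "Allocate ad spend against go-to-market goals."),
    Agent("creative", "Produce on-brand creative variations."),
    Agent("strategy", "Oversee the campaign and reconcile the other plans."),
]

for plan in orchestrate(swarm, "launch the new AI certification series"):
    print(plan)
```

The design choice the transcript is gesturing at is that the human moves from doing the work to approving the plans the swarm produces.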

[00:40:09] Cathy McPhillips: Yeah. I'm looking forward to that,

[00:40:11] Paul Roetzer: And it sounds amazing and it sounds terrifying at the same time. A hundred percent. It creates all these dominoes of, whoa, right? What does that mean? What happens to jobs? That's the challenge.

[00:40:21] Cathy McPhillips: And just the interconnectivity of it all. You know, if it's doing this for this agent and this for this agent, you really need to watch what the process is.

[00:40:31] Paul Roetzer: Yeah. And then, actually, in episode 203 we talked about AI agents gone wrong within Amazon and AWS. They had some major issues recently where it was, like, some oddly written code that seemed to be messing with some stuff, and it took down their infrastructure for like 13 hours, which isn't good if you're Amazon. But you can see this scenario playing out with this example.

[00:40:51] It's like, Cathy's overseeing these 10 agents, and they seem to be doing everything, and it's going great, and, like, you know, you're spot-checking stuff and it all looks really good. And then there's [00:41:00] this one little detail that you didn't realize the media buying agent was doing, and all of a sudden it goes haywire and it starts putting your ads somewhere they shouldn't be.

[00:41:07] Or it starts spending money you didn't think it had access to, or it accessed a database it shouldn't have accessed. And, like, who knows all the things that could now go wrong that you have to contingency plan for. And again, I don't know of anyone who is doing that. Thinking about using these AI agents working together?

[00:41:27] Yes, lots of people. But the contingency plans and scenario planning of what are all the things that could then go wrong in that environment, or what it means to our staff and our hiring plans? No. 'Cause if you play this out, like, let's say theoretically this is possible, and by the end of this year there's gonna be a bunch of, like, AI-native startups that are gonna allow us to just go hire these agents that do the things I just explained.

[00:41:50] Or HubSpot enables it within their platform. Then you go to HR and say, hey, we're gonna put all these [00:42:00] agent swarms in place, and they're gonna do the work of the people that are doing this in marketing today, and we actually need to start hiring people who can just orchestrate these swarms of agents.

[00:42:08] HR is gonna be like, what? What are you talking about? Because HR doesn't even know these things are a thing, and they don't even know what an agent is. That's why I don't think, even if it's technically possible, which I actually do think it will be in many departments and teams,

[00:42:25] it does not mean that we will see these things being rolled out across industries. Because the downstream effects of this happening are so immense, and we are so unprepared, I don't know that work will look fundamentally different to most people, even if it's technologically possible.

[00:42:46] Cathy McPhillips: Yeah, I'm just thinking about, like, other human skills that maybe are trainable, you know, like management. How do you teach someone due diligence? How do you teach processes? All of those sorts of things seem like they'll be [00:43:00] critical right now, more than ever.

[00:43:02] Paul Roetzer: That's the stuff I've been thinking about a lot, which hopefully I'll have something to share on later.

[00:43:05] I don't know. I think the thing I floated to the team on Friday, like, I might just do as a MAICON keynote. Like, I'm pretty sure by October I could flesh this out. But I don't know.

[00:43:16] Cathy McPhillips: I'm gonna say, I'm gonna say to everyone, buy your MAICON tickets so you can hear it.

[00:43:19] Paul Roetzer: Yeah, it'll definitely be in there. It'll be part of it. It might be the talk at MAICON.

[00:43:22] I don't know. I don't know. I made progress this morning. We'll see. I'm really excited about the direction. I have more questions right now than I had when it started Friday night.

[00:43:34] Question #13

[00:43:34] Cathy McPhillips: Okay. Number 13. Do reasoning models actually reason or are they simply predicting the next word and then rationalizing the answer after the fact?

[00:43:44] Paul Roetzer: It's more of a philosophical question, honestly. I answered something like this recently. I think I said something to the degree of, like, we don't really know how humans reason. We [00:44:00] know humans go through, like, system two thinking, where we go through this chain of thought, and we go through this reasoning process to, like, play out scenarios, and we infuse, like, our own experience, and, you know, we observe the world around us, and all this stuff goes into our thinking, and then we make these decisions.

[00:44:15] But some people would argue that the human process is actually not that different than what the machines are doing. So I don't know the answer to this, and I would say that the AI labs themselves and leading AI researchers aren't a hundred percent clear on the answer to this.

[00:44:33] There are definitely some people who think it is as simple as this, like, next-token or word prediction, and it does that just, you know, thousands of times per second, and then that enables it to kind of, like, go back and check itself and verify the predictions, and that's all that's happening. It's just mathematics, in essence.

[00:44:51] And then there are other people who are like, I think it's aware of its thinking, like it's conscious or sentient, [00:45:00] has metacognition, like it's aware of its own awareness, in essence. And I don't know that we're ever gonna get a definitive answer. Like, at some point we might, but the reason I said this might be philosophical is then you start getting into,

[00:45:13] What is consciousness? Like, how do we know we're aware of our own awareness and thoughts? And I'm not trying to be, like, funny on this. These are truly conversations that actually happen. Anthropic probably talks more about this than most AI labs, this idea of, like, the model being anxious or having consciousness.

[00:45:33] And I think the most recent thing I saw was there was, like, a 20% chance they think it's actually aware of itself and aware of when it's being tested. And so we don't know. But I think it's like anything else: all that matters to me is that it simulates it really well. That we're even debating whether it knows what it's doing or not, or whether it actually is reasoning or not.

[00:45:54] And my argument is it doesn't matter. Like, I don't think it has actual empathy. I don't think it [00:46:00] really feels anything toward its users, but it simulates empathy extremely well, better than some of the humans I know. So the fact that it can do these things, even if it doesn't do them like a human would, the fact is it's still

[00:46:15] Doing it. And it does it like a human. Like, it simulates the outcome that a human would be able to execute.
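For readers who want the "it's just next-token prediction" view made concrete, here is a toy greedy decoding loop. The probability table is invented purely for illustration; real models learn distributions over tens of thousands of tokens.

```python
# Toy illustration of next-token prediction: at each step the "model" scores
# candidate next tokens given everything generated so far, and the most
# likely one is appended. The probabilities below are made up for this demo.
next_token_probs = {
    ("the",): {"model": 0.6, "cat": 0.4},
    ("the", "model"): {"predicts": 0.7, "reasons": 0.3},
    ("the", "model", "predicts"): {"tokens": 0.9, "words": 0.1},
}

def generate(prompt: tuple[str, ...], steps: int) -> tuple[str, ...]:
    tokens = prompt
    for _ in range(steps):
        candidates = next_token_probs.get(tokens)
        if not candidates:
            break
        # Greedy decoding: pick the highest-probability continuation.
        tokens = tokens + (max(candidates, key=candidates.get),)
    return tokens

print(" ".join(generate(("the",), 3)))  # prints: the model predicts tokens
```

Whether a loop like this, run at enormous scale with chain-of-thought in between, counts as "reasoning" is exactly the philosophical question the episode leaves open.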

[00:46:22] Cathy McPhillips: So yes,

[00:46:23] Paul Roetzer: hundred, I don't think we're ever gonna have a great answer.

[00:46:25] Cathy McPhillips: Us reasoning and then rationalizing is a hundred percent a human thing. Well,

[00:46:30] Paul Roetzer: yeah, but I'm just saying like,

[00:46:32] Cathy McPhillips: No, I'm agreeing. It's just like, it's the same thing we're doing.

[00:46:35] We'll make a decision and then we're gonna rationalize it.

[00:46:37] Paul Roetzer: Right. But I can explain to you, like, how I did it. I don't know. It is just a fascinating conversation. This is one of those things, like, after a glass of scotch, I love to sit around and talk about these things. 'Cause I don't know, but they're fun things to think about.

[00:46:52] Cathy McPhillips: Sure.

[00:46:54] Question #14

[00:46:54] Cathy McPhillips: Number 14: if the FTC prevents one company from maintaining monopoly power, should the same philosophy be applied to AI companies to preserve variety in human thinking?

[00:47:06] Paul Roetzer: I don't know. I mean, I think there's legal elements to this. There's societal elements to this. There's the bias of who builds these models and the constitutions they put into them

[00:47:17] Cathy McPhillips: and what they were trained on, all of that.

[00:47:19] Paul Roetzer: Yeah. And again, this came up in episode 203. We'd actually talked about the government claiming that the fact that Anthropic's Claude has a constitution, and maybe is conscious, is actually part of the reason they consider it a supply chain risk, which I didn't really understand the logic of. But anyway.

[00:47:35] Yeah, I think that right now, I mean, the government runs the risk of mandating the type of thinking it wants in its models, not allowing them to have a variety of thinking and constitutions, which is where some of the friction is coming in between Anthropic and the Pentagon. The government is trying to, sort of, they want AI models to think the way they think, so the models should [00:48:00] have the same philosophies and political leanings as the current administration. That, in essence, is what they want.

[00:48:05] And so they want labs that are gonna allow them to take the models and have them output things the way they would. Elon Musk is a great example of this. His whole mission is, like, seeking truth, but it's his truth. It's very much what he thinks is true. And so he has specific sources and specific beliefs that he is absolutely putting into Grok,

[00:48:33] because those align with his thinking. And I'm not saying that's fraud. It's his prerogative. He's building the model. Like, he can have it do whatever he wants. And then maybe that's the spirit of this question: should we allow that? Should there be some models that Republicans like better and some that Democrats like better?

[00:48:49] I don't know. Like, I think it's very dangerous. Humans as the arbiters of truth sounds horrible, but that's also how it's always been. That's how media outlets work, right? They're, in [00:49:00] essence, the arbiters of truth. You could read one publication and watch another channel, and the same thing, where the facts are very clear, could be presented in totally different lights.

[00:49:12] The headline is different based on what they want you to believe or perceive about something. So, I don't know. I think

[00:49:19] Cathy McPhillips: what if AI solved world peace? Like, wouldn't that be amazing?

[00:49:22] Paul Roetzer: That would be incredible. Like, I'd be all for that. Or getting people to be able to have logical conversations based on actual truths would be

[00:49:30] Cathy McPhillips: one could dream

[00:49:31] Paul Roetzer: top of my list.

[00:49:32] Cathy McPhillips: Yep.

[00:49:34] Question #15

[00:49:34] Cathy McPhillips: Number 15. If AI can solve advanced math problems, why can't it solve the technological unemployment conundrum? Will technology eventually be able to solve the problems that technology itself created?

[00:49:44] Paul Roetzer: That is the bet the AI labs are making. So the reason it can solve advanced math problems is 'cause they've been trained to solve advanced math problems.

[00:49:51] They hire mathematicians, experts in math, and they post-train the models on math. Part of the training data is advanced [00:50:00] math problems. So the reason it can do these things is because it has been trained to do these things. And the other thing is, math has verifiable outputs. So you can train the model on something and then know if it was right or wrong.

[00:50:14] And so that allows you to do training in a, I'm not gonna say simplistic, that's the wrong word, but in a more structured way when it has verifiable outputs. There is no clear answer to the unemployment conundrum, and therefore we don't know what the right answer looks like. And the way you train these things is basically like, imagine giving it a math problem or giving it a writing test or something, and then an expert human saying, this is good, this is bad, or I prefer this over this.

[00:50:42] And in the math case again, it's like, okay, you got that correct. When we get into these bigger societal issues, there is no clear answer, and therefore it's hard to know if what it's doing is correct or heading down the right path, especially if it starts to get into more of a superhuman aspect when it comes [00:51:00] to societal issues and issues of defense and security.

[00:51:03] Philosophy, all these things. There's just no clear answer. And if it ends up becoming better than us at solving these things, how does the human then even evaluate the output? It's solving it at like 10-dimensional chess versus what we're able to play. So the bet the labs are making, especially like Google Gemini with their model, is: solve intelligence, and then solve everything else.

[00:51:27] That's basically Demis Hassabis' mission in life: once we solve human intelligence, or beyond, it can then help us solve all these messy things, including, you know, energy and environmental and disease and biology and chemistry, all of these fundamental things. So I choose to believe he's right, because it's the most optimistic outlook for what's possible.

[00:51:51] And I don't see any good in spending my life worrying about the doom and gloom of it all because I can't change any of the things that they're doing at [00:52:00] the lab level. And so I spend my life instead focusing on raising awareness about the issues and creating a sense of urgency to understand them and figure out how they apply to you and your community, and your family and your careers.

[00:52:12] and then believe that this is somehow gonna work out in a very positive way, because it gives me more motivation to keep doing what we're doing, versus sitting here just being a doomer and buying, you know, a bunker in New Zealand and going and hiding from the world.

[00:52:30] Cathy McPhillips: Yeah.

[00:52:30] Paul Roetzer: Doesn't sound like fun to me.

[00:52:33] Cathy McPhillips: Well, I had one last pick-me-up question, just in case this ended on a really raw note.

[00:52:38] Paul Roetzer: Okay. No, listen,

[00:52:39] Cathy McPhillips: it's done. I'm still gonna ask it though.

[00:52:40] Paul Roetzer: All right.

[00:52:40] Question #16

[00:52:40] Cathy McPhillips: So number 16. This past Sunday in your Exec AI Insider newsletter, which, if you are not subscribed, you can find at SmarterX.ai/newsletter, highly recommend it,

[00:52:49] you said: as leaders, we have one chance to get this right. AI can give us the greatest gift of all: time. Or it can just be another technological revolution that expands our work, fills our hours, and [00:53:00] leads us down the path of never-ending productivity gains for profits. We get to choose what to do with the time we are given, both for ourselves and for our teams.

[00:53:08] And I tell people this line all the time because you've said it before. So how are you feeling? Obviously you're feeling optimistic about this. What are some things you can say just in regards to your newsletter message?

[00:53:21] Paul Roetzer: There's just a lot of chatter right now around AI productivity, and this negative belief that companies are just gonna do what they've always done, and any productivity gains we create are just gonna create more work for everybody.

[00:53:35] And I definitely see that already. It's like, oh great, we're saving 20% of time, we're increasing output by two x, let's just get rid of 10% of the staff. And then the people who are left can just do what they're doing and be superhuman, and still work the same amount of hours or more, because now there's more pressure on them to perform and do the work of five people.

[00:53:56] And so you're just seeing this slow [00:54:00] decline, or slipping down this cliff, of we're just gonna do what we've always done. We're gonna take the gains of the technology and we're just gonna fill the time with more work. And what I've always heard from people when I get this question is like, well, isn't that what's gonna happen?

[00:54:15] I'm like, only if that's what your company chooses to let happen. I think we're just gonna have a choice. And again, it does fit into that optimistic outlook. It's like, at some point there's a profit level that's enough, there's a growth level that's enough. And you have to make those decisions as leaders, to say we're happy.

[00:54:37] Now, I understand that's not reality for publicly traded companies; there's just always gonna be pressure to keep performing. But for privately held companies, which is, by the way, the vast majority of companies in the world, I do think we have a choice. And we have our annual meeting; when this podcast drops, we'll actually be in our annual meeting that day.

[00:54:57] And I haven't talked to you, Cathy, about this at all, but [00:55:00] I actually was working on something on Sunday. I was like, well, how do we model this behavior? I don't think the answer, the obvious thing, is like, oh, we're just gonna work four days a week. I think that's ludicrous. I don't think the four-days-a-week thing is a realistic thing.

[00:55:13] It's kind of like, and no offense to anybody who's still running this in their company, but it's like unlimited PTO. That was just a ploy. That's not real. If somebody actually takes advantage of unlimited PTO, they're gonna get fired. You have to work. There's an expectation of output in a company, and amongst your peers, such that you can't actually do something like that.

[00:55:37] And I think talking about a four-day work week or a three-day work week falls in the same bucket: it's just messaging, just a PR thing. Nobody's actually gonna successfully do that. That doesn't mean you can't do things like Friday afternoons off during summer hours, or one day a month we're gonna go volunteer as a team, like a community thing.

[00:55:57] Or we're gonna [00:56:00] reduce the number of meetings by 50%, and we're gonna actually have blocks where no one's allowed to schedule with you. We're gonna give you time to work on work that you find fulfilling. We're gonna do real things that make you enjoy your work more, but do give back some of the time.

[00:56:17] And so, it was again like a random half hour of brain power I had to work on this, but I think I may actually do this with our team on Thursday. It's like, hey, let's take 30 minutes and think about how we model what an AI-forward organization looks like. How do we give back some of that time that AI is giving to us?

[00:56:33] Because I do think it's a gift that we can take advantage of, but you have to be intentional about it.

[00:56:39] Paul Roetzer: So, yeah, I definitely put this under the bucket of things I'm very optimistic about, if you approach it that way. But otherwise, you will totally just fill the time. And you and I feel it: I could work seven days a week, 24 hours a day, and I wouldn't get to all the ideas that are in my head right now, right?

[00:56:58] There are so many things I [00:57:00] want to use AI to do, and so I force myself not to do that, to step back and, you know, take care of myself each day. Do something from a health perspective, a workout, a walk, in your case play pickleball. Gotta do those things. And I force myself, I don't wanna say force, to do the things I enjoy most, like taking my kids to school, picking my kids up from school, being with my family at night, taking trips together.

[00:57:21] I'm very, very intentional about not filling my time, to allow for those other things. And I do work less now than I used to. I work a lot. But if I went back and looked at what I was doing in, like, 2015, '16, '17, when I was running the agency and trying to start the institute, I was a hundred percent working way more hours than I work now. Nights, weekends, everything.

[00:57:43] And I would say I've done a fair job of giving myself time back, but I don't know that, as an organization, we've fully embraced it and operationalized it. And I think there's an opportunity to do that.

[00:57:55] Cathy McPhillips: Yep. Absolutely. I'm all for that.

[00:57:57] Paul Roetzer: Yeah, let's

[00:57:57] Cathy McPhillips: figure it out. [00:58:00] All right.

[00:58:00] Paul Roetzer: That's the plan for Thursday.

[00:58:02] Alright. Good stuff. Good questions, always. I mean, these questions get better every time we do these things. It's amazing to me how we keep doing the same classes, variations of the same Intro and Scaling, and the questions just keep getting better. It's incredible. So thank you to everybody who attends these classes and asks the questions.

[00:58:20] And YouTube comments, apparently. We're even getting good stuff there now. That's nice to know. Alright, well, good stuff.

[00:58:25] Cathy McPhillips: Yeah, we'll see you next Thursday for Scaling AI's AI Answers.

[00:58:29] Paul Roetzer: Oh, we have another one we can

[00:58:30] Cathy McPhillips: squeeze in next week.

[00:58:32] Paul Roetzer: Okay, so two more episodes. Next we'll have our regular weekly episode, 205, and then I guess we'll have another AI Answers in 206.

[00:58:39] Alright, thanks Cathy. Thanks everyone for joining us. Thanks for listening to AI Answers. To keep learning, visit SmarterX.ai, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey. And if you've got a question for a future episode, we'd love to hear it.

[00:58:59] That's it for now. [00:59:00] Continue exploring and keep asking great questions about AI.

Recent Posts

[The AI Show Episode 204]: AI Answers - What Should Stay Human, AI Pricing vs. Labor Cost, Leapfrogging Digitalisation, Getting Legal On Board & Do Reasoning Models Actually Reason?

Claire Prudhomme | March 19, 2026

Paul and Cathy answer 16 AI questions on career pivots, the death of billable hours, human creativity, agent swarms, and labor replacement pricing.
