45 Min Read

[The AI Show Episode 185]: AI Answers - Getting Started with AI, Core AI Concepts, In-Demand AI Jobs, Data Cleanliness & AI Fact-Checking


Get access to all AI courses with an AI Mastery Membership.

What happens when AI stops being a tool and starts reshaping every task inside your company? 

In this AI Answers episode, Paul Roetzer and Cathy McPhillips go through audience questions on where AI jobs are really heading, how agents and “AI ops” are emerging, and what to expect as reasoning models accelerate into 2026.

Listen or watch below, and find the show notes and full transcript further down.

Listen Now

Watch the Video

 

What Is AI Answers?

Over the last few years, our free Intro to AI and Scaling AI classes have welcomed more than 40,000 professionals, sparking hundreds of real-world, tough, and practical questions from marketers, leaders, and learners alike.

AI Answers is a biweekly bonus series that curates and answers real questions from attendees of our live events. Each episode focuses on the key concerns, challenges, and curiosities facing professionals and teams trying to understand and apply AI in their organizations.

In this episode, we address 14 of the top questions from our December 3rd Intro to AI class, covering everything from tooling decisions to team training to long-term strategy. Paul answers each question in real time, unscripted and unfiltered, just like we do live.

Whether you're just getting started or scaling fast, these are answers that can benefit you and your team.

Timestamps

00:00:00 — Intro

00:03:55 — What AI positions are in demand for professionals who are not coders? How can skill sets be presented to hiring managers?

00:08:49 — What are the top AI concepts that organizational communicators need to know?

00:10:56 — What should I focus on in AI?

00:13:21 — What do you think would be a good area to focus on as someone trying to break into the AI industry?

00:16:15 — Would you recommend prioritizing 'Generative' use cases or 'Predictive' use cases to achieve the quickest win?

00:18:45 — What’s the most innovative way to get started? Do we need a certain level of data hygiene first, or can AI help clean and organize the data as we go?

00:23:55 — Can you talk about what to be aware of and best practices for sourcing use cases?

00:28:25 — What is the best way to introduce AI tools to a technical/industrial workforce without causing 'replacement fear'? 

00:30:47 — What would you say to people who are trying to move beyond the mechanical use of AI and actually trust the technology enough to use it in meaningful ways?

00:34:20 — How do you see AI-driven search tools impacting traditional search engines?

00:36:43 — As generative AI matures, what’s the next significant shift?

00:41:04 — Do companies understand AI well enough before reducing their human workforce?

00:45:51 — What are the main factors that could slow down the advancements of AI?

00:49:11 — As AI systems move toward recursive self-improvement, what guardrails are needed to ensure they aren’t learning from a distorted or incomplete view of the world?

Links Mentioned

This episode is brought to you by Google Cloud: 

Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow. Google Cloud offers a powerful, fully integrated and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner.

Learn more about Google Cloud here: https://cloud.google.com/  

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: Like, we have to accelerate growth and innovation as a society, as an economy. This isn't an option, and anyone who tells you this isn't going to lead to workforce reduction, that is not true. Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show. I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute.

[00:00:24] Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI. But we never have enough time to get to all of them. So we created the AI Answers series to address more of these questions and share real-time insights into the topics and challenges professionals like you are facing.

[00:00:46] Whether you're just starting your AI journey or already putting it to work in your organization, these are the practical insights, use cases, and strategies you need to grow smarter. Let's explore AI together.[00:01:00] 

[00:01:03] Welcome to episode 185 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Cathy McPhillips, Chief Marketing Officer at SmarterX. This is our 10th episode in our AI Answers series presented by Google Cloud. This is our series based on questions from our monthly Intro to AI and Scaling AI classes, along with some of our virtual events.

[00:01:25] So if you're not familiar with those, Cathy and I together do a class each month, a free class, Intro to AI, and then we do a Scaling AI class. The intro one has been going on since fall of 2021; we actually just did the 53rd one. So the questions we're gonna go through today are extracted from our audience from that 53rd edition of Intro to AI, and then Scaling AI.

[00:01:48] Cathy, we are on 13? 14?

[00:01:51] Cathy McPhillips: we're doing number 13 on Friday. 

[00:01:53] Paul Roetzer: 13 on Friday. So this is dropping on December 11th. If you're listening to it on December 11th, [00:02:00] join us on Friday, December 12th, if I'm getting my dates right, for Scaling AI 13. So again, Intro to AI is like the fundamentals everyone needs to know: every knowledge worker, every leader.

[00:02:11] Scaling AI is sort of the five steps that leaders should be taking to scale AI within their organizations. So we do this AI Answers series, as well as those intro classes and the Scaling AI classes, as part of our partnership with Google Cloud. They're the presenting sponsor for those, as part of our overall AI Literacy Project; we're trying to accelerate AI literacy.

We have this incredible partnership with the Google Cloud marketing team. For more than a year now, we've been teaming up on the AI Answers podcast series, as well as Intro and Scaling, and a series of AI Blueprints that we're gonna be launching here in the next month or so. And then we have this incredible Marketing AI Industry Council that you're gonna be hearing a lot more about.

[00:02:49] We have our first report coming out of that council, also probably in the next month or so here. So you can learn more about Google Cloud at cloud.google.com. [00:03:00] And also check out Google Workspace; we talked about Workspace Studio on episode 184 this week. So if you listen to the weekly episode, we talked about Google Workspace, where you can go in and build your own agents within Workspace.

[00:03:13] It's cool; it's working for me now. I said on the podcast it wasn't working, but it's working, and I'm really excited about that as well. So, okay. With that, I'll let Cathy kind of go over how this is gonna work and where these questions come from. But again, Tuesdays are our weekly episodes, so if you're, you know, a weekly listener, that is still going on; this is a bonus episode this week tied to our AI Answers series.

[00:03:36] Cathy McPhillips: Great. I'll keep this short. We have some really, really good questions, so I wanna jump right into them. But essentially, we take your questions from the show that weren't answered, and even some that were answered that are really good, and we turn them into a podcast episode. So this is our 10th time doing this, and we should have one next week after our Scaling AI class on Friday.

[00:03:53] So, Paul, I'm jumping right in.

[00:03:54] Paul Roetzer: Okay, let's do it.

[00:03:55] Question #1

[00:03:55] Cathy McPhillips: All right. Number one: what AI positions are in demand for professionals who [00:04:00] are not coders? In particular, this person has worked as a business analyst and holds certifications from Google, Anthropic, and others. How can skill sets be presented to hiring managers?

[00:04:09] Paul Roetzer: This is an interesting one. You know, we're definitely starting to see roles emerge. I've mentioned numerous times we're in the process of kind of building out our organizational design at SmarterX and starting to think about what those roles are going to be. In some cases they are the roles we've always had, but with AI capabilities baked into them.

[00:04:28] So we're still hiring customer success managers. We're hiring account executives in sales, but they have to be AI-forward. Like, there's no question they have to be able to work with AI assistants. They have to be able to build custom GPTs and Google Gems. We need that from the ground up.

[00:04:45] And so that's how we use our own AI Academy: to train our own people. And then we basically, you know, enable other people to do the same thing. So I think first and foremost, it is a layering of capabilities onto existing roles. Sometimes you will [00:05:00] also see AI then dropped into the title. The first domino I saw happening, and this goes back to late last year and certainly into early this year, is more of an AI ops role.

[00:05:10] So it's someone who understands business cases, understands workflows, can work across departments, oftentimes marketing, sales, customer service, and can help them identify ways to build smarter processes and infuse AI technology. So they have this innate capability of understanding what AI is capable of, and then understanding business cases and workflows and looking for ways to infuse AI to drive efficiency, productivity, and performance.

[00:05:37] So AI ops is definitely one. We're starting to look at things like bringing in people who can work with AI to create outputs, but then provide the human verification and enhancement to it. So research is an area in particular where I'm kind of very bullish, where you can use products like Google Deep Research and you can create five [00:06:00] reports a day if you want, and they can be 40 pages long.

[00:06:02] You can do all this work, but a human who knows what they're doing, who is a researcher by trade, or a content creator by trade, or a journalist by trade, can go in and actually figure out: is this any good? Are the citations legitimate? Things like that. So we're starting to look at layers like that. You know, I think there's gonna be people who oversee AI agents.

[00:06:20] Like, there's gonna be roles that are literally just: hey, we've got 10 AI agents working in the sales team, someone needs to orchestrate those. So an AI orchestration manager or something like that. And so again, I see a lot of theories of what these roles are gonna be. I don't see a lot of titles yet.

[00:06:36] Right. And I think part of it is just like it's hard to put a job out there that no one really understands when people are searching for specific things they've done in their career. Like I'm trying to find roles in sales or marketing or customer success. I think more and more it's probably gonna be just taking existing roles and infusing AI requirements into them versus changing titles to have AI in them.

[00:06:59] That's, that's [00:07:00] my current kind of best guess of what happens. Sure. 

[00:07:03] Cathy McPhillips: I listened to that Lenny's podcast on the way home. Yeah. Awesome. On the drive in today, it's like, oh my gosh, I need to be sitting at my desk taking a million notes. 

[00:07:11] Paul Roetzer: Yeah. 

[00:07:12] Cathy McPhillips: It was so valuable. Yeah. 

[00:07:13] Paul Roetzer: And what Cathy's referring to, Jean Dewitt, is that the lady's name?

[00:07:17] So, amazing. She came from Stripe, and Lenny's Podcast is the name of the podcast. I think I touched on it on the episode this week, but basically she's a go-to-market expert and it's just, like, amazing. And she tells this story about how they had a single engineer who, in six weeks, took their SDR team from 10 to one

[00:07:37] by just orchestrating AI agents into the workflows and the motions that are going on within the go-to-market team. And I think, I just think that's what's gonna happen. So that's where you've got this AI ops person who, in that case, was an engineer who just comes in, like, I can build stuff, right?

[00:07:52] And can analyze workflows. So I do think AI will be infused into titles; I just, at this point, think more and more [00:08:00] it's gonna be a requirement of whatever your title is. And so if you're someone who's going out and building these capabilities, getting these certifications, and you're in an organization that doesn't already recognize the value of that,

[00:08:11] I think you just have to figure out a way to demonstrate that through business cases, through forecasting: like, hey, based on what I've learned with these certificates, here's how I think we could be making improvements to workflows, efficiencies, productivity, things like that. So you have to be proactive in

[00:08:25] connecting the dots for your leaders on what the value of those is. And if they don't appreciate it, there's gonna be a market for that talent.

[00:08:33] Cathy McPhillips: sure. 

[00:08:34] Cathy McPhillips: I think one of the things I liked best about that is, like, when she said we went from 10 to one, I kind of did that gasp of like, oh gosh.

[00:08:40] And then she said, and now they're all doing outbound. I'm like, oh good, they're doing other things. They're doing things they want to be doing. More value to the company.

[00:08:48] Paul Roetzer: Yep. 

[00:08:49] Question #2

[00:08:49] Cathy McPhillips: Okay. Number two. I'm working to educate professional communicators, including marketing and PR through resources and training.

[00:08:55] What are the top concepts that organizational communicators need to know? 

[00:09:00] Paul Roetzer: So again, I kind of think about this one like I was referring to in the first one, where it's just layering that AI competency and comprehension onto what you already do. So I think you need just the fundamentals. Like,

[00:09:14] when I created my AI Foundations collection for our AI Academy, when we relaunched AI Academy this summer and fall, AI Foundations was the one where I was like: if you don't take anything else, take this. We have nine certificate courses in our academy right now, and it keeps going up each month, but Foundations was fundamentals, piloting, and scaling.

[00:09:34] My feeling was, regardless of your role, regardless of how many years of experience you have, if you just take the AI Fundamentals course series and the Piloting course series, and then ideally Scaling as well, but just those two to start, and then you layer that over what you do, that's enough. That is so far ahead of most professionals at this point.

[00:09:55] So whether you're in communications, marketing, PR, sales, customer service, [00:10:00] management, whatever it is: if you just have a really, really strong foundational base of understanding what AI is, what it's capable of, how to look at business problems differently, how to identify use cases, how to help your coworkers identify use cases, that is gonna put you in the top 1%, basically, in your field at this point.

[00:10:19] That'll change in the, you know, one to two years ahead, when everyone else kind of has to figure this stuff out. But if you're there and you're in the communications field, I can promise you, like, a minimum top 5% in your field, but probably top 1% based on what we've seen.

[00:10:33] Cathy McPhillips: Yeah, for sure. And I think that's just, you know, a year ago, two years ago, it was that saying, you know: you won't be replaced by AI, you'll be replaced by marketers who know AI.

[00:10:43] And that's kind of evolved and things are gonna keep evolving. So I always implore people, stay up on your training, keep listening to the podcast as we know more and learn more and share more, we can kind of, you know, speed up some of these things for you. Yep. Yep. 

[00:10:56] Question #3

[00:10:56] Cathy McPhillips: Number three. As someone who isn't in the knowledge work field, I don't have any tasks in my job that I can apply AI to.

[00:11:02] I'm taking courses and learning as much as I can, but I'm not sure what I should focus on in AI.

[00:11:07] Paul Roetzer: So, I'm not sure what this person does. The knowledge work field is quite broad. The way I explain knowledge work is: if you use a computer to do your job, you are in knowledge work. So if you think or create for a living, if you apply reasoning to problem solving, you are a knowledge worker. The non-knowledge-work audience would be more of the labor crowd.

But in the United States, I mean, it's like roughly a hundred million out of 136 million full-time jobs that are considered knowledge work. Now, that being said, even if you are in the trades, for example, you can still be using this in your personal life. You can still be applying it to coaching your kids through school.

[00:11:46] You can be doing it for things like travel, your own financial planning, things like that. Again, anywhere you're thinking or creating in your personal life, you can do it. But again, even in the trades, I still [00:12:00] see it. Like, I have family members; I have a family member who runs a laborers' union.

[00:12:04] And so, like, I think about this all the time. And so I think even then, you may have people who are using their hands every day, like that's what they're doing. Or maybe it's firemen, policemen, things like that. But we all have administrative tasks that we have to do as part of that as well.

[00:12:21] And so one, I would say zoom out and say maybe you actually do; part of what you're doing is knowledge work in a way. Or you can find ways to improve the things your organization is doing by raising your hand and saying, hey, I've been learning about this AI stuff. Like, I get that I'm out in the field all day,

or I'm on the manufacturing line all day, or whatever it is you're doing, but have we thought about trying these things? And, like, you could go and just experiment and maybe develop a plan, or come up with a way to do something better on the manufacturing line. Again, I think they're so universally valuable, and if nothing else, try it in your personal life.

Get really good at it. I love that this person is taking [00:13:00] courses and learning. I don't know, I mean, maybe at some point there's a career field shift too. Maybe the future is in knowledge work because you see the potential of these things, and maybe in your industry there's a smarter way to do something and you gotta step into the knowledge work world to do it.

[00:13:14] I don't know. It is a really good question, though. Actually, I've never gotten that question before, so I really like that one.

[00:13:21] Question #4

[00:13:21] Cathy McPhillips: Yep. Number four: I'm ready to leave my current job. I've considered building agents, governing agents, and developing GPTs. But given that I don't know much beyond the industry, I'm not sure how I can help.

[00:13:31] What do you think would be a good area to focus on as someone trying to break into the AI industry? 

[00:13:38] Paul Roetzer: So, if you're gonna leave your current job, I don't know if that means to go start your own thing, you know, start building some agents and GPTs to start helping other people, more in like a consulting role.

So, you know, again, I think like anything in life. I'm working with my son and one of his friends right now on their pitch competition. In seventh grade they do a pitch competition, and [00:14:00] I built this startup buddy to, like, help. I've been advising this seventh grade class for like five years now.

[00:14:04] So I've done this before with my daughter, and now my son. And so in the last week, I've been just sitting there talking with him about problem identification. Like, if you're gonna build anything, any idea, whether it's a startup business you wanna go run or you want to pitch yourself to another company to move into a role,

[00:14:20] Anything where you want to create that demand for something, you have to figure out, okay, is there a market for the thing I'm good at? What is the problem I'm solving? Is the thing I'm bringing to the table better than what's out there? And so I think I would almost look at it like you are making a pitch deck for the knowledge and capabilities you have, and whether that's starting your own business or moving into a company where you're gonna bring those capabilities in.

[00:14:43] Think about that. What is the problem I'm solving by having these capabilities? Is there a need for it? What is the market potential of this? Can I increase the value of this company? Can I drive more leads to this company? So, I don't know, maybe I'd put myself in that entrepreneurial framework of, like, what are the [00:15:00] elements of a good pitch deck?

[00:15:01] Like, think about applying those skills and putting yourself into that mode. And then again, whether you're jumping to another organization, another corporate job or the nonprofit field, or you're trying to build something like a consulting business: think about problem, solution, market potential, competition, and then how you're gonna kind of differentiate and get out there.

[00:15:21] Cathy McPhillips: Yeah, I did that when we were looking at, you know, rebranding SmarterX back in the spring. I had to go through that whole idea, maybe like with my agency days and doing all of that, but laying it out in that format was super valuable for me to figure out where the holes were and where our competitive advantage was and everything.

[00:15:37] Paul Roetzer: Yep. 

[00:15:38] Cathy McPhillips: Super helpful. 

[00:15:39] Paul Roetzer: Yeah, and talk to AI about it. I mean, that's such a good question. The context that's missing for us to answer this one fully is: what is the career path you want? So if you take this and say, hey, here's the knowledge I have from the courses I've gone through, here's the capabilities I have in AI,

[00:15:54] I'd love to be in the healthcare field, I wanna spend my time doing X, Y, and Z. Help [00:16:00] me think about what that next career path move might be and how I can do it. And just work with, you know, a Google Gemini or a ChatGPT and play that out. Have that kind of career advisor; build a career advisor GPT, you know, that kind of thing.

[00:16:15] Question #5

[00:16:15] Cathy McPhillips: Number five. I lead aftermarket sales in the industrial manufacturing space. We have extensive legacy data, install base, and service logs, but it's disorganized. For a company just starting, would you recommend prioritizing generative use cases, such as writing emails and content, or predictive use cases, such as forecasting churn, spare parts needs, et cetera, to achieve the quickest win?

[00:16:36] Paul Roetzer: Yeah. I'm guessing, based on just the format of this question, this person already knows the answer to this, which is the generative AI use cases. That is the most obvious thing. If you're not doing that yet, you can be starting this afternoon, and if you have this knowledge, you can be, like, you know, getting permission to go meet with the marketing team, the sales team.

Maybe you're running like a half-day workshop and you're helping people just find ways to use Google Gemini for [00:17:00] 20 bucks a month, to infuse it into what they're doing, and you get those immediate efficiency and productivity gains. If your data is messy, then unless you're a data analyst or, like, you know, you work in IT or computer science or something, you're likely not solving that without other expertise.

And a lot of pain. I mean, we obviously are a more advanced organization than most, and yet we still battle ourselves with data. It's just never as clean as you want it to be, never as organized; it's never in the right places. And so to get into more of the predictive side, that maybe requires more of that advanced data analysis and data cleansing and things like that; it's just gonna take longer, and it's not gonna be a straight line.

So I would recommend you start moving there. You start figuring out who are the stakeholders that need to be involved, what are all the data sources, who's gonna own this project. And you create this long-term [00:18:00] vision, which could be like a one-to-three-year vision when I'm saying long-term: hey, we'd love to be in this place where our data is like this. This is what we did at SmarterX.

Like, this is our ideal user story for the marketing team: we want them to have this data so they can make these decisions, so they can move faster, da, da, da. Same in sales: we want them to have this data. Now, what does it take to get there? But with the generative AI phase, it's literally just go get Google Gemini and turn it on, and then teach people like one, two, three use cases that address a good percentage of what they do every month.

And you get these immediate gains with very little training. It's just: teach them how to talk to the AI assistant. So pursue both paths, but generative AI use cases are definitely the much faster path to immediate value.

[00:18:45] Question #6

[00:18:45] Cathy McPhillips: Yep. I kind of segues into number six. we're sitting on decades of historical service data that isn't perfectly clean, but we want to use AI to unlock value from it.

What is the most innovative way to get started? Do we need a certain level of data hygiene first, or can AI help [00:19:00] clean and organize this data as we go? And I'm thinking about, you know, our knowledge base, our chatbot, things that we're setting up right now with some of that historical data.

[00:19:09] Paul Roetzer: A again, it can definitely help but. As with any advanced use case of ai, you have to have expertise in the domain with which you're working with it. So if you are not a data scientist and you don't know what good data looks like, you can talk to the AI all day and it may deliver a perfect outcome or output, and you aren't the right person to judge that.

Because in data, especially if you're gonna be using it for important business decisions or motions within the organization, in marketing and sales and service and operations and product, a 5% margin of error is a really big margin of error. And if you don't know how to identify that margin of error, you wouldn't know it

if you saw it. Then you're not the right person to lead this. So you've identified a major issue, which is: we have this historical [00:20:00] data, and it could be really valuable to us. You could give that data to Google Gemini and say, I don't know much about this. I'm not sure how to organize this and clean it up.

Could you help me? And Gemini will probably say: absolutely, let's go. Give me access to the Google Sheet, or upload the CSV file. And it may do things that look super impressive. And you may think, in two hours I was able to do the work of a data scientist that would cost $200,000.

But you may make a mistake somewhere in there, or Gemini could make a mistake or have a hallucination, and that could make the entire thing worthless. And so this is one of those where I always advise: even though AI is capable of these things, bring in the experts. As another example, a parallel path: I do this with legal stuff now, and HR stuff, and finance.

I would not consider myself an expert in any of those things. I've been an entrepreneur for 19 years, 20 years, whatever. I've been running companies. I've done all of it, but I'm not an expert in that stuff. I pay [00:21:00] senior advisors to be the experts in those things. Yet I will do a lot of the legwork.

Now, if I'm doing a legal discussion, if I'm talking to my attorney, I will go in and have that conversation first with Google Gemini. I will arrive at what the brief looks like, or what the letter I need to send looks like, or what the application form needs to look like. And then I will bring in my attorney and say: okay, here's what I've created so far, but you're the expert.

[00:21:23] So I think that's what we have to do is like, we know we do now have these capabilities to do things we couldn't have done before, that we aren't experts in, but we also have to accept the fact we're not going to know if the output isn't right. And so that's, you know, again, kind of a parallel path to think about.
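
Paul's margin-of-error point can be made concrete. As a hypothetical sketch (the column names, sample values, and checks below are invented for illustration, not from the episode), a few lines of pandas can quantify how dirty a legacy service-log extract actually is before anyone trusts an AI-generated cleanup:

```python
import pandas as pd

# Invented sample standing in for "decades of historical service data":
# a duplicate row, a negative cost, a missing cost, and a bad date.
df = pd.DataFrame({
    "ticket_id": [101, 102, 102, 103, 104],
    "part_cost": [49.0, 55.0, 55.0, -12.0, None],
    "closed_on": ["2021-03-04", "2021-03-05", "2021-03-05",
                  "not recorded", "2021-04-01"],
})

def hygiene_report(frame: pd.DataFrame) -> dict:
    """Count basic data-quality issues and the share of affected rows."""
    dates = pd.to_datetime(frame["closed_on"], errors="coerce")
    dup = frame.duplicated()                  # repeated records
    missing = frame["part_cost"].isna()       # no cost recorded
    negative = frame["part_cost"] < 0         # impossible cost
    bad_date = dates.isna()                   # unparseable close date
    return {
        "duplicate_rows": int(dup.sum()),
        "missing_cost": int(missing.sum()),
        "negative_cost": int(negative.sum()),
        "unparseable_dates": int(bad_date.sum()),
        "rows_with_any_issue_pct": round(
            100 * (dup | missing | negative | bad_date).mean(), 1
        ),
    }

print(hygiene_report(df))
```

In this invented sample, 60% of rows have at least one issue, which is exactly the kind of number a domain expert needs to see before deciding whether an AI-cleaned version of the data can be trusted for real decisions.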

[00:21:38] Cathy McPhillips: So, speaking of that, I was just getting ready for some 2026 speaking engagements for you, and I had your contract. So I took the 2025 ones and put 'em in the 2026 folder for the templates, and I was like, I wonder if these need an update. So AI and I went through your speaking contracts, and it gave me all these recommendations, you know, based on the industry, based on AI today, all of these [00:22:00] things: what are some recommendations that I can take to the legal team to say, do these make sense?

Yep. From the output, I looked at 'em and I said: well, this doesn't make sense for us, this doesn't make sense for us, this one does. And I sent them to Tracy and to Ashley and said, what do you think? They added a few things, and we sent it off to the legal team. Yep. So what time did that save? What things did the legal team not think about, because they're not in that speaking world necessarily?

[00:22:23] So it was a whole process and it was super helpful. So we'll see what they come back with. 

[00:22:27] Paul Roetzer: Yeah, and I'll give another just real life example. So literally this morning I dropped my kids off from school. I've got about a 10 minute drive back and I have a, a Tesla that has Grok baked into it. So you just literally hit the button on the wheel and you can talk to Grok.

And so every once in a while I'll experiment with Grok just to see how it's doing. And I was thinking about gross margins. I was actually listening to a podcast, an amazing podcast with Gavin Baker, and they were talking about gross margins of different businesses. And I was like, oh, you know what?

We're largely an e-learning business moving forward; events and e-learning are kind of the two main things we do. And I was like, I wonder if our accounting is [00:23:00] actually set up properly from a gross margin perspective to factor in the cost of, like, course production and the learning management system.

Like, I wonder if it's being categorized correctly in our accounting. And I happen to be having lunch with my accountant tomorrow, so I was like, oh, let me have this conversation real quick with Grok. And then I will say: hey, I think we might not be properly categorizing things to know our true gross margin. But I don't know the actual answer.

[00:23:23] I would not go into QuickBooks and make the change myself, but now I've had this five minute conversation where I have information I can take to the expert and say, what do you think? Should we recategorize things? And so this is what becomes possible when you understand what AI's capable of.

[00:23:36] And you also understand it's not the end all, like you still need the expertise. 
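The accounting question Paul is raising can be sketched in a few lines of arithmetic. Every number below is hypothetical and purely illustrative (not SmarterX's financials); the point is just that moving delivery costs like course production and LMS fees into cost of goods sold changes the gross margin you report:

```python
# Illustrative only: hypothetical numbers showing how cost categorization
# changes reported gross margin for an e-learning business.

def gross_margin(revenue: float, cogs: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cogs) / revenue

revenue = 1_000_000  # annual revenue (hypothetical)

# If course production and the LMS are booked as operating expenses,
# they never hit cost of goods sold:
cogs_opex_view = 150_000

# If they're instead treated as costs of delivering the product,
# COGS rises and gross margin falls:
course_production = 120_000
lms_fees = 30_000
cogs_cogs_view = cogs_opex_view + course_production + lms_fees

print(f"Margin, delivery costs in opex: {gross_margin(revenue, cogs_opex_view):.0%}")
print(f"Margin, delivery costs in COGS: {gross_margin(revenue, cogs_cogs_view):.0%}")
```

Same business, same cash, two very different "gross margins" on paper, which is exactly the kind of discrepancy worth flagging to an accountant rather than fixing yourself.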

[00:23:41] Cathy McPhillips: Absolutely. 10 minutes. Just turn on some songs, Paul. 

[00:23:45] Paul Roetzer: Every once in a while I give myself the, the, the luxury of listening to music. But most of the time I cram in as much knowledge gain as I can. 

[00:23:53] I will, I know. Okay,

[00:23:55] Question #7

[00:23:56] Cathy McPhillips: Number seven. I love this question. We adopted a problem-based model for AI to build [00:24:00] our AI roadmap and are looking to implement a use case approach with our team. We are currently onboarding an enterprise AI tool for the marketing department. Can you talk about what to be aware of and best practices for sourcing use cases?

[00:24:13] Paul Roetzer: So problem-based and use case are two frameworks we teach through SmarterX. They're in our 2022 book on marketing artificial intelligence, so you can go read the book. If you're an AI Academy member, you can take courses on these; they're in the Piloting AI course series. So we can include some links to these.

[00:24:31] I also teach these in the free Intro to AI class we mentioned at the beginning, so I go through these basic frameworks there. But in essence, in the problem-based model, you're looking at existing challenges, goals you're falling short of, known pain points in your organization, and saying, is there a smarter way to solve these?

[00:24:49] And you can do this with anything. And we'll put a link in: we have ProblemsGPT, which will actually help you brainstorm these things and then develop briefs. It's a free custom GPT I built that's just available [00:25:00] in ChatGPT. And then the use case one: what we often do is take more of a role-based approach.

[00:25:07] So you think about jobs. JobsGPT is another custom GPT you can use; we'll put the link in the show notes. And so what we'll do is say, okay, you've got the enterprise AI tool for your marketing team. Get 'em together in a workshop format. You explain how to analyze their jobs. Look at the workflows they perform each day.

[00:25:25] Think about the tasks that go into those jobs. Then you try and identify which ones would be most valuable to apply AI to, which ones can help us do more. And so that's the approach we take: we basically break jobs down into tasks and then we identify the tasks that are most valuable.

[00:25:42] So like the example Cathy just gave about doing analysis of contracts: no one on our team finds great joy and fulfillment in analyzing legal contracts for speaking engagements. That is a really good use case. Now, that is not something Cathy does every day. As she said, it's like a once [00:26:00] a year.

[00:26:00] We'll take a fresh look at these contracts. So she looks at it like, ah, I'm just gonna keep putting that off 'cause I really don't wanna do that. Oh, wait a second, I could probably save myself a few hours and just have Gemini analyze this for me. So that's a one-off thing. But you also may look at it and say, I'd love to be delivering an analytics report every Monday morning that looks at lead flow and conversions, projects customer lifetime value, and looks at churn.

[00:26:23] I'd love to have this data every week. Could I find a way to have AI write that report for me? And so you sit down and go through more of a workshop model where you take the time, the 30 minutes yourself or whatever, to think through and brainstorm with JobsGPT or however you want to do it. And again, in our course we actually offer a download that gives you a template to do these things.

[00:26:44] But then share the ideas with everybody else. That's why the workshop model's so good: solo think, then you do a team or table think, where you're bouncing ideas around, and then you share: hey, here's the three I'm super excited about. I think I could save, you know, 10 hours a week by just doing these [00:27:00] three.

[00:27:00] So that's how I would approach it: do it as role-based, and then tie it to tasks and workflows.
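The role-based exercise Paul describes can be sketched as a tiny task inventory: break a job into tasks, score each for time spent and how well AI could assist, and rank. The tasks, scores, and weighting heuristic below are hypothetical examples for illustration, not SmarterX's actual template:

```python
# A minimal sketch of the role-based exercise: break a job into tasks,
# score each one, and rank by potential value of applying AI.
# All tasks and scores here are made-up examples.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_per_week: float  # time the task currently takes
    ai_fit: int            # 1-5 judgment call: how well AI could assist

    @property
    def priority(self) -> float:
        # Simple heuristic: time spent, weighted by AI suitability.
        return self.hours_per_week * self.ai_fit

tasks = [
    Task("Analyze speaking contracts", 2.0, 5),
    Task("Weekly analytics report", 4.0, 4),
    Task("Client strategy calls", 6.0, 1),
]

# Rank tasks so the workshop discussion starts with the biggest opportunities.
for t in sorted(tasks, key=lambda t: t.priority, reverse=True):
    print(f"{t.name}: priority {t.priority:.0f}")
```

The output of a sheet like this is what gets shared in the "table think" step: each person brings their top-ranked tasks to the group.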

[00:27:06] Cathy McPhillips: I'm not sure if I heard this on our podcast or something else this week, but it was like, there's this fear people have of letting their colleagues know they might be using AI on something like that.

[00:27:17] Yeah. Did you talk about that? 

[00:27:18] Paul Roetzer: Yeah. Yeah. Yeah. 

[00:27:20] Cathy McPhillips: So it's just like, the more we talk about the ways we're using it, the more people will be willing to say, okay, this is okay. Correct. I'm still good at my job, I'm still smart, I'm still relevant. But it kind of takes away that fear, you know.

[00:27:35] Paul Roetzer: Yeah. Yeah. And the premise there is, within a lot of organizations, most people feel like they're completely behind and everyone else has figured this out. That is not the case. In many organizations, if not most, there are just a few people, like I say, on the marketing team, the sales team, the leadership team. And the same thing happens in schools.

[00:27:51] I've heard this from professors and teachers who don't want to admit they're using AI. They don't want other people to know. There's a negative stigma to this. And [00:28:00] so by doing it in this embracing workshop model, it's like, hey, we want you all to be using it. We actually need you all to be using it.

[00:28:08] Right? Let's learn from each other. Let's develop a center of excellence where we share best practices. But yes, in some organizations there is definitely still this negative perception of people who are using it, and you have to get through that, or your company is gonna become obsolete. Right? We don't have that choice anymore.

[00:28:25] Question #8

[00:28:25] Cathy McPhillips: Okay. Number eight. My team consists of technical experts and field engineers, not marketers. What is the best way to introduce AI tools to a technical industrial workforce without causing replacement fear? Maybe we can just take even that part. Yeah. How do we frame it as an augmentation of their technical expertise?

[00:28:42] Paul Roetzer: So I have the same answer. You could ask this question like a hundred different ways. It's such a good question; that's why I think it comes up in all these different ways. But it's basically asking the same thing: when people fear something, how do you get them to embrace it?

[00:28:57] And the way I always approach this is: find the [00:29:00] thing in their job they hate doing and build a GPT for them. Build a Gem for them. Show them a workflow where AI helps them do the thing they don't enjoy. So I go back to, I've shared this story before, about when my daughter was 10 and

DALL-E came out, image generation, and she hated it, 'cause she's an artist. Like, her mom's an artist, and she just saw it as this threat and couldn't stand AI, didn't even like the fact that I was working on AI. And so there was this extended period where I was trying to bring her along and explain, like, but it can help you in all these other ways.

[00:29:34] And so I eventually found this really great use case for her where the walls came down. It's like, oh, well, if you can help me with that, that'd be really cool. We started there, and you just find that entry point. Like, again, I mentioned education earlier: let's say teachers developing curriculum, or coming up with in-class exercises that engage kids who maybe generally just sit there and zone out. Like, [00:30:00] talk to it.

[00:30:01] Find ways to do that where you've tried everything. And so I think some of this can just be done in a survey: what part of your job don't you find fulfilling? And you just find the thing and then start there. Just start with one thing. Don't give 'em a Copilot license or a Gemini license or a ChatGPT license.

[00:30:17] And say, go figure it out. They're not gonna do it. They don't want to do it. It's a replacement to them. But if you say, hey, listen, we're giving you Google Gemini, here are three use cases we think you'll really enjoy as a manager who hates doing professional reviews, or writing job descriptions, or whatever it is, legal analysis of contracts you have to go through. Find that thing, and then just stack those, and eventually it's like, oh, okay.

[00:30:43] Like I get how I can really like amplify my capabilities with this. 

[00:30:47] Question #9

[00:30:47] Cathy McPhillips: Yep. This one's a little bit repetitive, but I think we can talk about trust a little bit from an individual standpoint. Part of me gets hung up on the fact that generative AI is just predicting the next token. I want to trust AI [00:31:00] as a real thought partner, but I can't quite get past the idea that it's all ones and zeros stitched together from existing data.

[00:31:05] What would you say to people like me who are trying to move beyond the mechanics and actually trust the technology enough to use it in meaningful ways?

[00:31:13] Paul Roetzer: Yeah, I dunno. Just for context, for people who don't understand the question, it's a very good question. The weird thing about Google Gemini and ChatGPT and Anthropic's Claude and all these models is, in essence, what they're doing: there was a breakthrough back in 2017 that invented something called the transformer.

[00:31:32] It came outta the Google Brain team. That transformer architecture became the basis for GPT, the generative pre-trained transformer, and the basic premise of that model, and what it enabled with the building of large language models, or LLMs, is that it, in essence, just predicts the next word. So you feed it all this human data from your website, Wikipedia, transcripts from YouTube videos, books.

[00:31:56] It just takes all this information in and it basically [00:32:00] learns how humans write. And then when you go in and give it a prompt, it in essence just sort of predicts what the most likely best next word is. That is, at its most fundamental level, how these things work. And that's weird. And the engineers, the AI researchers themselves, while they understand it better than the normal person, the normal business person, they don't truly comprehend why it works.
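The "predict the next word" idea can be shown with a toy that is far simpler than a real LLM: count which word follows which in a tiny corpus, then pick the most likely continuation. Real models do this over tokens with a neural network trained on vast data, not raw counts, but the core move (choose a likely next token given what came before) is the same:

```python
# A toy "next word predictor": bigram counts over a tiny corpus.
# Purely illustrative; real LLMs use learned neural representations.

from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word follows".split()

# For each word, count what follows it in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    candidates = following[word].most_common(1)
    return candidates[0][0] if candidates else "<unknown>"

print(predict_next("the"))   # "next" follows "the" most often here
print(predict_next("next"))  # "word" always follows "next" here
```

Generating text is just applying this repeatedly: predict a word, append it, predict again, which is also why the output can drift or confabulate when the statistics point somewhere wrong.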

[00:32:27] And so, to this listener's question, it's hard to get over that. But it's also like, I don't understand why the speed of light is a thing. Why is that the fundamental law that guides the universe, that we can't break? Why does the universe keep expanding? Why does gravity exist?

[00:32:45] There's just things that we know are true and we can't explain them, and yet I trust gravity every day. And so I think with AI models, we kind of have to get to that point where it's okay to have [00:33:00] skepticism, that they make mistakes. That's actually probably a really good thing.

[00:33:03] Cathy McPhillips: We need to have skepticism.

[00:33:04] Paul Roetzer: Yeah. That, that you're in tune to that. 

[00:33:06] Cathy McPhillips: Yeah, 

[00:33:07] Paul Roetzer: But we also have to accept the fact that, for some reason, the laws of physics allow these things that are basically made from grains of sand that form chips; the chips then get put in data centers, these data centers are given a bunch of data, and out come

[00:33:25] these things that can make all these crazy predictions. It's the weirdest thing when you actually step back and think about how this all works. I would just get to the point where you accept that, like a law of physics, a law of nature allows this to happen. We don't really know why, and they're not perfect.

[00:33:43] But if you just use them in that way as an ability to augment yourself and continue to have that skepticism and that little bit of doubt, and like, I gotta stay in the loop, you're just gonna be so far ahead of everybody else. And as of right now, all the scaling laws tell us that they're just gonna keep getting smarter.

[00:33:58] The hallucinations will keep going [00:34:00] down and we will have this alien technology everywhere. 

[00:34:05] Cathy McPhillips: Yeah, but I mean, what we've always said, you know: look at the output, question the output, ask more questions. Yeah. All of those things will help you get to that point.

[00:34:14] Paul Roetzer: Yeah. I love that question though, by the way.

[00:34:16] I do too. That's one of the better questions I've seen. It's really good.

[00:34:20] Question #10

[00:34:20] Cathy McPhillips: Number 10. There's been considerable debate about whether AI-driven search tools pose a real threat to traditional search engines. How do you see this playing out? Are Google and the major platforms actually at risk, or are they already adapting in ways that will keep them central to how we find information?

[00:34:36] Paul Roetzer: So six to 12 months ago, there was lots of doubt here, and I don't know that we actually have a ton of answers yet as to where this goes. It is one of those big open questions. But in recent months, the data is showing that Google is not being impacted dramatically by this, like their traditional search model.

[00:34:55] They obviously continue to infuse AI Mode. So right now it's like a secondary [00:35:00] window: if you go into a search, you can click over to AI Mode or you can activate AI Mode. I think over time, Google Search just becomes AI Mode. They'll probably eventually sunset traditional Google Search in some way.

[00:35:12] But as of right now, Google's business is humming along, and as I mentioned on episode 184, part of the leverage Google has in this, and the potential to be the one that comes out furthest ahead in the end over OpenAI and others, is that they are a cash cow business. Their search and ads business just pumps out money, and they can reinvest that money into building bigger, better models and all this stuff, where OpenAI is

[00:35:40] completely reliant on funding and eventually an IPO, and they're just burning through tens of billions of dollars of cash where Google isn't. And so, I don't know, I think it's gonna keep evolving. I think marketers and content creators and brands have to stay in tune to this and figure out how it's gonna evolve, especially as [00:36:00] AI agents become more reliable.

[00:36:01] And it's actually, like, my AI agent that's coming to your website or your e-commerce site and buying things or gathering information, and not a human. There's just so many unknowns. And I mentioned at the beginning that we have this Marketing AI Industry Council we formed with Google Cloud, and this is one of the areas we've identified. There's like 15 open questions we basically have as the council.

[00:36:25] And we've focused on AI talent first, like the impacts on talent. But these are the kinds of things we're starting to ask, and we have a couple of people on the council already who are experts in this area, and we lean on them for this kind of guidance. Andy Crestodina and Wil Reynolds are two people that come to mind that we follow when it comes to these kinds of topics.

[00:36:43] Question #11

[00:36:43] Cathy McPhillips: Yep. Okay. Number 11, as generative AI matures, what's the next significant shift? Is it AI that can run directly on our devices, doing its thinking locally instead of relying on the cloud? Is there another evolution coming that we should be paying attention to? 

[00:36:57] Paul Roetzer: Man, it's so [00:37:00] funny. Go back a year ago, go back to last December.

And if you listen to the AI Answers podcast, these questions are so much more advanced than we would've been getting.

[00:37:08] Cathy McPhillips: And this was an intro class. 

[00:37:09] Paul Roetzer: Yeah. That's wild. Super smart questions. Okay, so what comes next? There's two main things I would watch for in 2026. Reasoning continues to get really, really good.

[00:37:23] That is the ability for the AI assistant to take time at the point of inference, which is what it's called when you and I use Google Gemini or ChatGPT or Anthropic's Claude. When we go in and we ask it a question, we give it a prompt to build an image, to create a video, to analyze data, to build a strategy doc,

[00:37:39] to write an email. Inference is the time when we ask it, and then how much time it takes to think about what is being asked. And so up until fall of 2024, we just had answer engines based on information retrieval, like things it learned in its training. So you ask a question, it [00:38:00] instantly responds. It's like, one second,

[00:38:01] and it starts going. Now, since late 2024, you'll see it thinking. It'll literally tell you in the AI system, like, thinking, thinking, thinking. And sometimes it'll show you the chain of thought of what it's thinking. But what they've found is the more time you give it to think, the better and more reliable the answer becomes.

[00:38:20] And so the models are getting really good at that. They're tying different tools to that thinking process, so it can go and run a search, it can use a calculator, it can write code behind the scenes, it can extract things from its context window or its memory to personalize the answer. So with all of that, these reasoning models are just going to get better.

[00:38:41] We've seen some really strong improvements in them, and we expect that to continue. And the other is the autonomy and the reliability of AI agents, not just in AI research, but starting to find their way into the functions of marketing, sales, service, operations, things like that. So reasoning and agents are [00:39:00] two things that I expect to be significant.
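One concrete way more "thinking time" buys reliability can be sketched with a self-consistency pattern: sample several independent attempts at a problem and take the majority answer. The "solver" below is a made-up stand-in for a model that is right most of the time; the numbers are hypothetical, and real reasoning models use richer techniques than plain voting:

```python
# A sketch of spending more inference-time compute for reliability:
# run several independent attempts and majority-vote the answers.
# noisy_solver is a purely illustrative stand-in for a model.

import random
from collections import Counter

def noisy_solver(question: str, rng: random.Random) -> int:
    # Right ~70% of the time, otherwise a random wrong guess.
    # The question text is ignored in this toy.
    correct = 42
    return correct if rng.random() < 0.7 else rng.randrange(100)

def answer(question: str, samples: int, seed: int = 0) -> int:
    """Self-consistency: sample `samples` attempts, return the majority."""
    rng = random.Random(seed)
    votes = Counter(noisy_solver(question, rng) for _ in range(samples))
    return votes.most_common(1)[0][0]

# A single sample is right ~70% of the time; a 15-sample vote is right
# far more often, because wrong guesses rarely agree with each other.
print(answer("life, the universe, everything", samples=15))
```

The trade is explicit: 15x the compute per question in exchange for a much lower error rate, which is the same shape of trade the labs are making with reasoning models.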

[00:39:05] To the question of on-device: this is actually a really, really important near-term question that's gonna have ripple effects throughout the economy, and specifically on Wall Street. I'll just frame this and then we'll move on. I'll come back to this; I'm probably gonna actually talk about it on episode 186,

yeah, next week. So Apple obviously has dropped the ball on artificial intelligence. I've mentioned many times, Siri is not good. They have just fumbled, many times, their efforts to try and catch up. I think what Apple's play is, is they're accepting the fact that they are not going to be a frontier lab building the biggest, best models.

[00:39:50] They're gonna be a distribution channel for those models through iPhones and iPads and Macs. And the bet they're making is that if you take the [00:40:00] most advanced models today, like a Gemini 3, it requires going off to the cloud. You have to go up to the cloud to get access to it because of the compute required.

[00:40:11] The bet Apple is making, I think, is that a Gemini 3, or even a Gemini 4, they will be able to compress that model and serve it up to you on device, probably within one to two years. And so you will have the previous generation's state-of-the-art model on your phone without having to connect to any cloud.

[00:40:33] And so I think that's Apple's bet: these models get somewhat commoditized, and they can serve up the best model to you on the device with complete privacy and low latency. So that would change things; that would change the equation of what is the value of a proprietary frontier model. But again, this is like an Intro to AI answer.

So I'll probably stop there, but listen to the weekly episode next week and I'll go into [00:41:00] a little bit more about this and refer you to a couple of sources where you can go learn more about it.
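Why compression matters for on-device AI comes down to rough arithmetic: a model's memory footprint is approximately its parameter count times the bytes stored per weight. The parameter counts below are hypothetical round numbers for illustration, not the actual sizes of any named model:

```python
# Back-of-the-envelope math on on-device model sizes.
# Footprint ~= parameter count * bits per weight / 8.
# All model sizes below are hypothetical round numbers.

def footprint_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage in decimal gigabytes."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 70B-parameter model at 16-bit weights needs ~140 GB: cloud territory.
print(f"70B @ 16-bit: {footprint_gb(70, 16):.0f} GB")

# Quantized to 4 bits, the same model drops to ~35 GB, and a smaller
# 8B model at 4 bits is ~4 GB, which starts to fit in a phone's memory.
print(f"70B @  4-bit: {footprint_gb(70, 4):.0f} GB")
print(f" 8B @  4-bit: {footprint_gb(8, 4):.0f} GB")
```

This is the compression bet in a nutshell: quantization and distillation shrink yesterday's frontier model until it fits in device memory, trading some quality for privacy and latency.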

[00:41:04] Question #12

[00:41:04] Cathy McPhillips: Great. Number 12, AI is already being introduced in major conglomerates. Do these companies understand AI well enough before reducing their human workforce?

[00:41:14] Paul Roetzer: So most of the time, no. I kind of projected this back in 2024, that this is what would happen: there would be pressure on private equity-owned companies, VC-funded companies, and public companies to drive efficiency gains from workforce reduction, because there was a belief AI meant you didn't need as many people.

[00:41:41] Now, I actually do believe that is true. I think when properly integrated into a business, you don't need as many humans doing the same amount of work, and many companies will choose to reduce the workforce as a result of that. So, AI: just to level set for [00:42:00] people, for whom maybe this is one of the first times you're listening to this kind of thing,

[00:42:03] I'll level set here on what I mean by this. AI is incapable of doing anyone's full job today. I don't care what the knowledge work field is: obviously doctors, but you could get into marketers, CEOs, SDRs, we talked about them at the beginning, HR reps. It cannot do anyone's full job.

[00:42:24] But it is increasingly good at doing tasks within that job. So take, let's say, an associate within a law firm: maybe they've got a law degree and three to five years' experience. Today, maybe the AI can do 20% of the current work of that person. So if you take the tasks and say you lock those in and they remain static, we're not adding any new capabilities to that person.

[00:42:51] Any new tasks aside, you just take the things they do each week, each month. Maybe it can do 20% of that. Now, if you take 10 of those [00:43:00] associates and AI is doing 20% of all of their work, you don't need 10 associates anymore. You need maybe eight, or you need seven. So that's what I mean: it can immediately reduce the workforce if your company isn't growing and if you're not creating new needs for those associates.
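The arithmetic Paul walks through can be made explicit: if AI absorbs a fixed share of each person's current tasks and the total amount of work stays flat, the headcount needed to cover the same work shrinks proportionally. This is a deliberately static model, exactly the "lock the tasks in" assumption he names, not a prediction for any real firm:

```python
# The static headcount math from the example above: if AI absorbs a
# share of everyone's tasks and total work stays flat, fewer people
# are needed to cover the same work.

import math

def headcount_needed(current_headcount: int, ai_share: float) -> int:
    """People needed if AI absorbs `ai_share` of everyone's current tasks."""
    remaining_work = current_headcount * (1 - ai_share)
    return math.ceil(remaining_work)  # can't employ fractional people

# 10 associates, AI handles 20% of everyone's work: 8 people cover it.
print(headcount_needed(10, 0.20))
```

The redistribution argument that follows is the other side of the same equation: if total work grows by at least the absorbed share, the headcount term never falls.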

[00:43:18] So ideally what you're doing is you're taking a bunch of stuff that isn't happening each month and you're redistributing that 20% and that associate is now doing these new and exciting things that weren't possible before. Not every company is gonna do that. Many companies won't do that. Many companies will take the easy route, the obvious route of they will reduce the workforce because they themselves don't understand the opportunity to drive growth and innovation by redistributing workforces into these things.

[00:43:45] And they're gonna be under tremendous pressure to just cut costs if they're not growing. And this is why I've said many times, it was the focus of my keynote at MAICON this year and the workshop I ran: our only path forward is innovation and [00:44:00] growth and entrepreneurship. If we don't create the need for more work and more jobs, we will see an onslaught of workforce reduction in the next 18 months.

[00:44:11] Really, like millions of jobs will be reduced if we don't accelerate growth. 

[00:44:17] Cathy McPhillips: Yeah. So last year at our AI for Agencies Summit, I was interviewing Amanda Todorovich of the Cleveland Clinic, and I asked her, you know: have you gotten to your wish list? Has AI helped you enough that you are now onto the things you've been trying to do for the past 10 years?

[00:44:31] And she kind of laughed, like, no.

[00:44:33] Paul Roetzer: Yeah. Yeah. 

[00:44:34] Cathy McPhillips: I would love to ask her that today. Yeah. Like how much have they advanced over the past 12 months that they are starting to get into some of those things? What are people able to do now that they have another year under their belt with their team?

[00:44:43] Yeah. 

[00:44:43] Paul Roetzer: Yeah. And again, I don't mean to be doomsday about this, but I'm gonna tell you point blank: I have met with the people who are being told to have a 10 to 20% reduction ready to go. They are basically sitting on call at any moment [00:45:00] to get the go-ahead from the board or the C-suite to cut 20% of their team.

[00:45:04] And this is not a single conversation, this is many conversations. I know for a fact from the people who are in charge of the budgets and the people, they are being told to be ready to do massive reductions of workforces. I also have knowledge of other reductions that are already in the pipeline that we will learn about in Q1, Q2 of next year.

[00:45:27] Massive reductions. So this is very, very real. We have to accelerate growth and innovation as a society, as an economy. This isn't an option. And anyone who tells you this isn't going to lead to workforce reduction, including some of the leaders of the current administration, that is not true.

[00:45:51] Question #13

[00:45:51] Cathy McPhillips: Okay. Number 13. What are the main factors that could slow down the advancement of AI? I think of factors such as government [00:46:00] regulation and societal revolt. What could delay the inevitable?

[00:46:04] Paul Roetzer: Yeah, so I actually wrote about this in my Exec AI newsletter this week. I actually outlined this.

[00:46:11] So if you get the Exec AI newsletter, you can go reference that,

[00:46:14] and if you don't get that, then

[00:46:18] go subscribe for it. But again, I actually don't read these questions in advance. I don't know what Cathy's gonna ask me. So as I'm answering this, I'm actually gonna pull up my newsletter.

[00:46:30] So here's the ones I identified for what slows AI progress down. A breakdown in the AI compute supply chain: obviously, AI is dependent upon NVIDIA and TSMC, which makes the chips. NVIDIA works with TSMC to create these chips and sell 'em. If there's a breakdown in that supply chain for any reason.

Lack of value created in the enterprises: we've heard murmurs of that. There was that MIT report that everybody loves to cite, that was completely not factual. But if the belief is that you can't create value with these [00:47:00] things, then people stop buying them and things slow down. IP lawsuits that could make the existing models illegal:

[00:47:06] I don't think that's gonna happen. Restrictive laws and regulations: that one is something we talk about a lot on the podcast for this reason. There's a lot of effort at the state level to put laws and regulations in place that hinder the acceleration of AI progress. I would also say the scaling laws not working, so at some point the reasoning models just stop getting smarter,

the post-training stops working. Again, I'm not in the labs, but everybody in those labs is saying that is not happening, and we don't see that happening anytime soon. And then one of the other ones I mentioned is this idea of a voluntary or involuntary halt to model advancements.

Actually, if I had to force-rank things that could happen, I think there is a chance that that one at least becomes part of the conversation: that Anthropic and OpenAI and Google see a major [00:48:00] breakthrough on the horizon, they've proved it out in experiments in labs, and they don't know how to control it.

[00:48:06] So if you listen to the podcast this week, we talk about recursive self-improvement. That's specifically what I'm referring to here: if they find that these things are able to actually improve themselves, and we are running the risk of a fast takeoff that we lose control of. And actually, we'll talk about this on Tuesday: there's a new foundation formed yesterday where OpenAI, Anthropic, and others are actually collaborating on an agent framework, like standards for agents.

[00:48:29] That, to me, tells me they're talking at high levels about really important topics, and I know one of the things being discussed is: at what point would we actually need to slow down? Now, the reason I don't think that happens is because if the US slows down, China won't. It'll be their chance to catch up.

[00:48:47] So yeah, those are some of the things that could be it. But again, if you go get the newsletter, it'll kind of walk you through those. And I also talk about those in the AI Timeline course, in the AI Fundamentals [00:49:00] course series that I mentioned.

[00:49:02] Cathy McPhillips: Great.

Last question. I always try to end on a happy note, but I'm sorry, we're not.

[00:49:07] Paul Roetzer: We'll come up with one more after this one. Yeah.

[00:49:09] Cathy McPhillips: I'll ask you about your holiday or something.

[00:49:11] Question #14

[00:49:11] Cathy McPhillips: Number 14. As AI systems move toward recursive self-improvement, which you just talked about. Yep. What guardrails are needed to ensure they aren't learning from distorted or incomplete views of the world, especially given today's concerns about censorship, rewritten history, and biased information sources?

[00:49:25] Paul Roetzer: I don't have the answer this person's probably hoping I do. So again, recursive self-improvement is the idea that these things start to learn to improve themselves without a human in the loop. Like, maybe a human lightly in the loop to start, but eventually it just constantly, 24/7, 365, is improving.

[00:49:43] So the way to think about this is: imagine the model comes out and it's basically like a teenager. We can equate this to the human world: humans are recursively self-improving. We observe the world, we go to classes, we [00:50:00] read things, we watch things, we learn, and we make better decisions.

[00:50:05] Now, we get guidance from our parents, our teachers, but we're improving machines, basically. The AI is very reliant on the AI researchers to do this improvement through things like post-training. It comes out, it's got these capabilities, got this intelligence, but it doesn't keep improving itself until there's a new model run.

[00:50:27] What they're basically promising is that these things will start to function much more like a human, where it just starts learning from everything. And the problem is, they become like PhD level, in theory, in days or weeks, and then they go beyond PhD level, and they become beyond anything we're even able to understand.

[00:50:51] And so that's the premise of recursive self-improvement: you create these models that can improve themselves through real-world understanding, through what it [00:51:00] sees in the world once you put computer vision tied to this, through the things it's learning from real-time news and things like that. And it starts to learn the way humans do.

[00:51:09] Guardrails? None that the labs don't put in place themselves. And each lab right now, because there is no regulation around this, makes its own decisions about what's right and what's wrong. So you're basically in a position where you're saying, I kind of trust Google would maybe have some guardrails in place, but would xAI put the same guardrails in place? Is an Elon Musk, who's racing to catch up and get ahead,

[00:51:37] gonna have the same self-control about slowing down recursive self-improvement as others? I don't know, maybe he would. And China: if they figured it out, or if there's an espionage thing and they get access to what's going on at one of these labs in the US, and they figure out how to go do it, are they gonna stop it?

[00:51:56] Probably not. Recursive self-improvement is maybe the [00:52:00] unlock that leads to superintelligence, and that's what everyone's racing towards. It's why they're raising all this money and building all these data centers; they need to justify that investment. So unfortunately, barring any federal regulation, which I do not see coming in the United States anytime soon, it's on each lab, and then trusting that those labs talk to each other and work together to solve this.

[00:52:24] Cathy McPhillips: Okay.

[00:52:26] Paul Roetzer: What am I most excited about for next year? That's a good one. So I'll end with my own question of myself. I think really smart people who want the best for society and for humanity are working on these things and thinking about these things all the time.

[00:52:46] And I have a sense of optimism that they will figure out the really hard things, and the rest of us are going to go through this golden age of innovation, creativity, [00:53:00] entrepreneurship, and reimagining careers and businesses. And I think it's really good to pay attention to these things, to talk about them, to ask questions about them, and to have debates about them.

[00:53:12] And I think we should be doing more of that, but we shouldn't let it cause this overbearing sense of fear and anxiety, because we get to live through maybe the most innovative phase in human history. We're gonna solve diseases, we're gonna go to other planets.

[00:53:33] We're gonna discover things scientifically in the next five to ten years that we would've just never discovered otherwise. It's gonna be an amazing time of innovation. There's gonna be hiccups, there's gonna be missteps, there's gonna be unfortunate events that occur. That's part of human history too.

[00:53:48] There's always gonna be that when progress is happening. But I generally choose to be optimistic about what's possible; otherwise I would just curl up in a ball and stop doing what we're doing. So I think that with enough [00:54:00] conversation, enough focus on a positive outcome, the net in the end is gonna be a really good thing for society.

[00:54:08] But we have to be honest with ourselves about the roadblocks and obstacles that are gonna happen along the way and the pain points we're gonna have to go through. Not talking about 'em doesn't help at all.

[00:54:20] Cathy McPhillips: Right. Okay. So I would just say to everyone listening, there are a couple of things you can do.

[00:54:26] On December 14th, we've got our Scaling AI class. We have our Intro to AI class that we're running January 15th, if you're listening to this and wanted to hear that whole presentation. Paul and Mike are out speaking all next year, and I can help you with that. There's a book, we've got a community, we've got our courses, we've got a lot of ways we can help you, big and small. So please stick with us, and we'd love to figure out how we can help your business grow next year.

[00:54:53] Paul Roetzer: Yeah, and academy.SmarterX.ai is the AI Academy site I [00:55:00] referred to numerous times throughout, just so people know where that's at. We'll put that in the show notes as well. All right, Cathy, thanks. And thanks to everybody for the awesome questions. How many people registered for that intro?

[00:55:09] We had over 2,000 people registered. So again, these questions come from that audience; we had dozens of questions we didn't get to in that class. We'll do the same thing with Scaling AI: whatever we don't get to answer in the time there,

[00:55:24] we'll do another special edition episode for.

[00:55:27] Cathy McPhillips: Okay. And thanks as always to Claire for helping us get this all put together. 

[00:55:31] Paul Roetzer: Absolutely. Alright, thanks everyone. Have a great week. 

[00:55:32] Cathy McPhillips: Thanks, everyone.

[00:55:33] Paul Roetzer: Thanks for listening to AI Answers. To keep learning, visit SmarterX.ai, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey.

[00:55:47] And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.

Claire Prudhomme | December 11, 2025