42 Min Read

[The AI Show Episode 177]: AI Answers - AI Ethics, Flagging AI Content, AI Accuracy, Book Recommendations, & AI Intellectual Property


Get access to all AI courses with an AI Mastery Membership.

In this episode of AI Answers, Paul Roetzer and Cathy McPhillips answer the complex, and often uncomfortable, questions shaping the future of AI. From the moral framing of “good” versus “evil” to the technical risks of viruses, misinformation, and intellectual property, this discussion unpacks what it really means to use AI responsibly in a world moving faster than regulation or understanding.

Along the way, Paul and Cathy discuss fact-checking AI, the emerging need to authenticate synthetic content, and how to keep humans at the center of creation and communication.

Listen or watch below, then scroll down for the show notes and transcript.

Listen Now

Watch the Video


What Is AI Answers?

Over the last few years, our free Intro to AI and Scaling AI classes have welcomed more than 40,000 professionals, sparking hundreds of real-world, tough, and practical questions from marketers, leaders, and learners alike.

AI Answers is a biweekly bonus series that curates and answers real questions from attendees of our live events. Each episode focuses on the key concerns, challenges, and curiosities facing professionals and teams trying to understand and apply AI in their organizations.

In this episode, we address 13 of the top questions from our October 28, 2025 Intro to AI class, covering everything from AI ethics and intellectual property to fact-checking and long-term strategy. Paul answers each question in real time—unscripted and unfiltered—just like we do live.

Whether you're just getting started or scaling fast, these are answers that can benefit you and your team.

Timestamps

00:00:00 — Intro

00:04:31 — Is AI good or evil?

00:08:51 — Is AI a vector for viruses or trojans?

00:11:13 — If we’re using AI information, can we be sued if AI is pulling intellectual property?

00:13:10 — Is there one AI company that’s more ethical than others?

00:16:10 — Someone told me to add a prompt to ‘exclude hallucinations’ to avoid problems. Is that accurate?

00:18:03 — Is it helpful to use one AI tool to fact-check another? 

00:20:08 — Will there ever be a way to definitively identify AI-created videos? 

00:23:39 — Where do you decide where the human stays front-facing, like the podcast or webinars? 

00:29:18 — What books do you recommend reading to learn more about Gen AI? 

00:30:52 — My organization is focused on what not to do with AI. But I think we should also communicate what to do. What do you think about that balance?

00:34:18 — As a Director of Learning and Development, who’s doing AI in L&D right?

00:36:47 — Is there an AI concept for retirees that can help manage issues like healthcare decisions or transfer of wealth?

00:41:51 — It’s estimated Spotify has 100 million songs, and 75 million are AI-generated. Should Spotify and other streaming platforms flag this content as AI?

00:46:47 — Do you have any moments from MAICON 2025 that you've been thinking about over the past few weeks?

Links Mentioned


This episode is brought to you by Google Cloud: 

Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow. Google Cloud offers a powerful, fully integrated and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner.

Learn more about Google Cloud here: https://cloud.google.com/  

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: The best parallel to artificial intelligence is probably the advent of the internet, and if we go back to like the early nineties when it really started being available to consumers, when we started getting access to the internet. You could look forward and say, wow, the internet is gonna enable a whole bunch of really horrible things.

[00:00:17] There's gonna be a dark web where all kinds of horrific things happen. There's gonna be all this cyberbullying, and all these things will emerge from this thing we're calling the internet. Should we build it? And the answer is probably a hundred times out of a hundred: yes. Welcome to AI Answers, a special Q&A series from the Artificial Intelligence Show.

[00:00:37] I'm Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Every time we host our live virtual events and online classes, we get dozens of great questions from business leaders and practitioners who are navigating this fast-moving world of AI, but we never have enough time to get to all of them.

[00:00:55] So we created the AI Answers series to address more of these questions [00:01:00] and share real-time insights into the topics and challenges professionals like you are facing, whether you're just starting your AI journey or already putting it to work in your organization. These are the practical insights, use cases, and strategies you need to grow smarter.

[00:01:15] Let's explore AI together.

[00:01:22] Welcome to episode 177 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Cathy McPhillips, Chief Marketing Officer at SmarterX. Welcome, Cathy.

[00:01:32] Thank you. 

[00:01:32] Always wonderful to have you on the podcast. 

[00:01:35] Cathy McPhillips: No, I appreciate that. 

[00:01:36] Paul Roetzer: This is part of our AI Answers series, so this is not a replacement for Mike.

[00:01:39] Mike is still the co-host for all of our weekly stuff, but Cathy and I do these special AI Answers episodes, which are presented by our partner Google Cloud. So this is a series based on questions from our monthly Intro to AI and Scaling AI classes, along with some of our virtual events. So this is the eighth one of these AI [00:02:00] Answers episodes we're doing.

[00:02:00] So every once in a while, we'll drop a second episode each week. The Intro to AI and Scaling AI classes, if you're not familiar with them, you can learn more at SmarterX.ai. We do free intro classes. We've done 52 of them now, and this episode is actually from that 52nd Intro to AI class.

[00:02:19] And then we do Scaling AI classes each month, and those are completely free. We had over 2,200 people registered for Intro to AI this week. That was just yesterday, right? October 28th. We did, yeah. So what we're gonna do here is answer questions we didn't get to during that class.

[00:02:36] So I usually present for about 35 minutes, and then Cathy and I go through a bunch of questions. But again, we had a lot of people on that class, and so there's a whole bunch of questions we couldn't get to. So that's what this series is all about: just kind of simple answers to questions that our attendees have about AI.

[00:02:53] So, as I mentioned, this is in partnership with Google Cloud. They're a sponsor not only of this but of our AI literacy project as [00:03:00] a whole. They help us with a lot of those initiatives. So we have a great partnership with the Google Cloud marketing team. They sponsor not only the AI Answers series, they do the Intro to AI and Scaling AI classes as well, and then a collection of AI blueprints.

[00:03:14] Plus we have a Marketing AI Industry Council that we run with Google, so you can learn more about Google Cloud at cloud.google.com. And then we've been mentioning these: they have these new AI Boost Bites, which are great. It's a series of short training videos that are designed to help build AI skills and capabilities in 10 minutes or less.

[00:03:33] So check the show notes. We will put a link to their AI Boost Bites series in there. So Cathy, I'll turn it over to you. You can give a little bit more context on how this all works, and then we'll dive in. I know you sent me a brief. I have not looked at it; I love being surprised by the questions. So I'll let you run it from here.

[00:03:51] Cathy McPhillips: I sent you the brief for, like, my sanity, not for yours. So we recorded Intro to AI 52 on October 28th. As Paul mentioned, it is the [00:04:00] morning of October 29th; it is 10:04 a.m. Eastern as we're recording this. And what we do is, there are 20, 30, 40 questions we don't get to every episode. So Claire, who, if you're a podcast listener, you've heard us talk about Claire.

[00:04:14] She runs through, does a dive into the questions, makes sure that they haven't been asked on a previous episode. If they have, we try to mix it up a little bit. And then we're gonna go through these questions now, whatever is left, or some that we wanted to ask again live on the podcast. Sounds good.

[00:04:31] Question #1

[00:04:31] Cathy McPhillips: Let's get started with number one. We're jumping right in. So, okay: Is AI good or evil? What is safe for a business environment?

[00:04:41] Paul Roetzer: Wow. Okay. So it's interesting, I was teaching a journalism class last night at Case Western Reserve University, actually, and I love the questions I got from students, and a number of them were actually related to this idea.

[00:04:54] Like, is it good, is it bad? How is society reacting to it? So this is [00:05:00] becoming a more common thing. It can be both. Like I often say, the best parallel to artificial intelligence is probably the advent of the internet. And if we go back to, like, the early nineties when it really started being available to consumers, you know, when we started getting access to the internet,

[00:05:19] you could look forward and say, wow, the internet is gonna enable a whole bunch of really horrible things. There's gonna be a dark web where all kinds of horrific things happen. There's gonna be all this, you know, cyberbullying, and all these things will emerge from this thing we're calling the internet.

[00:05:33] Should we build it? And the answer is probably a hundred times out of a hundred: yes. The net good for society makes it to where you deal with the ramifications and the negatives as you're building it. You figure those things out, and then you try and create as much benefit for society as possible.

[00:05:52] I think of AI in a lot of the same ways. There are going to be horrible things that happen as a result of AI. Some of them I could sit here and list for you. [00:06:00] Some of them I don't even want to conceive of. And in the process, we're also gonna solve diseases and figure out how to create abundance in terms of energy.

[00:06:10] We're gonna hopefully create new levels of human fulfillment. Like, I think all these incredible things will happen. It is not gonna be a straight line, and it is not going to be without very messy, painful parts of the process. So that's why I say it's both: it is good and it's evil. What is safe for a business environment?

[00:06:31] That's a pretty broad question, I would say, overall. But, you know, we teach a responsible approach to this, that it should be a human-centered thing. That whatever you're doing in a business environment, you should think about how it positively impacts the people within the organization.

[00:06:47] There are definitely gonna be some companies that look at this as a replacement for people, and they will do that as quickly as humanly possible. They will find ways to automate jobs so they can just have fewer people working. I don't think that that'll be the norm. [00:07:00] I think that that'll make the big headlines, and there'll be all kinds of stories about that impact in a negative way.

[00:07:05] But I think most companies, especially privately held companies with good people running them, they're gonna look at this as a way to create fulfillment for their people. Like, I think about this all the time as we scale, Cathy. It's like, how do we enable people to spend more time with family and friends, and, you know, their own wellbeing?

[00:07:24] Like, how do we enable that while still scaling a company really fast that has a really broad impact? But we can do it in a way where we can do the work of what would've taken 10 people before with a smaller team. And so we're gonna keep hiring. Like, we'll probably double our staff next year, but we probably would've had to 4x our staff to do what we're gonna do next year in a normal business environment before generative AI.

[00:07:47] So that's how I think about it. And I, again, I think it's gonna be independent choices that each company and each, you know, leadership team will make about the impact it has on their business. 

[00:07:55] Cathy McPhillips: Yeah, and I think, you know, we talk a lot about AI policies, but I think those AI [00:08:00] policies are more like human behavior policies.

[00:08:02] Yeah. Like, we need to make sure that, you know, here are the reasons we're doing all these things. But really it comes down to how the human is using all of these tools, for good or for evil.

[00:08:12] Paul Roetzer: Yeah. And we can put in the link to our Responsible AI Manifesto, which I wrote in 2022, if I remember correctly.

[00:08:19] Or maybe it was, like, early 2023; it might have been 2022, I think. But the whole premise was that as we go through this phase, it has to be human-centered. There are 12 principles I outlined, and it's Creative Commons, like anybody can take these principles and use them. So yes, I think you're right, Cathy.

[00:08:33] So much talk is around generative AI policies and preventing risk and like guiding people how to use it. And not enough talk is about responsible AI principles, which should be part of that, right? Which is how do we use it in a responsible, human-centered way, not just for our own employees, but for our customers, our community, all of our stakeholders.

[00:08:51] Question #2

[00:08:51] Cathy McPhillips: Absolutely. Okay, number two. We asked this live, but I wanted to ask it again. Is AI a vector for viruses or [00:09:00] Trojans?

[00:09:00] Paul Roetzer: Yes, it is. Mainly, the computer-use agents are the biggest concern here. So what I mean by that is, ChatGPT Atlas is the browser we talked about on episode 176, so we went pretty extensively into this.

[00:09:11] So I will mainly say, go listen to episode 176 to get more concrete examples of how this works. But in essence, once you allow the AI to sort of take over your computer and go and do things on your behalf, like click around websites and copy and paste text, things like that, it starts to open you up to nefarious ways that people can very creatively take over your computer and inject things into your

[00:09:37] drives that can do things you don't know are happening. People in the IT world are thinking deeply about this. They're working on this; the labs are working on ways to prevent this. But as a user, just know that, specifically when it comes to agents that can go do things on your behalf, on your devices,

[00:09:56] there are entirely new [00:10:00] risk factors that start to emerge. And so don't just jump in and say, oh, great, I can use ChatGPT Atlas and let it take over my computer, or I can use Google Chrome's, you know, computer-use agent, and Anthropic has one. Just because the tech is there doesn't mean it's fully tested and safe is kind of the thing.

[00:10:17] So in a corporate environment, look at it like this: yes, I know it can slow things down, but you have to rely on the experts who actually understand this stuff, because they're trying to keep you and the company safe,

[00:10:30] Cathy McPhillips: Right? And it's such an easy thing to do. It goes back to the principles and the guidelines.

[00:10:33] It's this easy thing to do. Oh, this will save me so much time. Oh, it's Google and this company, you know, it must be safe, right? Or it must have been tested thoroughly, so I know this is going to protect me and my company. But asking the question, you know, not waiting for it to come to you and say,

[00:10:50] yes, this is okay. Go to them and say, is this okay?

[00:10:52] Paul Roetzer: Yeah. And we even cited the Chief Information Security Officer from OpenAI on episode 176, who is explicitly [00:11:00] saying, this is not fully tested technology. Like, there are risks, and we're aware of them, and we're trying to resolve them. But they put the product into the world anyway.

[00:11:08] Cathy McPhillips: Yeah, absolutely. Education, education, education. Right? Yeah. Yep. Okay. 

[00:11:13] Question #3

[00:11:13] Cathy McPhillips: Number three. If we're using AI information, can we be sued if AI is pulling intellectual property?

[00:11:19] Paul Roetzer: This is one that comes up a lot, but I don't think people fully comprehend this. So an example here would be, like, let's say when Sora 2 first came out from OpenAI a few weeks back and they hadn't put the guardrails in place for intellectual property.

[00:11:32] And so you could create Disney characters doing things, you could create actual people doing things, celebrities, politicians, your CEO. Like, you could do whatever you wanted, and there were almost no guardrails in the first 72 hours. So the question becomes, to make this tangible: just because OpenAI allowed you to create South Park characters or Marvel characters, does it mean that you are liable for actually doing it?

[00:11:58] I don't [00:12:00] know what the case law would say here. I don't know that it's been defined quite yet. But I am under the assumption that you should be cautious in creating copyrighted material, trademarked material, when using AI, and don't assume that the liability lives at the lab level. You know, Disney, all these companies, they're gonna sue whoever they can to protect this, and they're gonna go after the big guys first.

[00:12:28] But that doesn't mean that you don't have some liability as an individual user. So you have to understand the terms of use. You have to, you know, like I always say, you have to have these moral clauses, 'cause the law's not gonna keep up with how fast this is moving. And so you have to decide,

[00:12:43] from a moral perspective, am I going to do this? Like, from an ethical perspective. And that's why generative AI policies are so important within organizations: you have to have these sorts of principles in place about how you will act and behave, even if the law is uncertain in some of these areas. [00:13:00]

[00:13:00] Cathy McPhillips: And that's the Responsible AI Manifesto line that I use all the time, that the legal precedent is lagging so far behind.

[00:13:07] Yes. Like we just need to make the right decisions. 

[00:13:09] Paul Roetzer: Yep. 

[00:13:10] Question #4

[00:13:10] Cathy McPhillips: Okay. Number four. Is there one AI company that's more ethical than others, especially around environmental impact and data sourcing? 

[00:13:20] Paul Roetzer: That's a tricky one. So, you know, I would say it's in the eye of the beholder. You could certainly say that Anthropic seems to take the higher moral ground.

[00:13:33] At least that's what they say. They want to create that perception that they have a greater focus on safety. And yet, they got a $7 billion fine for, or I don't know what the total fine was, but they stole 7 million books, basically, to train their model, and they actually had to pay fines for that.

[00:13:55] So I think I mentioned this, I dunno which thing I was on this week where I mentioned it, but you can go search and see [00:14:00] if your books are in that training set. Two of my books are, ironically not the one I wrote about artificial intelligence, but my other two books are in there, and I think I'm actually eligible for $3,000 per book

[00:14:11] as a fine against Anthropic for stealing 7 million books. So even the ones who present themselves as being more ethical, they all stole copyrighted material. Now, stole is, again, a relative term. It may be found that it was fair use and they were allowed to do it, but we don't know.

[00:14:30] So there are some, like Adobe, that tried to do ethical training of their models early on, from an image generation standpoint. I don't know that that worked so well, because if you do that, then you basically have to restrict the training data to things you are permitted to use. And that is a much smaller universe of data.

[00:14:49] And so it affects the quality of the models. And at the end of the day, I don't know that consumers care enough to use an ethically sourced model. They just want the model that works [00:15:00] best, right? And so I think that's kind of where Silicon Valley landed on this: like, screw it, you know, people don't really care that much.

[00:15:06] There might be a small percentage of people who do, and we'll figure it out and pay the legal bills later. Like, let's just go Hoover up everything we can possibly get and we'll train these models. So, yeah, I don't know. I mean, there may be some small niche players that are a little bit more ethical about this. But generally speaking, they all did the same thing when it came to training the models, and they continue to do the same thing when it comes to training them.

[00:15:25] Cathy McPhillips: Yep. You did reference environmental and ethical AI in episode 163, if people wanted to go back to that one and take a listen.

[00:15:34] Paul Roetzer: Yeah, and environmental, the same thing basically applies. You need to use a bunch of compute and a bunch of energy to train the biggest models. There are some people training smaller, more efficient models that are gonna be better on the environment. But at the end of the day, if we fast-forward five years,

[00:15:48] the major pull on energy and the major impact on the environment isn't the training of the models, it's the inference. It's all of us using intelligence in everything we do, in our devices, in our [00:16:00] software. Yeah, that's what's gonna end up being the thing. And so it's kind of hard to prevent that from happening at this point.

[00:16:10] Question #5

[00:16:10] Cathy McPhillips: Yep. Okay. Question number five. Someone told me to add a prompt to exclude hallucinations to avoid problems. Is that accurate? 

[00:16:20] Paul Roetzer: I doubt it. I haven't heard that. So to unpack that a little bit, if people aren't familiar: hallucination is the technical term used by the labs that means it just makes stuff up, like it gets stuff wrong.

[00:16:31] So if you ask ChatGPT to help you write a research report, and it does it, and it looks incredible. But then you start peeling it back and you realize, wow, it just completely made up a citation. Or the book it's citing doesn't exist, or it was a different author than what it says, or it gets a date wrong or a person's name wrong.

[00:16:48] Like, it just makes stuff up. I don't know. Telling it to think harder or, like, check your work, I could see those things possibly having an impact. I can't see them removing the [00:17:00] human in the loop of still having to verify everything. I guess it's possible. Like, these models are weird. Little things: before we had reasoning models in September of 2024, you used to be able to get them to improve their outputs by saying, like, think harder about it.

[00:17:17] And nobody really knew exactly why it worked, but it just did. So, yeah, it wouldn't shock me. I don't know about 'exclude hallucinations.' I could see something like: check your work, make sure your citations are accurate, go search any citation and confirm the data. Like, I could see things like that possibly making an impact.

[00:17:38] But I also imagine that all of that is gonna be baked into the system prompts for every one of these models anyway. The labs are trying to reduce the hallucinations, and I'm sure that they've thought of all the prompting tricks they can on the backend for the system prompt itself. But it might make a meaningful difference.

[00:17:53] Not enough for you to, I guess from an analogy perspective, take your hands off the wheel. I was gonna say, like, you still gotta check it.

[00:17:59] Cathy McPhillips: Yeah. Moral of [00:18:00] the story is don't put that in there and then say like, oh good, it's done. 

[00:18:03] Paul Roetzer: Yeah. 
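Editor's note: for readers who want to experiment with the kind of prompting Paul describes here, the sketch below shows the general idea in Python, using the OpenAI SDK. The model name and instruction wording are illustrative assumptions, not a tested recipe, and nothing here removes the need for a human to verify the output.

```python
# A minimal sketch of "check your work" prompting, as discussed above.
# Assumes the official `openai` Python SDK and an OPENAI_API_KEY in the
# environment; the model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

VERIFY_SUFFIX = (
    "Before answering, re-check every factual claim. "
    "Only cite sources you are confident exist, and mark any claim "
    "you cannot verify as UNCERTAIN instead of guessing."
)

def ask_with_verification(question: str) -> str:
    """Ask a question with a verification instruction appended."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": f"{question}\n\n{VERIFY_SUFFIX}"}],
    )
    return response.choices[0].message.content

# The human in the loop still reviews the answer before using it.
print(ask_with_verification("List three books about generative AI, with authors."))
```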

[00:18:03] Question #6

[00:18:03] Cathy McPhillips: Yeah. Okay. Number six. Is it helpful to use one AI tool to fact-check another, like using ChatGPT to check Gemini?

[00:18:11] Paul Roetzer: Potentially. I have done this. I will take an output from ChatGPT and I'll throw it into Gemini and say, can you assess this, can you edit this, that kind of thing. Again, sometimes it might work, sometimes it doesn't. It doesn't remove the need for the human to still be the one that verifies it and still holds the authority of putting the thing into the world and being responsible for whether or not it was actually correct.

[00:18:35] But I could see it being a layer, kind of like having someone edit something. You know, if I write something and I pass it to Cathy and say, can you edit this? I'm still the one that publishes it at the end of the day. If there's something wrong in it, it's on me, not Cathy; she was my editor. And so that's kind of how I would feel about this situation.

[00:18:51] I would equate it to a human fact-checker, where you, as the author, still hold the end-game responsibility. But it might be helpful, and, you [00:19:00] know, you could say, hey, check the tone, check the style, check the grammar. Yeah, I could see using it like that, as an assist, not as, like, a final

end-product kind of thing.

[00:19:10] Cathy McPhillips: Right. And I mean, the diversity of the model training, does that matter?

[00:19:14] Paul Roetzer: Yeah, I don't know. The models end up sort of functioning in a very similar way. The thing that makes them different is the system prompt from the lab that tells it how to behave and what sort of personality to have,

[00:19:26] and stuff like that. And you're gonna be able to customize those things yourself. Like, right now, you can actually go in and change the personality of ChatGPT so that it's, like, you know, more positive and happy toward everything it says to you, or you can make it more critical. And so I think over time the models are gonna be personalized based on your individual preferences.

[00:19:46] Right now, they behave differently. Like, the formats are a little different, the outputs are a little different. But again, it has way more to do with decisions made by humans in the labs that tell them how to behave, versus did they train on different data sets [00:20:00] that make them kind of come out of the box different.

[00:20:03] It's way more about human choice that goes into how these things function when they're out in the real world. 
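Editor's note: a minimal sketch of the cross-checking workflow Paul describes, drafting with one model and asking a second to review. It assumes the `openai` and `google-generativeai` Python packages and placeholder model names; the human remains the final editor either way.

```python
# Draft with one model, then ask a second model to flag questionable
# claims -- the "AI editor" pattern described above. Package and model
# names are assumptions; API keys are read from the environment.
import os

from openai import OpenAI
import google.generativeai as genai

openai_client = OpenAI()  # uses OPENAI_API_KEY
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini = genai.GenerativeModel("gemini-1.5-pro")  # placeholder model name

def draft(prompt: str) -> str:
    """Generate a first draft with one model."""
    resp = openai_client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def review(text: str) -> str:
    """Have a second model act as an editor, not an authority."""
    return gemini.generate_content(
        "Act as a fact-checker. List any claims in the text below that "
        "look wrong or unverifiable, with a one-line reason for each:\n\n"
        + text
    ).text

article = draft("Write 200 words on how AI watermarking works.")
notes = review(article)
print(notes)  # a human reads both before anything is published
```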

[00:20:08] Question #7

[00:20:08] Cathy McPhillips: Got it. Okay. Number seven, will there ever be a way to definitively identify AI created videos? What if someone makes a video of me doing something illegal or includes misinformation about my business?

[00:20:20] How can I protect myself or my clients?

[00:20:23] Paul Roetzer: Yeah. So this is two good questions packaged as one here. So the first is identifying AI-created videos: no universal standard at the moment. It would require all the labs working together and the industry coming together to establish a standard that enables any platform to know a video is generated by AI.

[00:20:42] So in essence, picture a universal watermark. So whether it appears on Instagram or TikTok or YouTube or X or wherever, that social platform can instantly know it was AI-created, because it has this universal identifier. What's happening right now is [00:21:00] each lab has an identifier. So there are watermarks in Veo, so Google DeepMind knows that Veo was used to create it.

[00:21:07] Same would go for Sora; OpenAI knows anything that was created with Sora. They can do the same thing, by the way, with text and images and anything. The individual labs can put those things in there. It's just that we don't have a standard across the industry that everybody agrees to. And I don't see that happening anytime soon.

[00:21:25] I would love it. That would be an incredible step forward. I just don't see these labs coordinating to make that happen. What we need, at a minimum, is the social media platforms to recognize and disclose the different markers from the different labs; that would be a good step. So the second one, about what do you do if someone makes a video of you, or of your kids, or, you know, your CEO, your board:

[00:21:52] people aren't gonna like this answer. Like, you're screwed. This is a fundamental problem and we've known this was coming [00:22:00] for years. So we talked about this in our 2022 book, about the impact of deepfakes and how you actually had to build this into your crisis communications plans as a company: what happens if our CEO gets deepfaked doing something, saying something they didn't do?

[00:22:13] It spreads for two hours and then we get it taken down, but we all know how social media works; it's already out there. So yeah, this is something you do need to be planning for. I would say, as you're going into 2026, your crisis communications team has to be dealing with this right now. You have to have a plan for what happens. Who do you call?

[00:22:30] The different platforms, all of this needs to be listed. So if you've never built a crisis communications plan: you basically game-plan scenarios of what could go wrong, you assign probabilities to those things going wrong, and then you look at the potential impact if that event happens. And then what you do is you layer in: what do we do if it happens? Who do we call?

[00:22:48] What's the email? Who's the point of contact? How do we do it? How do we inform the board? All of that happens in a crisis comms plan. I used to do this for a living; crisis comms was, like, early in my career. So that has [00:23:00] to live in your 2026 planning docs. And if your team or your company isn't thinking about that, you gotta go think about it now.

[00:23:06] What do you do in your personal life? Different story. It's one of those things where I just wait for the moment when someone we know has to deal with this, because I know it's coming. We'll talk about an example of this next week, of a famous scientist who's being deepfaked, and he's like, what the hell do I do?

[00:23:23] And he was, like, tagging everybody, OpenAI, Google, YouTube, in his tweet, saying: I'm being deepfaked, saying things I never said, and now I'm getting all this blowback from stuff that I didn't do. How do I stop this? So this is a very real thing. It's gonna be a major problem in 2026.
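Editor's note: to make the first half of this answer concrete, here is a hypothetical sketch of what Paul is pointing at. No cross-lab detection API exists today, which is exactly the problem: a platform would need one detector per lab (for example, Google DeepMind's watermarking for Veo) instead of a single universal check. Every function below is a placeholder.

```python
# Hypothetical sketch only: each entry stands in for a lab-specific
# watermark detector a platform would have to integrate separately.
# None of these detectors exist as public APIs; that absence is the
# point of the discussion above.
from typing import Callable, Optional

DETECTORS: dict[str, Callable[[bytes], bool]] = {
    "veo_watermark": lambda video: False,   # placeholder detector
    "sora_watermark": lambda video: False,  # placeholder detector
}

def label_if_ai_generated(video: bytes) -> Optional[str]:
    """Return the name of the first matching lab watermark, else None."""
    for name, detect in DETECTORS.items():
        if detect(video):
            return name
    # With no universal standard, a miss proves nothing: the video may
    # still be AI-generated by a lab whose marker we cannot read.
    return None
```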

[00:23:39] Question #8

[00:23:39] Cathy McPhillips: Yeah. Okay. Number eight.

[00:23:43] Where do you decide where the human stays front-facing, like the podcast or webinars? Do you think the end user will drive our decision? Is it trust? Is it a desire for human connection?

[00:23:54] Paul Roetzer: Yeah, so I don't think we put this one online. We should put this online, Cathy: my [00:24:00] opening keynote for the AI for Writers Summit this year.

[00:24:02] We should put that on YouTube if we haven't already. So this is where we dealt with this. Like I basically went through and said, just 'cause AI can write doesn't mean it should. And like, how do you know when to just write yourself versus let the AI do the thing? And for me, like the biggest factor comes down to authenticity.

[00:24:20] Like, an expectation of authenticity. So when I'm sitting here answering these questions, you don't want to find out I just had ChatGPT bullet-point these things out for me. That would not be authentic in any way, and it's like, well, I could do that myself. I think the reason people listen to the podcast is because it is authentic and unscripted, like this.

[00:24:43] This is literally, like, on the fly. Cathy's asking me questions that I don't even know what they're gonna be, and I'm responding based on 14 years of studying artificial intelligence and meeting with thousands of executives. That's what we're trying to bring. And if it wasn't that, [00:25:00] if it was literally just someone who decided to try and become an AI influencer, and they're just asking Gemini and ChatGPT all these questions and then summarizing them, that falls flat in a second.

[00:25:11] If you get them off of where they're coming from, they can't talk without a script. So I always say, to have confidence in the material, you have to be able to stand there and answer unscripted questions for, like, 10, 20, 30 minutes. If you can do that, then you actually have domain expertise. You actually have confidence in what you've done and the experience you've gained.

[00:25:31] So I do think that that's what matters, whether it's your podcast, webinars, whether it's articles you write, posts you put on social media. If you want authenticity, if you wanna establish thought leadership, it cannot be the words of an AI assistant. Anyone can do that. And then the desire for human connection:

[00:25:52] that's why I think things like this are so important. In-person events, we're very, very bullish on in-person events. You can't fake those things. And I [00:26:00] think that more and more, and we see this ourselves with our community and our MAICON event that we just had a few weeks ago, that human connection is becoming more and more critical.

[00:26:08] Like, that was the thing we heard probably more than anything at that event: just how meaningful it was for people to be with other people who are all trying to figure this out. And you cannot simulate that on a Zoom call. You can't simulate it in, you know, a Slack channel. Like, you can initiate it, but there's nothing like human connection.

[00:26:27] So, yeah, I think that's gonna be just absolutely essential, and you should be thinking strategically about it going into next year. How do we amplify human connection, and how do we ensure authenticity comes through in the content we're creating?

[00:26:44] Cathy McPhillips: There was a podcast I listened to a few months ago.

[00:26:46] They were taking all the AI news, and I'm fairly certain they were taking all the AI news from this podcast, and they were synthesizing it and making an AI-powered podcast. And I was like, interesting. So for a minute I was like, oh, is this [00:27:00] going to hurt us at all? And I listened, and I was just like, oh, we're good.

[00:27:04] You know, it's just such a different experience hearing it from someone explaining this to us, talking about it, helping us understand it, putting it in words we understand, versus just, like, here's your news. And it was the most boring thing I've ever heard. Yeah. So I think we want a little, I mean, not that this is, like, super entertaining, but there is a little element of that that brings a podcast to life.

[00:27:25] Paul Roetzer: Yeah. And I think, like, I'm a huge believer in being imperfect. I don't know if that's the right way to say this, and I don't know if people understand this, but our podcast, this is episode 177, Cathy, we do almost zero edits. There have probably been three times now I can think of, in all the weekly episodes Mike and I have ever done, where we actually stopped to pause and, like, take a break, and it was usually due to coughing fits.

[00:27:55] We don't, like, do a segment and go, ah, we didn't do that well enough, let's [00:28:00] go back and ask those two questions again. Everything we do is first take, and then we turn it over to Claire and Cathy and they create the product. In a matter of 24 hours, it goes from us recording it to it being live. Actually, less than 24 hours.

[00:28:13] Less than 24. 

[00:28:14] Cathy McPhillips: Yeah. 

[00:28:14] Paul Roetzer: So it's completely authentic. And I've actually told the team specifically, don't take out the imperfections, like, that is actually what makes it human. And so I like the fact that sometimes I actually don't know the answer, and I'll tell you point blank, I don't know the answer.

[00:28:30] Now, it puts a lot of pressure on you to not say something stupid. But I also feel like even that, it's like, if I make a mistake, or if I have to go back and correct something, which I can't actually think of examples of having had to do, we're very, very thorough in checking all of our sources. If we're gonna cite something,

[00:28:47] we're gonna double-check the data points. We approach it as journalists would, basically. And I think that's what makes the weekly kind of unique: it is this very imperfect, human approach to everything, [00:29:00] augmented by our ability to use AI to make it as streamlined as possible from a planning and production and promotion standpoint.

[00:29:06] But the humans show up and do it every week.

[00:29:09] Cathy McPhillips: Yeah. There actually have been times when you've said a sentence and I was like, ooh, that could be taken outta context. But we didn't do anything with it.

[00:29:15] Paul Roetzer: Yeah. Yeah. So, yeah, I know. 

[00:29:18] Question #9

[00:29:18] Cathy McPhillips: Okay. Number nine. What books do you recommend reading to learn more about generative AI?

[00:29:25] Paul Roetzer: Yeah, so I think I mentioned this, this might have come up yesterday too. In the AI Academy courses, I actually have a course where I talk about all my favorite books when it comes to this stuff. There are some that actually predate generative AI, like Prediction Machines, which is amazing.

[00:29:44] The Algorithmic Leader by Mike Walsh is amazing. Those were written in, like, the 2017-to-2019 range. Ethan Mollick's Co-Intelligence is more of a modern-day one. Genius Makers by Cade Metz came out before generative AI, but it helps you understand kind of how we got to the [00:30:00] generative AI phase. We had those questions about environmental issues and things like that.

[00:30:05] If you want to understand the geopolitics and environment, Empire of AI by Karen Hao is incredible. The one I think I mentioned yesterday is Geoff Woods, who actually did an AI-Driven Leader keynote for us. That's the name of the book.

[00:30:17] Cathy McPhillips: The AI-Driven Leader is really good. And I remember that one actually, just because I felt like, you know, he was coming to the event.

[00:30:25] Yeah. I just wanna be prepared. That's really good. 

[00:30:27] Paul Roetzer: It is really good. Yeah. And then our book, Marketing Artificial Intelligence, came out in 2022, but we foreshadowed all of this happening. So there's actually a section that asks, what happens when AI can write like humans? Like, we knew generative AI was emerging that was going to be able to do the things that we then got with ChatGPT later that year.

[00:30:48] So that's still a very relevant book as well.

[00:30:52] Question #10

[00:30:52] Cathy McPhillips: Nice. Number 10. My organization is focused on what not to do with AI, but I think we should also communicate what to do. [00:31:00] How do you think about that balance, and how should leaders frame it?

[00:31:03] Paul Roetzer: Yeah, if they're not thinking about what to do, they're gonna be obsoleted.

[00:31:05] So it's not even just, like, nice to have; I think of it as a business imperative. So depending on what industry you're in, if you are not doing this stuff and other people are, you know, if it's a SaaS company, you're cooked. Like, it's game over. SaaS companies have to have been doing this for the last two years.

[00:31:24] If you're in manufacturing, or maybe pockets of healthcare or law or professional services, there's a chance you've gotten by to this point not having figured this stuff out. But it's not gonna be long now until everybody else starts to figure this out. And so you either have the opportunity to get out ahead of this and be that AI-emergent company that

[00:31:43] has the opportunity to actually accelerate growth, or you're just gonna eventually kind of fade into obsolescence. So, you know, even from there, with the adoption, there's a couple layers. My workshop at MAICON this year was about AI innovation, and the main takeaway there [00:32:00] was: optimization is 10% thinking, innovation is 10x thinking.

[00:32:04] And what I meant by that is, it's gonna be table stakes to use AI to optimize efficiency and productivity. Everybody is gonna be doing that. 10% is honestly probably a low bar for what you can gain in efficiency or productivity. But let's just say you're thinking about, let's incrementally improve the things we're already doing.

[00:32:23] Let's use AI to kind of level those things up. That'll work for a little while. But if you have competitors, people in your industry, who are looking at this and saying, yeah, but how do we reimagine the entire thing? Like, how do we change the pricing model completely? How do we bring new services to market, go into new markets?

[00:32:42] Those are the people who have the 10x thinking; they're looking at dramatic transformation. And if you're up against a company that's doing that and has the will and vision to execute it, you're done. And so that's how I think about our business: we're basically a media company, like, first and foremost, probably as the foundation, [00:33:00] build an audience and then create value for those people.

[00:33:02] So we, we do things like the podcast and the newsletter. We are an event business. We have MAICON and our virtual events, but we actually run probably like dozens of events throughout the year depending on, you know, which things you'd throw in that category. And then we're an education business, and the education largely comes through our academy.

[00:33:18] And I spend every day thinking about what a smarter version of all of those business models is. Research would probably get thrown into the content side as well, and I literally think about how we just disrupt the entire industry, all of those. Like, what is just a different version? And not because I have anything against any of the companies that exist within those industries.

[00:33:35] There are a bunch of good companies; I have friends running companies in those industries. What motivates me is to say, how do I do something different and better? Not because I really care about the competition or wanna beat any particular company. It's just that otherwise, I don't wanna get outta bed.

[00:33:52] Like, if I'm not trying to completely reimagine something, I just lose motivation. So that's kind of how I think about it: you [00:34:00] gotta get to the optimization phase if you haven't yet as a company, but you have to quickly also be thinking about, and transitioning into, the innovation phase, which truly drives transformation.

[00:34:10] Cathy McPhillips: Yep. Yeah. So it's more about guiding innovation, less about policing the risk of everything. Yeah. Okay.

[00:34:18] Question #11

[00:34:18] Cathy McPhillips: Number 11. As a Director of Learning and Development, who is doing AI in L&D right?

[00:34:23] Paul Roetzer: Yeah, so this one comes up a lot. I often cite Moderna; I think I mentioned that on the intro call.

[00:34:31] You know, it was a question I think we might have talked a little bit about. They did a really good job. It's a great case study we featured in our courses. You can go online and actually, you know, OpenAI, maybe we'll drop that link in there, OpenAI had a case study on Moderna pretty early on.

[00:34:48] We've seen organizations like Cleveland Clinic as an example. You know, that's just someone we've worked with, so I can speak directly on theirs, where they looked at it from a leadership [00:35:00] perspective, they looked at it from a practitioner perspective, and they really started thinking about how do we drive that within the organization.

[00:35:05] HubSpot does a great job with this. Baptist Health is a company we've mentioned from a healthcare perspective that does some really cool things. There are actually some on our academy.smarterx.ai site; we do have some logos there. Most of the big companies that we work with, we can't disclose who they are.

[00:35:22] But I will say, categorically, the way the best companies are doing it is they are infusing it into their existing programs, especially at the larger enterprise level. They're looking at it as complementary to any programs they currently have, and then they're building specific AI curriculum. And so sometimes we will work with them where they basically plug our AI Academy into that to immediately level up, and then they'll complement it with other learning platforms like LinkedIn Learning and Coursera and Google and OpenAI; all these people have great courses.[00:36:00]

[00:36:00] And so that's kind of how we teach it: we build learning journeys through our academy that are specifically designed for people to drive personal and business transformation, but we actually believe deeply in the value of a lot of the rest of the ecosystem and the content being created. So yeah, I don't know if I gave, like, great examples of case studies to go look at and things, but,

yeah, the companies that are doing it, I've generally found, aren't talking publicly about what they're doing.

[00:36:25] Cathy McPhillips: Right. 

[00:36:25] Paul Roetzer: Because it is a pretty distinct competitive advantage at the moment. McDonald's, you know, Michelle Ganley, the former Chief AI Officer at McDonald's, did a talk on how McDonald's was doing this, so that might be another one to go look at.

[00:36:36] That's public knowledge. But yeah, it's early, and there are very few who have done it really, really well and are willing to talk about the fact that they've done it really well.

[00:36:47] Question #12

[00:36:47] Cathy McPhillips: All right. Number 12. I love this question, and I didn't even see this yesterday when we were going through them. Is there an AI concept for retirees that can help manage issues like healthcare decisions or transfer of wealth? That's 40 [00:37:00] million people who could benefit.

[00:37:03] Paul Roetzer: I mean, you could come up with a really cool system prompt to do this with a GPT, but is there, like, a company that has done this yet? I'm not aware of it. Now, I'd have to do a deep dive. CB Insights is a platform we use to do market analysis; I would have to go in and run, like, a healthcare industry search and see if we could find something like this.

[00:37:28] I would imagine, you know, AARP, organizations like that, might be doing this kind of research already or building this kind of tech. But I love this kind of thinking, because this is basically what we always teach: once you understand AI and what it's capable of, and what it will be capable of in the, you know, coming months and years, you start to look at every problem differently.

[00:37:48] And so this is an example of that. I think I talked on a podcast a while back about how insanely complex it is to get some types of [00:38:00] medication. So, you know, a family member has a medication where the supply is very low, and so you literally have to call around to, like, five different pharmacies to try and get this medication that they won't give you three-month prescriptions of.

[00:38:15] So every 20 days you have to bounce around between five facilities. And then if you get one that says, we can get it in next week, you're like, okay, put in the order. Then you call another one, and they say, we can't fill that because you have an order at another pharmacy, and then you gotta call back. I'm like, how is this, in 2025, the way we do pharmacy?

[00:38:32] And I consider us privileged in our ability to solve this. We have the resources to where the finances aren't even the issue. So we are already at an advantage, and I still can't do this. And I think, what do people do who don't have those privileges that we have? And so that's something I've thought a lot about.

[00:38:52] It's like, how do I solve that? Or, like, Mark Cuban's worried about this; maybe I'll just let Mark Cuban do it. But these are the exact kinds of [00:39:00] things where ideas are born, when you look at problems differently. And so I really like this question. I hope whoever asked it actually, you know, thinks more about this and maybe tries to find some people who can work on this thing.

[00:39:12] 

[00:39:12] Cathy McPhillips: And is this something that you could throw into, like, a problems GPT or something? Or you could just start asking one of the tools some questions on how to get started?

[00:39:21] Paul Roetzer: Yeah. Like, this is kind of like when I had the issue of, I didn't think parents understood AI enough and the dangers of it, so I built Kids Safe GPT, you know, which basically was for parents to better understand risks and talk to their kids about those risks.

[00:39:34] I would imagine you could probably build a GPT in an afternoon that would do something similar, where you just went in and gave it the prompt of, you know: I'm trying to help seniors who maybe don't fully understand the best ways to do this, how to make healthcare decisions, dah da da. You can't function as a doctor, but you can provide medical guidance that they can ask their doctor about.

[00:39:53] Like, you just write the system prompt and then you build it. Transfer of wealth, for sure. Like, I'm planning my [00:40:00] trust. Yeah, if I had three hours, we could design a GPT and probably build a minimum viable product of this in an afternoon.
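Editor's note: a minimal sketch of the system-prompt approach Paul describes, written against the OpenAI Python SDK rather than the custom GPT builder. The wording and model name are illustrative assumptions; a real tool in this space would need review by healthcare and legal professionals.

```python
# A sketch of the retiree-assistant idea discussed above: most of the
# work is the system prompt. Model name and prompt text are placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You help retirees think through healthcare decisions and "
    "transfer-of-wealth questions in plain language. You are not a "
    "doctor or a lawyer: never give medical or legal advice. Instead, "
    "explain the options and produce clear questions the user can take "
    "to their physician, pharmacist, or estate attorney."
)

def ask(question: str) -> str:
    """Answer one user question under the assistant's system prompt."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask("My medication is back-ordered. How do I coordinate refills "
          "across multiple pharmacies?"))
```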

[00:40:10] Cathy McPhillips: Well, I'm gonna call my friend Michael at AARP and tell him that.

[00:40:13] Yeah, 

[00:40:13] Paul Roetzer: If anybody wants to, I was gonna say, call me, but, so I read this tweet this morning. Sorry, this is a total sidetrack now, but people might find this interesting. There was a tweet from a guy who just left xAI. So I guess that's the connection here: it's a top researcher at xAI that worked for Elon Musk.

[00:40:36] And he talks about how a common mistake companies make is not allowing engineers to have enough freedom and time. And then he followed up with another tweet where he said the best work comes from focusing on one problem at a time and nothing else. And so I tweeted that and I said, I feel this in my soul.

[00:40:49] Constant struggle to live it. So that is the curse of, like, when you look at every business and you look at every problem and say, oh yeah, I could solve that, it would just take me three hours. And you spend [00:41:00] your life constantly looking around and seeing all these problems, and not saying, okay, but for the next three months I am explicitly solving customer experience in the Academy.

[00:41:10] And I have, throughout my career of 25 years, always struggled with this. I have too many ideas, and I often can't just lock in and do something. I actually heard an interview with Jeff Bezos, who was talking about this recently, where some advisor told him, you have enough ideas to kill this company.

[00:41:26] Like, you just always bring this thing in, and then the team can't focus. So, I was gonna say, reach out and I'll help you.

[00:41:37] Cathy McPhillips: Call me in January. 

[00:41:37] Paul Roetzer: Yeah, reach out. But if I say no, don't be offended, because I'm trying very, very hard to focus on a few key problems at a time.

[00:41:46] Cathy McPhillips: We appreciate that.

[00:41:48] Sure. Okay, last question. 

[00:41:51] Question #13

[00:41:51] Cathy McPhillips: Number 13. It's estimated Spotify has a hundred million songs, and 75 million are AI-generated. I cannot confirm or deny that math; this was someone else's question. Should Spotify or other streaming platforms flag this content as AI?

[00:42:07] Paul Roetzer: Okay. Yeah. So let's just categorically say there is a lot of AI-generated music that is emerging.

[00:42:13] It's rumored OpenAI is working on a competitor to, like, Suno and Udio, where they're gonna build their own music. Each platform has to have its own policies about whether or not they allow AI-generated music, or whether they have to designate if it's AI-generated. My personal opinion on stuff like this, whether it's an AI-generated video or an image that could be misconstrued as being fully humanly authentic,

[00:42:38] is that you should be able to know that. I've seen that on some of the social platforms, where it'll kind of indicate if it is. A bigger question here is just around, you know, the evolution of what is considered music and what is considered entertainment. These platforms are gonna give people what they want and what they're willing to pay for. I [00:43:00] saw, I think it was Suno,

[00:43:04] they recently said they had, like, a hundred million or 150 million in annual revenue. And somebody's like, for what? Who is paying for this stuff, for these AI-generated songs? And so sometimes I just feel like, and I've said this before on the podcast, I don't know that I have the best taste when it comes to what's gonna work for the broader consumer market when it comes to this stuff.

[00:43:26] Like, Sora 2 to me is a ridiculous product, and I don't know why people would spend a ton of time on that platform. But that is one unique perspective of a middle-aged male. That's just me as a dad, and I look at this stuff and say, I don't get it. We waste our time on enough things; I don't need an AI-generated slot machine to take more of my time.

[00:43:46] That doesn't mean that a whole bunch of other people don't find it fascinating, and it's a nice distraction for them, and they enjoy it. Yeah. And so again, I try and be as [00:44:00] objective as possible when it comes to these things.

[00:44:06] And to them that's creativity. And I'm not gonna judge that. Like I just, I try and sort of just observe what's going on. So my personal opinion about, Hey, I generated music and is it good or is it bad? Like it's kind of irrelevant what I personally think. It's just look at the numbers and say, well, is there demand for it?

[00:44:23] And if there is, then somebody's gonna build it, and they're gonna keep serving it up. So in the end, the data will, you know, tell the story, and how people react to this stuff will sort of drive whether the platforms designate AI content or not. Like, I don't know. If they decide fewer people stick around and listen to it, then they might not show the AI-generated label.

[00:44:41] Cathy McPhillips: Well, that's what I was thinking. Like, it's one thing to say, I'm gonna tinker with this tool, I'm going to use this to make something. It's altogether different to say, I actually am enjoying listening to it as a consumer, right?

[00:44:54] Paul Roetzer: Yeah, and again, there are so many layers to this.

[00:44:59] Like, [00:45:00] I will say I've personally been fascinated recently by these clips I've seen where they're turning, like, hip-hop songs into fifties, sixties jazz and blues music. It's, like, infinitely interesting to me. And I know it's all AI-generated, obviously, but it's such a wild way to listen to the song in a totally different way. Like, an Eminem song as a

[00:45:24] fifties blues track. It's just wild to hear. So again, I'm kind of talking outta both sides of my mouth here. Like, I actually find some of this stuff super interesting.

[00:45:33] Cathy McPhillips: But would you pay for it?

[00:45:35] Paul Roetzer: I highly doubt it, but I don't know. And I think this is kind of the exciting part: we just don't know where this stuff goes.

[00:45:44] You know, I don't know. I think there's so much to be learned and observed, and I was shocked to see the 150 million revenue number. I kind of had the same reaction: for what? But yeah, it's interesting stuff. Long [00:46:00] story short, I do think that stuff like that should be indicated as AI, for now, but that may evolve over time.

[00:46:06] Sure. 

[00:46:07] Cathy McPhillips: And also, if you're a Mastery member listening, our Gen AI app series this Friday is actually on Suno. Claire did an amazing video that's dropping tomorrow.

[00:46:17] Paul Roetzer: And also on that track, whether or not you're an Academy member: we have a blog now that posts anytime we have new content available to Academy members.

[00:46:27] And so it's a great way to keep up on all the Gen AI app reviews that are coming out, all the new courses and certificates, and things like that. I think it's academy.smarterx.ai/blog, if I'm not mistaken. That was a lot of words, so we'll put the link in the show notes.

[00:46:45] Cathy McPhillips: I'm gonna ask you one more question before we sign off. Okay.

[00:46:47] Question #14

[00:46:47] Cathy McPhillips: Do you have any moments from MAICON 2025 that you've been thinking about over the past few weeks?

[00:46:53] Paul Roetzer: Yeah. So, I mean, for people who aren't familiar, we started in 2019 with 300 people. [00:47:00] This year was 1,500. In the process, we almost lost everything.

[00:47:04] Like, you know, when COVID shut down the event business, we went to nothing as a business. And then me personally, I'd bet everything, my entire financial wellbeing, on AI working and being a thing people cared about, pre-ChatGPT. And so for me, so much of it is just gratitude: being there and seeing all the hard work the entire team put in all these years.

[00:47:30] And then being with the people we don't get to see all year round, the people we maybe hear from on LinkedIn, or get some emails from, or see in the Slack channel (I don't spend much time in Slack), but whose stories we don't get to hear enough. And so for me, it's just those three or four days of being together with all these people that feel like this massive extended family, in a way.

[00:47:50] Because everyone's so cool and supportive of each other, and empathetic about where people are at. And so just to be there and hear these insane stories. I mean, we had people from, what, [00:48:00] 19 countries this year?

[00:48:01] Cathy McPhillips: We did.

[00:48:01] Paul Roetzer: Yeah. And to hear these stories of people who listen to the podcast every week in New Zealand or Japan or wherever, or people who took this leap and left their safe career at a corporation because it wasn't AI-forward enough, and they went and did something else, and they were terrified, but it worked.

[00:48:15] And now they're in this amazing place. That's the thing I love: just being together with all those people. I know. What about you? I mean, you got to be there too, back in 2019.

[00:48:26] Cathy McPhillips: I came to 2019 with Joe Pulizzi as a paid attendee. And I remember sitting through Keith's session and Katie Berg's session that year, and I left.

[00:48:33] I'm like, that's cool. And I went back to work, because I just didn't know how to implement a lot of the things they were saying. Like, how do I find the time? How do I find the resources? And then this year I'm like, oh my gosh, there are so many things I can go do now. Yeah, there are so many more stories.

And the stories are getting very broad. You know, there are very, very basic ones, and then there are some folks, Lisa Adams comes to mind, who are so far ahead. So many of our attendees are so far ahead. [00:49:00] So seeing all those changes is pretty remarkable.

[00:49:02] Paul Roetzer: Yeah. And I mean, even finding speakers in 2019 was an insanely difficult process, and I owned that process largely until probably 2023.

[00:49:12] I focus on the main stage now. But back in 2019, and then when we came back in person in 2022, there weren't that many people who were actually doing interesting things, and I think so many people would come to us and wait for the answers from us. And then once ChatGPT emerged and everyone could actually get in and start using this stuff,

[00:49:34] it just exploded, where all these interesting people were doing really fascinating things and sort of pushing the frontiers. And so now it's hard to narrow down the field every year. Like, we only get to have 50-some speakers or whatever it is, but we have hundreds of submissions, plus people we track all year round doing interesting things.

[00:49:54] To the point where now we learn, I think, hopefully more than people learn from us. Like, we still [00:50:00] do our best to stay on the frontier and teach everything we can. But yeah, there are so many speakers doing things where you're like, oh my god, I never even thought to do that. So, yeah, I love the quality of the speakers and the quality of the sessions,

[00:50:13] 'cause you really can walk into any room and learn something new. 

[00:50:18] Cathy McPhillips: Yeah. And on that note, if you're interested, MAICON 2025 on demand is available right now; you can sign up and get immediate access to those 20 sessions. And MAICON 2026 is open for registration. So if you're listening to this on Thursday, tomorrow, October 31st, is the last day of our very, very early pricing.

[00:50:38] So if you're interested, it'll save you like 

[00:50:39] Paul Roetzer: $800 on registration? $900? Oh my God. Okay, so do it now; that price goes up fast.

[00:50:46] Cathy McPhillips: So do it now while you can. And I think that's it.

[00:50:49] Paul Roetzer: All right. Yeah. And that's just MAICON.ai, M-A-I-C-O-N dot ai. Both the registration and the on-demand are right at the top of the page.

[00:50:57] Check those out. All right, Cathy, thanks. And [00:51:00] Claire, thanks for curating questions for us as always. And we'll be back with episode 178, the regular weekly one. Gotta figure out when we're recording that, 'cause I have talks next week. Where am I at? San Diego and Miami.

[00:51:13] Cathy McPhillips: San Diego. And Orlando.

[00:51:15] Paul Roetzer: Orlando, right. I knew it was in Florida somewhere.

[00:51:17] Cathy McPhillips: All right, the next Intro to AI class is December 3rd, and the next Scaling AI class is November 14th. We would love to see you and your teammates there, and we'll see you next time. Thanks, Paul.

[00:51:27] Paul Roetzer: Thank you. 

[00:51:27] Cathy McPhillips: Thank you. 

[00:51:29] Paul Roetzer: Thanks for listening to AI Answers. To keep learning, visit SmarterX.ai, where you'll find on-demand courses, upcoming classes, and practical resources to guide your AI journey.

[00:51:42] And if you've got a question for a future episode, we'd love to hear it. That's it for now. Continue exploring and keep asking great questions about AI.
