AI isn’t just becoming more capable. It’s becoming more personal. And even more "adult."
This week, Paul and Mike lead off with Sam Altman’s provocative comments about ChatGPT’s role in mental health and the growing debate over our emotional relationships with AI.
Then, from blue-collar workers adopting ChatGPT as a daily tool to tech CEOs warning of an impending jobs shock, the episode explores how AI is quietly reshaping both the labor market and human identity.
They also unpack major industry releases, from Google’s new Veo 3.1 and Anthropic’s Haiku 4.5 to Spotify’s “artist-first” AI music push, revealing the race to define who benefits from intelligent machines. Over and over in this episode, we ask what's becoming a defining question of the AI age:
Who’s in control of the future we’re building?
Listen or watch below, and keep scrolling for the show notes and the transcript.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
00:05:09 — ChatGPT, AI Relationships, and Mental Health
- Altman X Post 1 on ChatGPT Mental Health
- Altman X Post 2 on ChatGPT Mental Health
- X Post from Ed Newton-Rex on Anti-AI Sentiment
00:18:58 — MAICON 2025 Takeaways
00:29:57 — AI’s Increasing Impact on Labor and Jobs
- Your plumber has a new favorite tool: ChatGPT - CNN
- AI Jobs Shock Is Coming and Firms Aren’t Ready, Klarna CEO Says - Bloomberg
- Why AI Will Widen the Gap Between Superstars and Everybody Else - The Wall Street Journal
00:40:05 — Google Veo 3.1
- Bringing new Veo 3.1 updates into Flow to edit AI video - Google Blog
- Veo 3 and 3.1 - Google AI Studio
00:43:25 — Claude Haiku 4.5 Is Released
00:46:30 — Anthropic Co-Founder Essay Angers White House
- Import AI 431: Technological Optimism and Appropriate Fear - Import AI Substack
- X Post from David Sacks
- Anthropic Gets Ready to Go Startup Shopping - The Information
00:54:22 — OpenAI’s Opt-Out for Sora 2 Causes Problems
- X Post from the OpenAI Newsroom: OpenAI’s Opt-Out for Sora 2 Causes Problems
- OpenAI blocks Sora 2 users from using MLK Jr.'s likeness after "disrespectful depictions" - CBS News
00:57:44 — Elon Musk Clarifies His Definition of AGI
01:01:12 — New Paper Says AI Method Can Reproduce Human Purchase Intent
01:04:46 — Music Industry Leaders Join with Spotify to Create Artist-First AI
- Sony Music Group, Universal Music Group, Warner Music Group, Merlin, and Believe to Partner With Spotify to Develop Artist-First AI Music Products - Spotify Newsroom
- X Post from Charlie Hellman
01:08:00 — New AI Feature in Google Sheets
This episode is brought to you by AI Academy by SmarterX.
AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. You can get $100 off either an individual purchase or a membership by using code POD100 when you go to academy.smarterx.ai.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: Like we are nowhere near ready as a society for people becoming attached to these things. Like you may not choose to use these tools in this way, and that's fine, that's your choice, but the labs are going to give you that choice. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.
[00:00:26] My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.
[00:00:48] Join us as we accelerate AI literacy for all.
[00:00:55] Welcome to episode 174 of the Artificial Intelligence Show. I'm your host, Paul [00:01:00] Roetzer, along with my co-host Mike Kaput. We are fresh off of MAICON 2025. We're recording this on October 20th, 2025, by 11:00 AM. Not sure if anything crazy is gonna be going on this week, so we'll timestamp it as always to begin with.
[00:01:15] We'll talk a little bit about MAICON in one of the main topics, but just an amazing experience. So thank you to all of our listeners who came out. It was, I feel like, every single conversation I had, Mike, I don't know about you, but it started off with, Hey, big fan of the podcast. Oh yeah. Like the amount of people who were there who are podcast listeners.
[00:01:33] It was just awesome to get to meet everybody, because, I've said it before, like, podcasting is a very opaque thing. Like we don't know who listens. Like we don't have any data on those people. We don't get access to contact information. We don't have a database of those people. Like, we have no idea.
[00:01:51] So you get very limited analytics into your podcast in terms of, like, geography, number of downloads, things like that. But it [00:02:00] is not like a transparent medium where you just have good insights into who they are and what their jobs are and what industries they're in. We don't know any of that stuff.
[00:02:09] So to get to meet so many people in person was just awesome. And Mike and I got to record a podcast on site that's actually gonna drop on Thursday. Yeah. So that'll be episode 175, an AI Answers episode that Mike and I did sort of a live recording of, in the middle of the exhibit hall at MAICON, which was wild, seeing everybody kind of moving around us and stuff.
[00:02:31] So yeah, just again, we'll, we'll talk a little bit more about MAICON, you know, one of the upcoming topics here today. But just thanks again to everyone who came out. It was so cool to. Just get to spend a few days with everybody and hear every, you know, the stories from people and what they're doing with AI and how it's impacting their careers and their own companies and everything.
[00:02:50] So good stuff. Again, thanks to everyone who was at MAICON. It was an awesome experience, and sort of still coming down off the high from that event, it was [00:03:00] getting back into reality and starting to work. Alright, so this episode is brought to us by AI Academy by SmarterX, which, if you were at MAICON, we had this awesome booth.
[00:03:08] It was the first year we've had a dedicated area for AI Academy. So we've completely reimagined this with all new courses and certificates to help individuals and teams accelerate their AI literacy and transformation. These include the AI Fundamentals course series, Piloting AI, Scaling AI, industry- and department-specific course series and certificates, and our ongoing Gen AI app reviews and AI Academy Live series.
[00:03:34] Mike led the charge on a new course series, AI for Marketing. I'm gonna turn it over to Mike and let him give us a quick little rundown on that series, which is live now.
[00:03:41] Mike Kaput: Yeah, for sure. Paul, this is one of kind of our flagship courses right now in AI Academy. Outside of the fundamentals and piloting and scaling, we have so many marketers in our audience, and this course is designed to teach 'em kind of A to Z:
[00:03:55] how do you actually get started with AI? How do you actually apply AI to your specific [00:04:00] use cases, as well as find those use cases? And then how do you get started actually selecting and applying technology for those use cases? And the great thing is we've structured the course so that not only is it highly accessible, but it is adaptable to any type of marketing you do, based on the frameworks and tools that we use within it.
[00:04:20] So no matter what type of marketer you are, AI for Marketing is going to teach you exactly how to move beyond just kind of information and a couple little tools here and there to actually systemically, like, transform your work with AI. So I'm super excited about it. We've had a lot of really good early feedback, a lot of people posting on LinkedIn after they finish their certification for it, and I'm just super excited for this one.
[00:04:43] Paul Roetzer: Yeah, and you can go to academy.smarterx.ai and learn all about all the courses. The marketing one's there. We'll put a link directly to AI for Marketing in the show notes as well. Alright, Mike, before we get into, you know, key takeaways and stuff from MAICON and the second topic, let's get into [00:05:00] erotica.
[00:05:00] I don't know how else to say this one. Mental health and AI erotica, it seems to be the hot topic at the moment.
[00:05:09] ChatGPT, AI Relationships, and Mental Health
[00:05:09] Mike Kaput: So, first up, Sam Altman is saying that ChatGPT is going to get a bit more human and a bit more personal. So in a pair of posts this week, OpenAI's CEO said the company plans to relax restrictions in ChatGPT that were initially put in place to prevent mental health harms.
[00:05:28] So the changes will allow users to customize ChatGPT's personality. It'll let you do things like customize how it uses emojis. It can act like a friend, and it's gonna start echoing again what many people loved about the 4o model, should you choose to have that experience. And Altman said OpenAI now has new tools to better mitigate serious mental health issues in the platform.
[00:05:52] And that makes it safer to bring back a more expressive, personal, and human-like experience. So [00:06:00] he previewed some of what they're thinking. There are some new policies arriving in December. They're actually going to age-gate certain things, but also they have this broader principle, and this is where what you mentioned comes in, Paul, to quote, treat adult users like adults.
[00:06:16] That includes allowing verified adults to access more mature content. The specific example Altman gave was erotica, which, from the point that he posted it, drew way more attention than he expected and was just an example of allowing adults to choose how they use the tool. He also clarified after this that mental health safeguards remain in place and that minors will continue to receive heightened protection within the tool.
[00:06:43] But as he put it, OpenAI is aiming to balance freedom and safety without becoming, quote, the moral police of the world. So I guess, Paul, like these posts are not particularly surprising, I guess, from Sam, based on what we've been discussing lately. But it really feels like we're [00:07:00] on a slippery slope here, where, like, OpenAI and the other labs are going to start mediating, for better or for worse, whether they want to or not,
[00:07:08] the types of relationships that literally billions of people can form with AI. Like, was that going through your head as you were reading these?
[00:07:18] Paul Roetzer: This is definitely the direction they've indicated they were going. I mean, this isn't shocking in any way to me, because Sam has continuously said that the future of their AI assistants would be personal.
[00:07:30] And so we're just now kind of heading, I guess, more aggressively in this direction. So I'm gonna read the two tweets, Mike, just so people have the full context of what was said and how it was said. Yeah. So Sam's first tweet was October 14th, at around noon Eastern time. He said: we made ChatGPT pretty restrictive to make sure we were being careful with mental health issues.
[00:07:55] We realized this made it less useful, enjoyable to many users who had no [00:08:00] mental health problems. But given the seriousness of the issue, we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.
[00:08:15] In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people like about 4o, referring to an earlier model. We hope it will be better. If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT
[00:08:35] should do it, parentheses, but only if you want it to, not because we are usage maxing, meaning they're trying to, you know, make the app more addictive and keep you on. He went on to say: in December, as we roll out age gating more fully, and as part of our quote unquote treat adult users like adults principle, we will allow even more, like erotica for [00:09:00] verified adults.
[00:09:00] So that, that was the end of the tweet. Then there was actually one, reply I saw in that thread, and it said, from a user, why do age gates always have to lead to erotica? Like, I just want to be treated like an adult and not a toddler. That doesn't mean I want perv mode activated. And the reason I picked that one is because Sam actually replied to that one and he said, you won't get it unless you ask for it.
[00:09:25] A second user said: about time. ChatGPT used to feel like a person you could actually talk to. Then it turned into a compliance bot. If it can be made fun again without losing the guardrails, that's a huge win. People don't want chaos, just authenticity. And Sam replied to that one. He said, for sure, we want that too.
[00:09:47] Almost all users can use ChatGPT however they'd like without negative effects. For a very small percentage of users in mentally fragile states, there can be serious problems. He then [00:10:00] said, 0.1% of a billion users is still a million people. We needed and will continue to need to learn how to protect those users.
[00:10:09] And then with enhanced tools for that, adults that are not at risk of serious harm, mental health breakdowns, suicide, et cetera, should have a great deal of freedom in how they use ChatGPT. So that was October 14th. He then, about 27 hours later, on October 15th, tweets: okay, this tweet about upcoming changes to ChatGPT blew up on the erotica point much more than I thought it was going to.
[00:10:34] It was meant to be just one example of us allowing more user freedom for adults. Here is an effort to better communicate it. AKA, the communications team is now rewriting this, or GPT-5 is rewriting this, whatever. But here is what he then wrote. As we have said earlier, we are making a decision to prioritize safety over privacy and freedom for teenagers, and we are not loosening any policies related to mental health.
[00:10:59] This is a [00:11:00] new and powerful technology and we believe minors need significant protection. We also care very much about the principle of treating adult users like adults. As AI becomes more important in people's lives, allowing a lot of freedom for people to use AI in the ways that they want, is an important part of our mission.
[00:11:18] It doesn't apply across the board, of course. For example, we will still not allow things that cause harm to others, and we will treat users who are having mental health crises very differently from users who are not. Without being paternalistic, we will attempt to help users achieve their long-term goals, but we are not the elected moral police of the world.
[00:11:40] In the same way that society differentiates other appropriate boundaries, such as R-rated movies, we want to do a similar thing here. So, just a couple thoughts here, Mike. I think the key is, like, there has to be a lot of trust in society and with users that, [00:12:00] technologically, OpenAI is finding ways to solve this, because as a reminder, chatbots are not deterministic systems, not software that just follows rules every time.
[00:12:13] They will at times just do what they want, and they can be led to do things that they're not supposed to do quite easily. So for people who know how to work with these systems, even when they're designed or told not to allow certain conversations, or to behave differently if it's determined that the user might be under mental distress, that does not mean that they're going to do what they're told each time.
[00:12:41] So each lab has to make choices on how their AI assistants are gonna behave out of the box, which is kind of what he's saying here. Imagine basically the system prompt says: if the user is behaving in this way, then don't help them go that direction, like bring them back to a better place.[00:13:00]
[00:13:00] Or if someone that we believe to be a 13-year-old starts talking to it in an inappropriate way, you have to shut down the conversation. That is basically what is happening. They're telling the system to behave a certain way if a condition is met, but those aren't hard rules. And so again, you can make it behave in a certain way, but it doesn't mean it'll always follow those rules.
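To make that concrete, here is a minimal sketch of the pattern being described: behavioral rules expressed as a system prompt that the model is asked, not forced, to follow. The policy wording and model choice below are illustrative assumptions, not OpenAI's actual safety instructions.

```python
# Minimal sketch (not OpenAI's real safety prompt): the "rules" live in a
# system message that the model is instructed, but not guaranteed, to follow.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = """You are a general-purpose assistant.
- If the user appears to be in mental distress, respond with empathy and
  point them toward professional resources rather than continuing down
  that direction.
- If the user appears to be a minor, keep content age-appropriate and end
  any conversation that turns inappropriate.
- Otherwise, match the personality and tone the user has chosen."""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "I've had a rough week and just want to vent."},
    ],
)
print(response.choices[0].message.content)
```

Because the model is probabilistic, a system message like this raises the likelihood of the desired behavior rather than enforcing it, which is exactly the trust problem being described here.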
[00:13:25] And then each lab is gonna decide what that out-of-the-box experience looks like, and then how customizable those behaviors are. So xAI and Meta, for example, will likely push the boundaries of what is acceptable in society further than OpenAI, Anthropic, and Google. Elon Musk, as we've talked about recently, has been very aggressive in the AI avatar erotica space, literally as a feature of Grok.
[00:13:54] Like he's tweeting this stuff all the time about, I don't remember the names of these AI avatars, Mike, but like [00:14:00] you can basically have a relationship with an AI avatar and Elon Musk endorses it. Google will likely be way more conservative on this side, I would imagine. Or at least, I can't imagine Google tweeting that you can do erotica, but like maybe Gemini will enable it.
[00:14:17] That being said, I did just yesterday, when I went into the Gemini app, Mike, and I don't know if you've seen this yet, it popped up and said: you can now personalize your experience using past chats to better understand you and your world. Yep. It's available in 2.5 Pro and coming soon to Live and Flash.
[00:14:33] So I said, all right, cool. Let's see what happens. So I go into the first chat and it pops up and it says, Hey there, great to see you. I was like, that's an interesting way to start a conversation with an AI assistant. It's almost like it wants to have the relationship with you now. And then it said, based on what we've talked about here, a few fun things we could jump into. And mine is pretty boring, but: Browns game day brainstorm, which is weird because I don't remember ever talking to it about the Browns; your next big AI idea, which, [00:15:00] okay, yeah, makes sense;
[00:15:01] and then AI-powered wellness boost, custom workouts, healthy recipes. So I was like, all right, yeah, those are probably things I've talked to Gemini about. So again, keep in mind, we're gonna go this direction. Like, there is no turning back here. All the AI assistants are a hundred percent capable of doing these things,
[00:15:18] these more, we'll just call 'em R-rated, conversations. A hundred percent, they're trained to do those things. The only reason they don't do them outta the box is because the labs have told them not to. Companies like Character.ai, however, will absolutely exploit what is likely a hundred-billion-dollar-plus market, if not more, to build these AI companions and R-rated assistants.
[00:15:45] And this is totally where this is going. Now, Mike, not to put you on the spot here, but you had this conversation with Alex Kantrowitz, who was our, yes, opening keynote on the final day at MAICON. Big Technology Podcast, huge [00:16:00] fans of Alex. He's, by the way, the nicest guy in the world in person. So Alex, if you do listen, like, awesome to meet you.
[00:16:06] But Mike and Alex had this conversation because Alex has actually personally tested these. So Mike, I don't know if you have any context from your conversation with Alex about this.
[00:16:16] Mike Kaput: Yeah, so Alex was raising really good points around the fact that this almost seems in some ways inevitable. You know, we were joking about how deep down the rabbit hole he went with testing these out, but he's testing them for a very real reason, which is that millions and millions of people are using them.
[00:16:34] The user bases for these tools are increasing, and they're a key part to understanding, in his mind I think, where some of the big labs and the big technology companies are going when it comes to, even if it's not romantic or kind of adult, like you mentioned, AI getting to know you and being essentially your best friend, your best assistant, your best coworker.
[00:16:56] Paul Roetzer: If you want it to
[00:16:57] Mike Kaput: be. If you want it to be.
[00:16:58] Paul Roetzer: Yeah. [00:17:00] Yeah. It's, it's gonna be interesting. But again, and we've talked about this on previous episodes, you just have to be ready. Like you may not choose to use these tools in this way, and that, that's fine, that's your choice, but the labs are going to give you that choice.
[00:17:17] And that means your friends, your family, your parents, your grandparents. Like, we are nowhere near ready as a society for people becoming attached to these things. And so, as weird as it is, it is a conversation you have to start preparing yourself to have. I think I maybe mentioned this, but like with my own kids, I haven't directly, I mean, my kids know more about this stuff than the average teenager,
[00:17:45] but we haven't sat down and had like the heart-to-heart conversation about becoming connected with an AI assistant to the point where you're sharing, you know, very personal stuff. I imagine it's probably a conversation I should be [00:18:00] having, honestly. And I think what I've said on a previous episode is, even if it's not my kids, it might be their friends.
[00:18:07] Like, or it might be, you know, someone in their class, and maybe it's like someone who's more introverted and they don't share as much and they don't maybe have someone to talk to. Like, maybe that's the person. And so maybe my kids need to understand this more deeply so that they can talk to other kids, because maybe their parents have no idea how to talk to them about it. I don't know.
[00:18:26] Mike Kaput: Yeah.
[00:18:26] Paul Roetzer: So that's what I'm saying, like, as a society, we're just so unprepared for this, and we know people are already doing this. I think it came up at MAICON and somebody said there have been people who've attended weddings of humans and their AI chatbots. Wow. Like, this is a thing that's already happening.
[00:18:46] I haven't seen that story, but yeah, I mean, someone's saying, like, yeah, this is a thing we're seeing. So it's gonna get weird, and we just have to be ready in some way.
[00:18:58] MAICON 2025 Takeaways
[00:18:58] Mike Kaput: Alright, Paul, our second big [00:19:00] topic this week is something you've already alluded to: MAICON 2025 is a wrap.
[00:19:04] We just finished up last week at our annual Marketing AI Conference, MAICON for short, from October 14th to the 16th here in Cleveland, Ohio at the Huntington Convention Center. We had 1500 marketers and business leaders come together for three incredible days of content, connection, and community. And there was so much incredible ground covered in the keynotes, the breakouts, and the workshops that we wanted to actually take a few minutes in this week's episode and talk through what happened at the event,
[00:19:36] Some of the practical advice that came out of it, and maybe give the listeners a couple actionable takeaways because even if you couldn't attend MAICON, we'd certainly love to see you there. We still wanted to give you some information about the event and provide some value for you. So Paul, I'll turn it over to you to kind of kick us off here with your initial thoughts of the conference.
[00:19:55] Paul Roetzer: Yeah, so I'll provide some commentary just on mine in particular, but I think [00:20:00] just overall, one of the things, Mike, that I took away is just the desire people have for that human connection. So I'm just, I'm very bullish on in-person events, because you can't fake it. And I think that more and more, as AI plays a greater role in our lives, people just want that community feel, that human connection, that ability to be with each other and to inspire each other.
[00:20:23] So that was, like, overall what I took away from last week: just an incredible few days of 1500-plus people coming together, all curious, all with open minds, all willing to help each other. It's so fun for me now running this event six years after, you know, we founded it in 2019 when we created the first one. And back then it was so hard to find use cases and to find speakers who were doing interesting things.
[00:20:49] And now there's just a flood of incredible speakers doing awesome stuff. And like every session is so actionable, and people are giving away like entire workflows. Like, here's how you [00:21:00] do it, here's how you set this up. Like, so generous with what they're learning. And so that was the number one thing for me: the culture.
[00:21:06] The speakers were incredible. Again, we had 10 main stage sessions. That's basically what I saw. I didn't get a chance to go into any of the breakouts. There were 30-some breakouts and demos and lunch labs and all these things and workshops. So just tons of stuff. I mentioned earlier, the attendee stories were amazing.
[00:21:24] Just people from all over the world that were doing incredible things with their own career transformations, business transformations. One thing I noted, Mike, that was really interesting to me is so many people, and I don't know if it's 'cause of the podcast or what, were not marketers. Like, there was a very large percentage.
[00:21:41] I don't know what it is. Like I don't, I don't even know that we asked the right question on registration to get at this. I guess we could look at titles is probably a quick way to do it, but there was a lot of people there who were just there for the AI side, the business side of the event, and that was really cool to see and led to tons of conversations.
[00:21:58] I'll just break down [00:22:00] two quick things, Mike. 'cause you mentioned this idea of like, giving away some of this value without having to have been there. So personally, I ran an AI innovation workshop on the first day and then I did the move 37 moment for knowledge workers. So I'll just share a few takeaways and, you know, kind of tips from those for people.
[00:22:17] So the AI Innovations Workshop: my main premise here is that every business is focused on AI for efficiency and productivity. We want to drive efficiency, we want to reduce cost. In that scenario, we will need fewer humans doing the same amount of work. This is the thing I keep harping on with the economy: unless a company is growing, it won't need as many people.
[00:22:41] So for me to take a human-centered approach, a responsible approach to AI integration into business and into society, we have to grow. It's the only option. So we have to accelerate growth, and that happens through innovation. So my workshop is all about how do we ideate to create significant impact [00:23:00] that accelerates change.
[00:23:01] We talked about optimization, which is using AI to do the same things better, faster, and cheaper. Innovation is using AI to do new things that create new forms of value for customers and the organization. So the, kind of like the one-tweet slide I guess I had was: optimization is 10% thinking, innovation is 10x thinking.
[00:23:22] So I wanted people to consider opportunities with AI to innovate across products, processes, and business models, and not just with where the tech is today, but where it's going. So I said, explore innovation on the frontiers of where AI is going. So two quick tools that I, released during this workshop that are free for people to go check out.
[00:23:43] One is an AI value calculator. This is new. If anyone's taken my Scaling AI courses on AI Academy, I released it there as a worksheet and I gave people like the formulas. We've turned that into an app. So you can go to smarterx.ai/calculator, and [00:24:00] what that does is it allows you to calculate a potential efficiency lift or productivity lift for you or your team
[00:24:08] when you get AI training and AI technology. So it's basically how a mortgage calculator would function, but for value creation with AI. So you can go try that out, play around with it for free. The other thing I introduced during this workshop was InnovationsGPT. So, anyone who's listened to us for a while knows we have a collection of these GPTs we've created.
[00:24:28] There's JobsGPT, CampaignsGPT, ProblemsGPT, and now there's InnovationsGPT. And what it does is it helps people brainstorm innovation ideas, strategizes them, it'll actually write strategic briefs for you based on the idea, and then sample innovation ideas is maybe my favorite function. And you could say, gimme more like this idea,
[00:24:48] or this is my title, this is my company, help me figure these things out. And so that was what the workshop was. I dunno, we had like 220 people in my workshop, and just amazing ideas. So it was a [00:25:00] lot of just working with each other to brainstorm and then sharing those ideas. And then the main other thing I did, besides my ending conversation with Dr.
[00:25:08] Brian Keating, which was incredible, was my opening keynote, the Move 37 Moment. And so the premise here, if you haven't seen the AlphaGo documentary, I would say go watch the AlphaGo documentary. It's free on YouTube. It was the moment where Lee Sedol realizes that AlphaGo, the machine, was better than him at Go, that it had become superhuman.
[00:25:29] And my premise of this opening talk was that we will all have that moment. It may be at individual tasks to start, not whole jobs, but you will increasingly have these moments where you realize that the AI is better than you at the thing you do. Then what do we do from there? So I kind of went through, like, why is this happening now?
[00:25:48] The technology progress; the market opportunity of, you know, replacing software, a three to $500 billion a year industry; replacing wages in the United States, 11 trillion total. [00:26:00] US wages are about 11 trillion, probably about five to six trillion of that is knowledge workers, accountants, lawyers, consultants. And the premise is:
[00:26:07] These software companies, they're gonna go build AI that can do parts of the US economy because the opportunity is massive. And then I got into kinda like, what does it mean and what do we do about it? And the big challenge for me was to land this in like a hopeful, optimistic toward the future way. and so hopefully I did that.
[00:26:25] Like, that was the hardest part, was just kind of figuring out, it was heavy, it was like a heavy way to start an event. But through some excerpts from AlphaGo, and then through just making some connections about the opportunity we have and thinking like human plus AI instead of human versus AI, that was really kind of where I went with it.
[00:26:43] So, yeah, I mean, I could go on about all the main stage sessions. By the way, those will be available on demand. They should be up probably by the end of this week; you'll be able to get those on demand. They're paid on demand, but they will be available through MAICON.ai [00:27:00] if you wanna watch them. They're worth the price of admission.
[00:27:03] I mean, for each individual talk on its own, I would probably pay that fee for access too. So, I don't know. Mike, did you have any other big takeaways from your sessions or from some of the ones you watched?
[00:27:14] Mike Kaput: Yeah, for sure. I think I echo so much your takeaway just about the importance of in-person community overall.
[00:27:22] I mean, just hearing not only from our perspective, but from the attendees' perspective. We've got people that are new friends and collaborators now commenting on each other's LinkedIn about things they learned at MAICON, connections they made. So that was super, super valuable. And I think, yeah, from my AI productivity workshop, just very briefly, the whole idea was to take all these kind of disparate pieces of AI that you're doing and actually orchestrate them into a whole kind of repeatable, useful system to get better results over time and actually have those documented and shared with your team.
[00:27:57] So I won't go into all the details of that, but [00:28:00] really it was all about, like, actually learning how to repeatably create really solid prompts, be really diligent in documenting those, building out the context needed to accelerate those prompts, and then really building AI infrastructure and using AI tools around different prompts and use cases so that you have this, like, always-on, always-ready-to-go playbook for any type of use case or workflow you have, to get more out of AI in how you're using it for the stuff you're already doing, but also for the innovative stuff you're gonna come up with going forward.
[00:28:33] Paul Roetzer: Yeah, I, I'm, I'm gonna have to go through all the show notes. We used AI to, to do summaries of every session, so all 47 sessions or whatever. so there's AI summary notes of everything, and then all the, you know, the presentations. And I'm, I'm definitely gonna go back and rewatch a couple of those main stage sessions.
[00:28:48] 'cause I wasn't getting a chance to take too many notes. For sure. I'm kind of running around backstage; I'm MCing the whole thing. So I wasn't able to, like, really attend. I was more looking at a big picture of how everything was going. But [00:29:00] yeah, so again, we don't, you know, we don't spend too much time talking about MAICON, but just an incredible event.
[00:29:06] Again, the idea is to really think about the opportunity to do in-person stuff, whatever your company does, whatever your role is. Like, there's just nothing that replaces the human connection that comes from stuff like this. So we're definitely doing more. We'll have some, you know, probably, I don't know, maybe by the end of this year, we're diversifying our event portfolio.
[00:29:27] We're not trying to reproduce MAICON in other markets per se. There's gonna be a bunch of exciting things we're gonna be doing around in-person events and in different geographic markets. So stay tuned. You know, the high you get from that once-a-year big event is a hard thing to replace.
[00:29:45] But I'd like to have micro doses of that throughout the year and do more stuff and get people together more often. I think good things happen when you get really good people together. For sure.
[00:29:57] AI’s Increasing Impact on Labor and Jobs
[00:29:57] Mike Kaput: Alright, our third big topic this week we're tracking [00:30:00] another handful of stories related to how AI is reshaping work.
[00:30:04] So first, CNN reported that blue collar workers like plumbers and electricians are increasingly turning to tools like ChatGPT to draft estimates, customer emails, even troubleshoot complex jobs. So they kind of highlight how ChatGPT and other tools started out as white collar assistants, but are now becoming kind of universal coworkers
[00:30:27] that are actually helping blue collar workers bridge skill gaps and save time on site. And they cited all these great customer stories; we'll link in the show notes to how some of these firms are doing that. Meanwhile, Bloomberg has reported that Klarna's CEO is once again talking up the impact of AI, warning of an AI jobs shock and saying most companies are unprepared for the wave of disruption that is coming.
[00:30:54] And last, Matthew Call, a researcher writing in the Wall Street Journal, notes pretty [00:31:00] extensively how his research shows that AI is already widening the divide between top performers who can harness it and everyone else who can't. And his thesis, which is stated right in the article, is, quote, workplace tensions and resentment will rise if top performers benefit more than everyone else from AI tools.
[00:31:21] Now, Paul, that last piece of the jobs picture this week kind of stood out to me. It seems like we've got here a few examples of how AI is widening the gap, maybe the economic gap, the skills gap between companies and individuals who master it and those who don't. What do you think?
[00:31:39] Paul Roetzer: So the, I'll come back to that one, Mike, because I'd like to talk about that one for a minute.
[00:31:43] The consumer services stuff, I think, is massive. So this idea of building smarter back offices for plumbers, electricians, yeah, whatever, any contractor. Just from personal experience, like anybody who owns a home and has to [00:32:00] deal with contractors, it's so hard to find a great contractor who's great at the labor, who also runs a great business and is very customer friendly in their operations and technology.
[00:32:14] And so the idea that you could take these very talented laborers, who can do the work, and enable them to have smarter back offices, the companies that figure that out are gonna do extremely well. And there are a lot of those kinds of verticals where you're going to have a few companies that solve it, and then everybody else is gonna, you know, have a very difficult time.
[00:32:40] So I'm very bullish on that. I think the companies that do that are gonna be a big play. I would imagine private equity may drive that, because that's one of those things where it's like a standard private equity playbook: go in, buy up a bunch of companies in one vertical, and then just apply smarter tech and unlock 20, 30, 40% [00:33:00] gains in efficiency
[00:33:02] and, you know, 20% in margin. So that's really interesting to me. The Klarna thing: so their CEO says things in ways that I wouldn't necessarily say them, but I think we're on a very similar page with this stuff. So there was a quote in here, he said, I feel a lot of my tech bros are being slightly not to the point on this topic.
[00:33:26] I think there's a massive shift coming to knowledge work, and it's not just in banking, it's in society at large. So this was when he was being interviewed on Bloomberg. He said, society will have to figure out what we are going to do because new jobs will be created. Yes. But in the short term, that doesn't help the Brussels translator.
[00:33:43] He's not going to become a YouTube influencer tomorrow, as a pretty specific example. But you get the point. It's like all these jobs that are coming, like they're not coming tomorrow. And the people who aren't needed right now are gonna have a tough go. Then he did talk about the fact they went from [00:34:00] 7,400 down to 3,000, and maybe they overcorrected in one direction and now they're sort of bringing more people back into customer service and stuff.
[00:34:08] But at the same time, he is not changing the overall tune, which is: these jobs are going away way faster than they're coming back. Now, on the last one, Mike, that you talked about, this idea of, like, superstars benefiting more from AI. This is a very relevant topic, because on the first day of MAICON last week, we actually had a meeting of our Marketing AI Industry Council.
[00:34:29] So there's about 30 or so people on this council; we had about 20 in person, and the entire focus was on the impact of AI on talent. And this is actually one of the debates we were having. We sort of looked at nine specific, like, core questions about where we're going with AI's impact on talent.
[00:34:46] By the way, we're gonna be publishing a report later this year with, like, key insights from that council. So we're bringing that to light. But overall, this is the debate: if we give AI to, like, let's assume this perfect [00:35:00] rollout scenario where we go get Copilot or ChatGPT or Gemini, whatever it is, we implement it into our system, we give it to everyone,
[00:35:09] we train them on how to properly use it, we give them personalized use cases, like we do all the things we should do to properly roll out AI technology. Who benefits most? The A players, who apply this to have superpowers and can now do two, three x the work they were doing before, maybe more in some cases? Or the B and C players, who, let's just say, are like the average employee and maybe they kind of adopt it, but maybe not fully?
[00:35:39] What happens? The A players are just like, I'll just do their job. Like, if they're not gonna get this done, I can do their job for them tomorrow. So, like, let's say you're in marketing and someone in customer success isn't doing the thing you needed them to do. You know, you needed something from them,
[00:35:57] maybe it's a report or a briefing or [00:36:00] something, and they're just, like, dragging their feet and not getting it done. And you're like, screw it, I'm just gonna do it tonight. And you go in and you write the report with ChatGPT in 20 minutes that probably was gonna take them two or three hours, and then you just email it to 'em, like, Hey, here you go.
[00:36:12] It's gonna create so much friction. There's always been friction between the A players, the superstars, and the non-A players. But now we're talking about dramatic increases, because if you have a generalist A player who knows they can dabble in sales and operations and HR, like, enough to be dangerous, and now all of a sudden they can do it on demand because they can just call up ChatGPT to help 'em,
[00:36:38] it is gonna create some serious conflicts and challenges. And this research is saying these A players are actually the ones who are gonna benefit most. There was some study that Ethan Mollick was a part of in like 2023, yeah, I think, with GPT-4, where they're like, no, it's actually gonna level it up, and the average people are gonna increase like 40% in their capabilities while the A players only [00:37:00] increase 14%.
[00:37:01] And I don't dispute that. Like I guess the average person, if they actually apply AI fully, could level up pretty significantly. But I think the greater likely outcome is that average employees are gonna remain average. They're not gonna work harder than they need to work. They'll definitely use the tools, but probably as shortcuts and a crutch not as a thinking partner to become better.
[00:37:27] Like, if they wanted to be better, they'd have read more books and worked harder to begin with. I don't know. So the B players will be B-plus players, probably. And I think that's the scenario: like, yeah, they're gonna get incrementally better, but this is a human condition thing. You're being given tools; are you going to use them to become great at your job?
[00:37:48] And the reality is there's just a whole bunch of people who don't care to be great at their job. so I think that's, I don't know, it's like a really interesting scenario, but I could see a lot of the people who are [00:38:00] taking full advantage of this, they're gonna be very valuable within the companies, but they're also gonna get very frustrated with the people who aren't taking advantage of the tools.
[00:38:10] Mike Kaput: Yeah. And the advantages that the A players are gaining are also compounding over time. This stuff doesn't stop, right? I wonder how quickly, even in, let's say, a B or C player that is adopting the technology fully, like, are they going to be pressing the limits of it, innovating enough to keep up with that widening gap?
[00:38:29] Paul Roetzer: Yeah. And as a, like, as a CEO, I can tell you like point blank, I spent the whole weekend thinking about this, not because of this article. I was thinking about like our hiring plans and how, like, do I hire within these specific departments? And I think I talked about this a couple episodes ago, but like, it's still in my brain, a lot.
[00:38:49] Or do I just go hire, like, people with 15 years experience, 10 years experience, who are super, like, excellent critical thinkers, very curious, strong, like, [00:39:00] imagination, like, good at innovation? And do I just give 'em the tools and say, Hey, we're just gonna go solve it all? Like, I'm gonna get five or 10 of these people and we're gonna go solve customer success and sales and operations and HR.
[00:39:12] Like, we don't have to necessarily hire a bunch of specialists in each department. Let's just go get some generalists, give them the tools, then let's just go, like, first principles, let's just build a smarter company from the ground up. And honestly, I'm starting to feel like that's what I've leaned toward doing, mm-hmm,
[00:39:32] which is: don't go hire a bunch of specialists, just hire a bunch of generalists who are super motivated to figure out what these tools are really capable of, and then give 'em the autonomy to go do it. I don't know. Like, that's what we did at the agency; they were just consultants, like, we could do anything.
[00:39:48] And I think that that might be the play, but I don't know. I need to think about this more. This is all like Saturday, Sunday thinking, like, you know, on a jog or at the gym. Like I'm just, my mind [00:40:00] is wandering. Like I couldn't take the whole weekend off mentally as much as I tried.
[00:40:05] Google Veo 3.1
[00:40:05] Mike Kaput: Alright, let's dive into some rapid fire for this week.
[00:40:08] First up, Google has announced Veo 3.1, which is the latest version of their stunning video generation model. Now, according to Google, Veo 3.1 delivers enhanced realism, better prompt adherence, and richer native audio and dialogue. This powerful update introduces a suite of advanced creative controls, including the ability to guide generation with reference images for character and style consistency, extend video clips to create longer scenes, and generate seamless transitions by providing a first and last frame.
[00:40:42] These new capabilities, complete with generated audio, offer you unprecedented control to bring your most ambitious creative visions to life. So Veo 3.1 is now available via the Gemini API for developers, it's in Vertex AI for enterprise customers, [00:41:00] and in the Gemini app. So Paul, Veo 3, hard to remember, was not released that long ago, just May of this year.
[00:41:07] It took the world by storm. It's pretty incredible to see some significant updates to it. And I don't know, I mean, we had, what, PJ Ace at MAICON talking about how he used Veo with the Kalshi ad for the NBA Finals to generate this kind of viral, incredible ad. I mean, it really feels like we're entering a golden age of video generation.
[00:41:29] Paul Roetzer: The AI video, PJ Ace's keynote was unbelievable. Like, I have given many keynotes in my life. I have attended many keynotes in my life. That's like top three. It was one of the better talks I've ever seen. Mm-hmm. And he literally just went through how he does this. I mean, he gave us the prompts, he gave us the how he works with Veo to make it happen, like a ChatGPT-plus-Veo thing.
[00:41:53] So if you aren't aware of how good AI video is getting, it is getting [00:42:00] extremely good. You don't have to see PJ's keynote to understand it, but if you were there, you know what I'm talking about. Like, it was mind blowing. We'll put his Twitter handle in the show notes; he does tweet threads with the stuff he shared on stage.
[00:42:15] So you could just go, for free, look at his Twitter profile and go look at some of the videos that they've put together. But now what's happening is major brands are coming to people like PJ and his team to produce ads for like $75,000 that they used to pay 3 million for. So not only is the tech becoming real and easy to use for the average person who just wants to play around with AI video, it's transforming things like the advertising industry basically overnight.
[00:42:43] Mm-hmm. And now all the big shops are having to figure out what do we do, 'cause now they're getting disrupted by people like PJ. So yeah, this video stuff is wild. And I think, if I'm not mistaken, didn't OpenAI announce a Sora update, like, the same day? Just basically, I believe they did. Yeah. Yeah.
[00:42:58] Trying to steal the thunder [00:43:00] from Google. So, yeah, AI video is the real deal. It's here now in bites, like 10, 15 seconds. Requires some, you know, pretty decent human in the loop, but it's moving so fast.
[00:43:16] Mike Kaput: Yeah. Talk about the top performers in that field. They are going to rush ahead if they start embracing these kinds of tools.
[00:43:22] Definitely. Alright, next up,
[00:43:25] Claude Haiku 4.5 Is Released
[00:43:25] Mike Kaput: Anthropic has released Claude Haiku 4.5, and this marks another big step forward in the race to faster, cheaper high performance ai. This was announced on October 15th, and this model delivers near Frontier coding performance. Importantly at one third the cost and more. More than double the speed of Claude Sonnet four.
[00:43:47] It even surpasses that model in specific areas like using computers and running multi-agent workflows. So KU 4.5 is designed for real-time low latency use, which means [00:44:00] things like chatbots, customer support, pair programming, where responsiveness is critical. On the safety side, Anthropic says Haiku 4.5 is its most aligned model yet they also highlighted how Haiku 4.5 and the also brand new Sonet 4.5, which just came out a few weeks ago, how they can work together.
[00:44:20] They gave an example of how sonet, the smarter, bigger model can orchestrate complex reasoning and multiple instances of KU 4.5 could execute subtasks based on that reasoning in parallel. So I guess Paul, what jumped out to me here is this comes hot on the heels of sonnet 4.5 that was released just a couple weeks ago.
[00:44:41] A few weeks ago. And we covered back then this idea that the labs are starting to develop these much smaller, cheaper models that perform almost as well as the bigger models over time, that gets us closer to this idea of like intelligence being everywhere and eventually it being too cheap [00:45:00] to meter.
[00:45:00] This just seemed like a pretty stunning example of that.
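As a rough sketch of that orchestrator-and-workers pattern (using the Anthropic Python SDK; the model IDs, prompts, and task are assumptions for illustration, not Anthropic's documented recipe), the planning call goes to the bigger model and the subtasks fan out to Haiku in parallel:

```python
# Rough sketch of the pattern described above: a larger model plans, and
# cheaper Haiku instances execute the plan in parallel. Model IDs are assumed.
from concurrent.futures import ThreadPoolExecutor

import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

def ask(model: str, prompt: str) -> str:
    msg = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

# 1) The bigger model breaks a job into independent subtasks, one per line.
plan = ask(
    "claude-sonnet-4-5",  # assumed alias for Sonnet 4.5
    "Break 'audit our blog for outdated AI claims' into three short, "
    "independent subtasks, one per line.",
)
subtasks = [line.strip() for line in plan.splitlines() if line.strip()]

# 2) Multiple Haiku instances execute the subtasks in parallel.
with ThreadPoolExecutor(max_workers=len(subtasks) or 1) as pool:
    results = list(pool.map(lambda t: ask("claude-haiku-4-5", t), subtasks))

for task, result in zip(subtasks, results):
    print(f"- {task}\n{result}\n")
```

The economics Mike describes are the point of the split: the expensive model is called once for the plan, while the cheap, fast model absorbs the bulk of the token volume.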
[00:45:03] Paul Roetzer: The trend line definitely continues to go in that direction: you know, every six to nine months, the smaller version of the model is basically on par with what was the biggest frontier model nine months earlier. And then the other thing we know is gonna happen, 'cause it's already happening, is, like, Google and Meta will probably do this, xAI will do this:
[00:45:22] they're gonna take what is today's frontier model and open source it 12 months later. So yes, all these capabilities are just gonna keep moving so fast. The one thing, Mike, that I know I want solved, I think I've mentioned this on the show before, is when I am using voice mode while driving.
[00:45:40] Mm-hmm. The fact that it always drops because you're in dead zones on the cell signal. Like, maybe there's a technical solution I should have asked Chris Penn about while we were at MAICON this week, but I just want a model running on my phone that doesn't have to go out to the internet, and thereby doesn't need a [00:46:00] wifi or cell connection to be working.
[00:46:02] If Apple could do that tomorrow, I would use whatever that is on my iPhone all the time. Yep. 90% of the time, the reason I choose not to use a chat assistant while I'm driving is because I get so annoyed when it drops all the time. So,
[00:46:20] Mike Kaput: yeah,
[00:46:21] Paul Roetzer: I couldn't agree more with that. This will enable it. Like, these smaller models that can run on device are what's gonna make it possible.
[00:46:28] Mike Kaput: Some other Anthropic news.
[00:46:30] Anthropic Co-Founder Essay Angers White House
[00:46:30] Mike Kaput: This one is not so positive. Anthropic is facing some criticism for its role in shaping AI regulation. This comes after White House AI and crypto czar David Sacks accused the company, in a post on X, of driving a sophisticated regulatory capture strategy that is built on fear.
[00:46:51] So in this post, Sacks claimed Anthropic was principally responsible for the state regulatory frenzy that is damaging the [00:47:00] startup ecosystem, in his words. Now, the reason he's posting this is because he was commenting on a post published by Anthropic co-founder Jack Clark, and in this post he basically published a transcript of a recent talk he gave at the Curve conference in California.
[00:47:17] And in that talk, he describes his own conflicted feelings about AI's rapid progress and the serious dangers we find ourselves facing in AI development. He even at one point suggests in this talk that today's frontier AI systems are what he calls, quote, real and mysterious creatures, not simple and predictable machines.
[00:47:41] And basically he concludes that everyone in society, not just those in Washington or Silicon Valley, needs to be asking more questions about the technology and demanding more of those who are building it. So Sacks clearly does not agree. He basically sees this type of commentary as [00:48:00] Anthropic trying to coordinate, you know, regulatory capture, like owning the levers of power that are regulating AI so they can kind of pull the ladder back up after them and establish market position.
[00:48:12] So there's a ton going on in this one, Paul. I would say Jack Clark's comments about AI alone are really interesting and blunt. And then it sounds like, let me know if you agree or disagree, the Trump administration now has a target on Anthropic's back.
[00:48:28] Paul Roetzer: Yeah. So I mentioned last month, I think on the podcast that Anthropic was not making any friends in the White House.
[00:48:34] They were very clearly taking an approach that was not going to be in favor with the current administration. Mm-hmm. And so this kind of stuff is inevitable. With David Sacks, assume whatever he tweets is the official position of the White House. So if Sacks is saying it on Twitter, there's a pretty good chance these conversations are already happening within the White House and they're very unhappy.[00:49:00]
[00:49:00] Now, why would anyone care if they're unhappy? Well, most of these labs are going to need government contracts. Like, they're going to need the favor of the government to do things. And if the government, which maybe is vindictive at times, decides they don't like somebody, it usually doesn't end well right now in America.
[00:49:19] And so if they decide that Anthropic is a pain, then there's all kinds of ways they can make things very difficult for them. Anthropic appears to be holding the line, though; they are not backing away from the things that would cause this conflict. So just to give the context, I will read a few excerpts here from Jack Clark's essay, which was from, as you mentioned, remarks he gave at the Curve conference recently.
[00:49:46] So he said, what we are dealing with is a real and mysterious creature, not a simple and predictable machine. Only by acknowledging it as being real and by mastering our own fears, do we even have a chance to [00:50:00] understand it, make peace with it, and figure out a way to tame it and live together. I came to this view reluctantly.
[00:50:07] I joined OpenAI, which is where he was before Anthropic, soon after it was founded, and watched us experiment with throwing larger and larger amounts of computation at problems. GPT-1 and GPT-2 happened. I remember walking around OpenAI's office in the Mission District with Dario Amodei, who is the co-founder of Anthropic.
[00:50:28] We felt like we were seeing around a corner others didn't know was there. The path to transformative AI systems was laid out ahead of us, and we were a little frightened. Years passed, the scaling laws delivered on their promise, and here we are. And through these years, there have been so many times when I've called up Dario early in the morning or late at night and said, I am worried that you continue to be right.
[00:50:51] Referring to Dario and the scaling laws. Yes, he will say there's very little time now. The proof keeps coming. We launched [00:51:00] Sonnet 4.5 last month and it's excellent at coding and longtime horizon AGI agentic work. But if you read the system card, which if you're new to this, the system card is sort of like the specs of how the models work.
you can also see that its signs of situational awareness have jumped. The tool seems to sometimes be acting as though it is aware that it is a tool. That's a really interesting line. We are growing extremely powerful systems that we do not fully understand. Each time we grow a larger system, we run tests on it.
[00:51:35] The tests show the system is much more capable at things which are economically useful. And the bigger and more complicated you make these systems, the more they seem to display awareness that they are things. Meaning it's becoming self-aware is what he's saying without just directly saying self-aware.
[00:51:53] My own experience, as he said, is that as these systems get smarter and smarter, they develop more and more [00:52:00] complicated goals. When these goals aren't absolutely aligned with both our preferences and the right context, the AI systems will behave strangely. I feel that our best shot at getting this right is to go and tell far more people beyond these venues, meaning where he was giving the talk, what we're worried about, ask them how they feel, and then listen and compose some policy solution to it.
[00:52:23] So that's what got Sacks pissed off. It's just this continued pushing that we don't understand these machines and we need to do more. And honestly, it's hard to argue with a lot of the way Anthropic is approaching this. They're the only ones that seem to actually be cautious now.
People are throwing the effective altruism thing at them. Like, oh, you guys are just effective altruists, you're just trying to stop progress. And I don't get that feeling from Dario. Like, I don't feel like Anthropic is just sitting back and trying to thwart progress.
[00:52:59] [00:53:00] I think they truly are concerned, and it's going to continue to cause major friction between them and the administration. And like I said, if I was an investor in Anthropic, I would feel very uneasy right now that they are going to get themselves in some serious trouble with the administration.
[00:53:20] And as I said, there's lots of ways that could play out that don't end well for Anthropic. I don't know where this goes, honestly, but they don't appear to be giving in and changing their tune, and I think that's gonna make some very interesting storylines in the next six months.
[00:53:37] Mike Kaput: Yeah. And we'll include in the show notes an article that came up in this research, titled Anthropic Gets Ready to Go Startup Shopping, from The Information. Basically there's some reports that they're about to potentially go on a spending spree 'cause they have not acquired that many startups.
[00:53:53] That is going to get very rocky, I would imagine, if they make powerful enemies, yeah, within the government.
Paul Roetzer: [00:54:00] Yeah. And I still don't know that there isn't a decent chance they just get acquired at some point. Mm-hmm. Like at some point, it may just come to the realization that they're not gonna have the support they need to bring this powerful AI, as they call it, to life safely without some other resource.
[00:54:17] I don't know, like they're gonna be a very interesting company to watch.
[00:54:22] OpenAI’s Opt-Out for Sora 2 Causes Problems
[00:54:22] Mike Kaput: Next up, OpenAI has announced it has paused the use of Martin Luther King Jr.'s likeness in its Sora 2 video generator after users created what the company called, quote, disrespectful depictions of the civil rights leader. They actually issued a joint statement with the King estate saying that they made the decision at the request of King's daughter as it strengthens safeguards around how historical figures are portrayed.
[00:54:49] The company emphasized that while there are strong free speech interests in depicting public figures, their families and estate representatives, quote, should ultimately have control over how their [00:55:00] likeness is used. Now, this is obviously not the only instance of this happening. As Sora has surged in popularity, you can create cameos of people.
[00:55:10] There are a lot of unauthorized recreations with copyrighted characters, but also with other deceased individuals. Now, OpenAI says that estates and public figures' representatives can now request that their likenesses be restricted. So Paul, I think when you dropped this topic in our sandbox for this week, you said something to the effect of, OpenAI's opt-out policy here with Sora 2 is just a ridiculous approach.
[00:55:36] Paul Roetzer: Honestly, it just feels like a game of whack-a-mole. Like I don't understand how this scales. This is what Meta tried to do with their news feed that got 'em in all kinds of trouble with the current administration. Right. And so eventually they kind of just moved away from this stuff.
[00:55:53] I don't understand how you do this. Basically, if you have enough money or can make enough [00:56:00] noise, you can probably get stuff restricted, meaning a system prompt that says don't produce images or videos of Dr. King. That's basically what they're doing. It's just patchwork, telling the system, don't do it.
[00:56:13] And then someone could easily hack around it. I don't know. Again, maybe they're just way smarter than me and they have way more insight into how this works and how you operationalize this as a company. I can't fathom how a trust and safety team maintains a system like this where it's just processing requests.
[00:56:35] Mike Kaput: Also, it's like, this comes out October 16th, I think, and my first to-do on my list if I ran one of these estates or trusts is, like, Monday morning we're filing to have our person removed. There's gonna be thousands of these starting right now, won't there?
[00:56:52] Paul Roetzer: I would imagine. But again, what is the criteria that determines my request [00:57:00] gets seen by anybody?
[00:57:01] Like,
[00:57:01] Mike Kaput: wow, that's true.
[00:57:02] Paul Roetzer: Who is this random person that has submitted this request? Do you have to be a celebrity, or a certain level of celebrity, or have achieved some certain level of public awareness? I don't know. That's what I'm saying. It's just a totally subjective system that is gonna be fraught with holes and frustrations.
[00:57:26] And I can't imagine OpenAI actually wanting to staff humans to do this. You're gonna be submitting a request to an AI agent and it's gonna come back like, no, sorry, your request is denied. And now what do I do, talk to another AI agent? Like, gimme a break. It just seems like a mess waiting to happen.
[00:57:44] Elon Musk Clarifies His Definition of AGI
[00:57:44] Mike Kaput: Our next topic is Elon Musk has publicly clarified a little bit his definition of AGI, or artificial general intelligence. So in a post on X, Musk said that AGI is probably three to five years away and defined it as a system [00:58:00] capable of doing anything a human with a computer can do, but not smarter than all humans and computers combined.
[00:58:07] He added that the next version of his company's chatbot, Grok 5, will be better at AI engineering than, specifically he called out, Tesla's former director of AI, Andrej Karpathy. He thinks pretty clearly that xAI's models are rapidly closing the gap with top human researchers. Now all of this was in response to commentary around a previous post of his, where he said, my estimate of the probability of Grok 5 achieving AGI is now at 10% and rising.
[00:58:37] Now Paul, this is notable because every AI lab, every AI leader seems to have their own definition of AGI. I think Musk hasn't shied away from making AGI predictions, but also hasn't really defined what he thinks AGI is. So this gives us at least a little clarity, I guess, what do you make of this definition and also his timeline?[00:59:00]
[00:59:00] Paul Roetzer: It's a way more realistic definition. I was just looking it up, 'cause I quoted his definition previously in my road to AGI timeline. Yeah. His previous definition was AI that is smarter than the smartest human, which he said would be here by next year. So the first thing you have to know with Musk is this:
[00:59:18] He commonly makes these timeline statements and it's really hard to put any credence in the actual timelines, and then he'll just move the goalposts. So this is one where, very specifically, he has been on the record saying what he thought AGI was, with a very specific definition and a timeline it was gonna happen, and that is now very different than the one he's giving.
[00:59:39] All that being said, that's cool. I don't mind when people update priors, that's good. He is now looking at it like, okay, a better definition is capable of doing anything a human with a computer can do, but not smarter than all humans and computers combined, which would be superintelligence, roughly.
Yep. So prior, his definition of [01:00:00] AGI or general intelligence was what most people would consider to be superintelligence. He's now saying three to five years away for the human-with-a-computer definition. That's about what I would think, that actually checks really well with how I would roughly define it.
[01:00:16] Mine is more of like an average human, so I tend to qualify it with: it just needs to be better than the other person we'd hire. So yeah, it is interesting. But then he's saying Grok 5 has a 10% chance of achieving it. Well, Grok 5 is coming this year. Yeah. We know Grok 5 will come this year.
[01:00:35] We are now getting word that GPT-6 might actually come out this year, and we know Gemini 3 is gonna come out this year. And I would imagine maybe Anthropic will drop Claude 5. Like there's a very decent chance of that. And Meta, who's been very quiet lately, I would not be surprised at all if we get the next version of frontier models from every lab before the end of this year.
It's gonna be a very busy [01:01:00] November, December. And I think there's an increasing chance that one or more of the labs will claim their next version is AGI. Like it is trending in that direction. Wow. Yeah.
[01:01:12] New Paper Says AI Method Can Reproduce Human Purchase Intent
[01:01:12] Mike Kaput: All right. Next up, a new study from PyMC Labs and Colgate-Palmolive, the consumer goods company, suggests that large language models can mimic real consumer behavior with remarkable accuracy.
[01:01:25] So in this study, researchers introduced a method called Semantic Similarity Rating, or SSR, which is basically a technique that asks AI models to write short free-text reactions to products. Then it converts those statements into a traditional, kind of one-to-five purchase intent score using embedding similarity.
[01:01:46] So they tested this approach across 57 personal care product surveys with over 9,000 human responses, and found that this method ranked the products nearly as reliably as real humans, [01:02:00] based on some industry-standard statistical measures they used to gauge how well it was doing the job they wanted it to do.
[01:02:07] So basically what this means is that when GPT-4o, Gemini 2, and later models (those are the ones they tested) are conditioned on demographic personas, they not only replicate average consumer ratings, but also reproduce demographic trends. For example, lower purchase intent among younger and lower-income respondents.
[01:02:30] All of this means, the authors say, that this could transform market research. So basically you could replace early-stage human surveys with scalable, low-cost synthetic consumers that still provide human-like ratings and richer, more detailed qualitative feedback. So Paul, this is just one study, but kind of interesting considering who it's coming from.
[01:02:54] Definitely interesting for marketers and business leaders. I mean, I don't know if we can draw any conclusions about [01:03:00] how widespread this will become, but market research is extremely time-consuming and expensive. And I already know people who are spinning up AI personas to understand their audiences better. This seems like it could end up being, at some point, a pretty big deal.
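For readers who want to see the mechanics, here is a minimal sketch of the embedding-similarity step the SSR method relies on: a free-text reaction gets compared against anchor statements for each point on the 1-to-5 purchase-intent scale. This is an illustration, not the paper's exact implementation; the anchor wording, the embedding model name, and the weighted-average scoring below are all assumptions.

```python
# A minimal sketch of the Semantic Similarity Rating (SSR) idea described above.
# Anchor statements, model name, and scoring scheme are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model works here

# One anchor statement per point on the 1-5 purchase-intent scale (wording is illustrative).
anchors = [
    "I would definitely not buy this product.",  # 1
    "I probably would not buy this product.",    # 2
    "I might or might not buy this product.",    # 3
    "I would probably buy this product.",        # 4
    "I would definitely buy this product.",      # 5
]

def ssr_score(free_text: str) -> float:
    """Map a free-text reaction to an approximate 1-5 score via similarity to the anchors."""
    text_emb = model.encode(free_text, convert_to_tensor=True)
    anchor_embs = model.encode(anchors, convert_to_tensor=True)
    sims = util.cos_sim(text_emb, anchor_embs)[0]   # similarity to each of the five anchors
    weights = sims - sims.min()                     # shift so the lowest similarity is zero
    weights = weights / (weights.sum() + 1e-9)      # normalize into a distribution over anchors
    return float(sum(w * (i + 1) for i, w in enumerate(weights)))  # expected rating, 1-5

print(ssr_score("The scent sounds nice, but I already have a brand I trust."))
```

In the study itself, the free-text reactions come from persona-conditioned language models; the sketch above only shows the scoring step in isolation.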
[01:03:15] Paul Roetzer: I think it's probably a bigger deal now than most people are aware. In my AI innovation workshop, personas came up probably five times as something people were doing or thinking about doing. I have talked with consumer companies who are running simulated worlds, basically, for their consumers.
[01:03:31] So imagine, you know, a million simulated agents that you can basically test campaigns against, test creative ideas against, things like that. And, I mean, if this is a really abstract thing to you, just simplify this down to: if you could create a single persona for a specific buyer or consumer of your products and goods.
[01:03:51] And imagine being able to pick up a phone and talk to them and ask them questions, with their behaviors and traits and preferences all built in. [01:04:00] That's basically what happens. Now do that a million times, just 'cause it's infinite, we could create as many of these as we want, and then you run entire simulations of your customer base.
[01:04:09] And so if you know enough about your customers, you just build a simulated model. So this is happening, like, within, yeah, car companies as an example, stuff like that. But you can play around with this, that's the beauty of AI. You can go into ChatGPT and say, hey, help me build a persona. Here's the basics about my company.
[01:04:27] Like, I want to play around with this idea of doing market research. It'll do it with you. So for your 20 bucks a month, you can actually do these kinds of things right now. You don't need full simulated worlds like the big brands are doing. But this is absolutely a direction that, you know, marketing and business is going.
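To make the "build a persona and talk to it" idea concrete, here is a hedged sketch using the OpenAI Python SDK. The persona description, the product question, and the model name are illustrative assumptions; a real setup would ground the persona in actual customer data and loop over many persona variants to approximate the simulated populations described above.

```python
# A minimal sketch of persona-conditioned prompting: one synthetic consumer you can
# "pick up the phone" and ask questions. All persona and product details are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical persona; in practice you would build this from real customer data.
persona = (
    "You are a 34-year-old suburban parent who shops for personal-care products monthly, "
    "is price-sensitive, and prefers unscented products. Answer as this person would."
)

question = (
    "A new lavender-scented body wash launches at $8.99. "
    "Would you buy it? Answer in two or three sentences."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model; swap for whatever you have access to
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```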
[01:04:46] Music Industry Leaders Join with Spotify to Create Artist-First AI
[01:04:46] Mike Kaput: Next up, Spotify has announced a major alliance with the world's biggest record labels to shape the future of AI in music. So they've announced they're partnering with Sony Music, Universal Music, Warner Music Group, and a couple of other representatives [01:05:00] of independent labels to develop what they call artist-first generative AI products.
[01:05:05] Basically, tools that are designed to empower musicians rather than compete with them. So this is coming amidst all this tension we've talked about a little bit, where the music industry is seeing, you know, unauthorized AI-generated song creators and deepfake vocals, and having a hard time regulating all of the use of copyrighted material to train AI that then goes and generates those outputs.
[01:05:28] So Spotify says this collaboration, and the tools they create, aims to protect creativity while enabling innovation. And it's grounded in four principles: upfront licensing partnerships, artist choice in participation, fair compensation, and preserving human artistry.
[01:05:48] They're going to get a little more specific moving forward about what products and features they're building, but it is interesting, and probably welcome to a lot in the music industry, that they're actually taking an approach [01:06:00] where they've admittedly said, hey, copyright matters. We're not trying to get rid of it.
[01:06:03] Let's work together to create a responsible AI future.
[01:06:07] Paul Roetzer: I hope we see more like this. I think it's becoming more possible to do this with these smaller training sets that are actually licensed data. So, you know, I hope this is more of an early trend that we start to see emerging, where other industries do similar things.
[01:06:25] You know, just a side note, Mike. That Anthropic case we talked about, where they had to settle and pay like $3,000 per instance or something, two of my books are in that training set. I went in to check it, and you have to fill out all this paperwork, but I assume that means I am eligible for $6,000 in compensation from Anthropic for my two books.
[01:06:49] Ironically, like, our artificial intelligence book that we co-authored is not in the training set. My Marketing Agency Blueprint, interestingly, and Marketing Performance [01:07:00] Blueprint are. Maybe they're like, hey, we won't steal anything with AI in it, because maybe those authors know what we're doing and they'll come after us.
[01:07:06] I don't know. But yeah, I did check, and my first two books are in the training set.
[01:07:11] Mike Kaput: Oh, no kidding. No.
[01:07:12] Mike Kaput: One important little addendum here is that on this post from Charlie Hellman, who's the head of music at Spotify, where he teed this up, Ed Newton-Rex, who we've talked about a ton on here,
[01:07:24] replied and said, this sounds like a really positive step. Can you confirm that Spotify won't train models on any copyrighted music without a license? That feels important, but isn't quite stated in the release. And Charlie responded and said, correct, our whole first principle in the blog post is talking about taking a properly licensed approach. To which Ed said, that's great to hear.
[01:07:44] Thank you. So just some validation, at least right now that he seems to be on board with the general direction of the approach, but we'll see. Yep. All right.
[01:08:00] New AI Feature in Google Sheets
[01:08:00] Mike Kaput: Last up, Google is officially bringing Gemini AI into Google Sheets. It's introducing an AI function that lets users generate, summarize, and analyze data directly in spreadsheets.
[01:08:06] This is available to users on eligible Google Workspace or Gemini plans, and basically it allows Sheets to act more like a data analysis assistant. So you can use this to generate text based on spreadsheet data, summarize information, and categorize or classify content. You can actually sort things like feedback by sentiment or tag messages by topic.
[01:08:29] You can analyze sentiment across text entries and identify positive, neutral, or negative tone, and access real-time information from Google Search. So formulas could pull in up-to-date facts like population data or author birth dates, for instance. And you can also insert these kinds of AI columns that auto-generate results across entire tables and refresh them as the data changes.
[01:08:53] So this on the surface is just kind of one feature in Google Sheets, but Paul, I mean, we've been seeing what [01:09:00] magic can happen when they actually bake the fully powerful version of Gemini into these types of tools.
[01:09:06] Paul Roetzer: Personal anecdote here. So when I was doing the prep for my MAICON keynote, I pulled Bureau of Labor Statistics data off of the government website, which it was updated through May of 25.
[01:09:18] I was trying to get total wages in the US, right? So that stat I mentioned earlier, 11 trillion. The way I arrived at that was, you can download the data set and it'll break it up by occupation with the total number of employees. What it doesn't do is give you the total wages for those employees. So taking the median wage times the number of employees will give you total wages.
[01:09:38] And so in Google Sheets, not knowing, maybe I had put this in the sandbox and hadn't even looked at it myself, but I just clicked on Gemini and I was like, ah, let me see if it's actually doing anything in here these days. And I said, first, find all occupations related to marketing.
[01:09:54] And it did. It found like seven different rows out of 800 rows that were relevant to [01:10:00] marketing. And then on a whim, I was like, let me see if it can actually do anything. So I said, can you add a column that calculates total annual wages by occupation? And it went away for about four seconds and said, I've added a new column named total annual wages to your sheet.
[01:10:13] This column calculates the estimated total annual wages for each occupation by multiplying, and it told me what it multiplied. And sure enough, I go to the sheet and it did it. I was like, oh my God, it's actually useful. And then there were a couple of data points where I was not a hundred percent sure if I was reading them correctly.
[01:10:29] So I just talked to Gemini. I was like, hey, am I interpreting this correctly? And it confirmed it. Yes, it is a small thing, Mike, but this is the stuff that could change the way people work. When the AI is embedded into Word or Sheets or Docs or PowerPoint or whatever, and it is actually super functional and does the thing you ask it to do, this is where you, as someone who understands AI, knows what it's capable of, and knows what questions to ask of [01:11:00] the AI,
[01:11:01] this is where you start to get a massive difference between people who are AI-enabled and people who are not. Because a lot of people live in Word docs and sheets all day long, and if AI can help them do their jobs way more efficiently, it can skyrocket productivity.
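For anyone who wants to reproduce the back-of-the-envelope math Paul describes outside of Sheets, here is a minimal pandas sketch. The file name and column headers are illustrative assumptions; the actual BLS download uses its own naming.

```python
# A rough sketch of the calculation described above: total annual wages per occupation
# ≈ median annual wage × number employed, then filtered to marketing-related rows.
import pandas as pd

# Hypothetical export of the BLS occupational employment and wage table.
df = pd.read_csv("bls_occupations.csv")

# Keep occupations whose titles mention marketing (roughly what Gemini was asked to find).
marketing = df[df["occupation_title"].str.contains("marketing", case=False, na=False)]

# Add the estimated total annual wages column, mirroring what Gemini added to the sheet.
marketing = marketing.assign(
    total_annual_wages=marketing["median_annual_wage"] * marketing["total_employment"]
)

print(marketing[["occupation_title", "total_annual_wages"]])
print("Estimated total for marketing occupations:", marketing["total_annual_wages"].sum())
```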
[01:11:17] Mike Kaput: Yeah. And if what Paul just said interests you, go check out the link with the Google announcement in the show notes.
[01:11:24] It's super simple, it's just kind of a here's-all-about-this-feature page, but they provide all these examples of things you can do. So for something like summarizing stuff in a cell, you're not just telling the AI, summarize this for me. You're actually calling AI to include a prompt to use on the data, so you can say, like, you're the owner of a pet-sitting business.
[01:11:44] Write a two-sentence summary for the customer about their pet's last stay, be a little funny, across a spreadsheet of thousands of pieces of data, for instance. So there's a lot of really cool little examples here that I think communicate what's possible.
[01:11:57] Paul Roetzer: And we are planning to do a gen [01:12:00] AI app review of this feature in AI Academy soon, so stay tuned on that.
[01:12:06] Mike Kaput: All right, Paul, the first post-MAICON podcast. Technically the one that comes out Thursday is post-MAICON, but not when we recorded it. So it's good to be back.
[01:12:16] Paul Roetzer: Yep. So reminder again, if you're listening to this in the October 21st, 22nd range, there's another one dropping on Thursday.
[01:12:23] So two episodes this week, and then we will be back with the regular weekly schedule next week. So, yeah, and again, just thanks to everyone, the 1,500-plus of you that were at MAICON with us in Cleveland last week. Incredible experience I will never forget. That was a great week and it wouldn't have been possible without all of you.
[01:12:40] So thanks, thanks for that. And thanks to everyone who, you know, listens to the podcast and wasn't able to be with us. Hopefully next year we can get you out there. We did announce, by the way, October 13th to the 15th, 2026, we'll be back in Cleveland. So if you missed it or want to join us again next year, tickets are already on sale.
[01:12:55] MAICON.ai, you can go check that out. Alright, Mike, [01:13:00] I'll see you in the office this week. Yeah, thanks Paul. Sounds good. Later. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.ai to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the Marketing AI Institute Slack community.
[01:13:29] Until next time, stay curious and explore ai.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.