OpenAI has released GPT-5.1, a controversial new AI app is bringing people back from the dead, and there's a big debate in AI about very different belief systems.
On this week's episode, Paul and Mike go deeper on those topics and other top news this week, including political backlash to AI, the first AI-orchestrated cyberattack, an AI-generated song topping the charts, and much more.
Listen or watch below, and keep scrolling for show notes and the transcript.
This Week's AI Pulse
Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI.
If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
00:03:59 — AI Pulse
00:06:41 — GPT-5.1
- GPT-5.1: A smarter, more conversational ChatGPT - OpenAI
- Sam Altman Post on GPT-5.1
- Moving beyond one-size-fits-all - Fidji Simo Substack
- GPT-5.1 Prompting Guide - OpenAI Cookbook
00:14:51 — Controversial New AI Product Brings Back the Dead
00:22:09 — Beliefs vs. Fundamental Truths
00:39:39 — Increasingly Negative Public Moods Towards AI
- X Post from Matt Walsh on AI and Jobs
- X Post from Matt Walsh on AI and Political Landscape
- X Post from Tim Miller in Support of Walsh Position
- X Post from Jon Favreau in Support of Walsh Position
- X Post from BG2Pod on the Issue
00:46:36 — First Reported AI-Orchestrated Cyberattack
- Disrupting the first reported AI-orchestrated cyber espionage campaign - Anthropic
- X Post from Chris Murphy on the Attack
- Why Anthropic CEO Dario Amodei spends so much time warning of AI's potential dangers - CBS News
00:51:33 — AI-Generated Country Song Tops Billboard Charts
- The No. 1 Country Song in America Is AI-Generated - Newsweek
- An AI-Generated Country Song Is Topping A Billboard Chart, And That Should Infuriate Us All - Whiskey Riff
- AI slop tops Billboard and Spotify charts as synthetic music spreads - The Guardian
- ChatGPT violated copyright law by ‘learning’ from song lyrics, German court rules - The Guardian
00:58:43 — Cursor Raises $2.3 Billion, Valued at $29.3 Billion
- The AI Coding Startup Favored by Tech CEOs Is Now Worth $29.3 Billion - The Wall Street Journal
- Past, Present, and Future - Cursor
- X Post from Ethan Mollick on Cursor/AI Coding Impact
01:01:13 — Parallel Raises $100 Million to Build Web for Agents
- Ex-Twitter CEO Agrawal's AI search startup Parallel raises $100 million - Reuters
- Parallel raises $100M Series A to build web infrastructure for agents - Parallel AI
- Parallel: Building the infrastructure for AI - Kleiner Perkins
01:03:34 — Yann LeCun Leaving Meta
- Meta chief AI scientist Yann LeCun plans to exit and launch own start-up - Financial Times
- Ep. 164 of The Artificial Intelligence Show
- He’s Been Right About AI for 40 Years. Now He Thinks Everyone Is Wrong. - The Wall Street Journal
- X Post from Yann LeCun
- ‘Imagine a Cube Floating in the Air’: The New AI Dream Allegedly Driving Yann LeCun Away from Meta - Gizmodo
01:07:47 — NotebookLM Adds Deep Research
- NotebookLM adds Deep Research and support for more source types - Google Blog
- X Post from Steven Johnson on New Features
01:10:24 — McKinsey State of AI Report
This episode is brought to you by AI Academy by SmarterX.
Accelerating AI literacy and transformation isn’t optional. It’s a career and business imperative. AI Academy by SmarterX and our AI Mastery Membership help professionals and organizations thrive through the disruption and uncertainty ahead. Use code POD100 to save $100 on an Individual AI Mastery Membership here.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: So the problem comes in when people have so much conviction about their beliefs that they mistake them for fundamental truths. So just because you believe something so strongly, related to AI or its impact on society, or whether AI avatars are good or bad, doesn't actually change whether it is or isn't.
[00:00:17] It's just what you believe. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX Chief Content Officer, Mike Kaput,
[00:00:40] As we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all.
[00:00:56] Welcome to episode 180 of the Artificial Intelligence Show. I'm your host, Paul [00:01:00] Roetzer, along with my co-host Mike Kaput. We are recording Monday, November 17th, about 11:00 AM Eastern time, which may be very relevant this week, as there are increasing rumors that we are getting Gemini 3 from Google, potentially this week.
[00:01:14] Sundar Pichai actually tweeted the, what is that emoji where you're rubbing your chin, like you're pondering something? Yeah, the pondering emoji, I'll call it. On Polymarket, it was like, what are the odds that Google releases Gemini 3? And he actually retweeted that with the emoji, which is highly out of character for Sundar, or whoever controls Sundar's Twitter account.
[00:01:37] But yeah, there's a decent chance we're gonna get a new model. It might be why we got the new model from OpenAI last week, sort of a way to get out ahead and steal a little bit of thunder. But we will see. It could happen while we're on this podcast, I don't know. Alright, so this episode is brought to us by AI Academy by SmarterX.
[00:01:56] we've talked a lot about this lately, but we had a huge [00:02:00] launch event last week. This has been, really the focus of my professional life for about the last 12 months and intensely since the summer, creating these courses, working with the team to build the new learning management system. And so we announced the new courses in August of this year.
[00:02:16] But on Thursday of last week, we actually launched the new learning management system, which we have been working intensely on behind the scenes for a long time. And so it was amazing to get out into the wild. We actually held an event last Tuesday for our business account team leads to give them an exclusive preview, and then we made the new learning management system available for everyone starting Thursday.
[00:02:38] So if you have been waiting to get into AI Academy, start taking advantage of all those courses, gen AI app reviews, AI Academy live sessions, professional certificates. There is no reason to wait any longer. The new LMS is ready. It is live right now for all of our existing members and customers, and anyone who joins moving forward is gonna get access to that system immediately.
[00:02:59] It's more [00:03:00] personalized. It's AI driven. It's a more engaging experience. It's got gamification, with badges for different things you earn in addition to the certificates. It has an incredible upgrade in features for our business account leads, so they can now do user and course management, personalized experiences and learning journeys, gamification, reporting capabilities. So everything is now baked into it.
[00:03:23] Again, if you haven't been a member of AI Academy yet, now would be the best time to get in there and start taking advantage of everything. So you can go to academy.smarterx.ai and check that out. And for a limited time, we're gonna do a POD100 promo code for individual Mastery memberships.
[00:03:43] And if you just buy individual course series as well, that POD100 will get you a hundred dollars off. Business accounts are already discounted dramatically, so you can go in there and learn about business plans and business pricing. So again, it's academy.smarterx.ai.
[00:03:59] AI Pulse
[00:03:59] Paul Roetzer: Alright, Mike, we're gonna dive into the AI pulse.
[00:04:02] So if you're new to the podcast, AI Pulse is a new weekly informal poll, survey that we're doing, of our audience. And so last week we asked, do you believe the concentration of power in a few major AI labs is a significant problem? Again, this is an informal poll. This is not projectable data. It's not a big enough sample size to say this is what the whole, you know, universe of people think.
[00:04:25] But from our audience, it's taking a look at what they're thinking. So, do you believe concentration of power in a few major AI labs is a significant problem? 49% say yes, but it seems unavoidable for this level of innovation. 23% say yes, it's a major risk to society and the economy. 15.7% say no, competition between the labs is sufficient.
[00:04:49] And then 5.9%, about 6%, say this concentration is necessary to advance AI safety. Another 6% say I'm not sure. We then asked: what should be the primary focus for advanced AI development [00:05:00] right now? 78.4% say a balanced approach, which is, I guess, kind of what we preach on the podcast, Mike: developing safety and capabilities in parallel.
[00:05:09] The next closest would be 13.7%, prioritizing safety, containment, and human control. So this week we've shortened the URL to make it easier for people to get to. Go to smarterx.ai/pulse and that will take you right to this week's survey. This week's survey has two basic questions.
[00:05:30] How do you feel about new AI apps that create interactive digital avatars of deceased loved ones? We are going to explain that concept as one of the topics today. Definitely a good one; this is why we have these AI Pulses. So again, that's: how do you feel about new AI apps that create interactive digital avatars of deceased loved ones?
[00:05:49] And then the second one, another topic we're gonna talk about today: with AI-generated music now topping the music charts, do you feel AI-created work holds the [00:06:00] same creative value as human-made work? It will be interesting to hear what people have to say. Very curious about that. Alright, again, if you wanna participate in the weekly survey, it is smarterx.ai/pulse, and that will take you right to the page.
[00:06:15] It takes about 30 seconds to get in there and answer. And again, this is not a lead gen thing for us. We don't collect email addresses. This is purely for information's sake, to get a sense of where our audience stands on key topics related to AI that we talk about each week. We are gonna kick it off with the new model we got last week.
[00:06:34] It was sort of underplayed by OpenAI, but we do have a new model, so let's talk about GPT-5.1, Mike.
[00:06:41] GPT-5.1
[00:06:41] Mike Kaput: Alright, Paul. So yeah, OpenAI has released GPT-5.1, an upgrade to its flagship model series, and this is designed to make ChatGPT both smarter and more conversational. So this update focuses on user feedback that AI should be more enjoyable to talk to.
[00:06:59] And as a [00:07:00] result, OpenAI is essentially introducing two upgraded models. The first is GPT-5.1 Instant, which is kind of the most used version in the model router. It's now described as warmer and better at following instructions. It's also gaining something called adaptive reasoning, which allows it to think before responding to challenging questions.
[00:07:21] The second model, GPT-5.1 Thinking, is the advanced reasoning version. It now adapts its speed, responding faster to simple queries while spending more time on complex ones. OpenAI says its responses are also clearer, with less jargon and a more empathetic tone. Alongside these new models, OpenAI is adding new tone presets, things like a professional voice, a candid personality, a quirky one, to make customization easier.
[00:07:50] The GPT-5.1 update is rolling out now, starting with paid users, and will be available in the API as well. Now, Paul, I'm curious what jumped out to you [00:08:00] most about this release. I made notes to myself here about their big emphasis on the personality, the tone, how people interact with these tools. We've talked about that before, but it really struck me how much time they spent talking about that in this release.
[00:08:14] Paul Roetzer: That definitely jumped out. The timing, as I mentioned in the open, jumped out to me too. It was just unusual timing. I'm trying to think if they've done a .1 before. Usually there's like a .5, right? Yeah, a .5. Three to 3.5. So it's just an unusual numbering sequence. My understanding is it's the same core model.
[00:08:33] They've just done some tuning to it, specifically in coding and math and reasoning and personality. So they're basically just building on the existing model. It's not like they retrained a full-blown model and then released it. So yeah, I think the timing's interesting, just because I assume that means something else is coming from the other labs and they wanted to get something out ahead of it.
[00:08:57] I've played around with it a little bit. Again, for the use cases I was using [00:09:00] it for the last few days, not a noticeable change for me. I did note that GPT-5 Pro, which, I know you and I both pay for the upgraded license, we have access to Pro, is still at the GPT-5 level. But they did say they will update GPT-5 Pro to 5.1 Pro soon as well.
[00:09:21] The personality thing, we definitely have talked about that in recent months. Sam Altman in particular has been very direct that that is one of the things they see as being essential for future AI assistants: that people can better control them. Do you mess with that stuff at all, Mike? Do you play around with the personality settings, the tone at all?
Have you tried 'em?
[00:09:38] Mike Kaput: I test it, but I don't stick with any of it, because it just creates such variance in the use case. Yeah. I rely more, I would say, on my prompts for it to put on a certain persona or tone. Okay. Yeah.
[00:09:51] Paul Roetzer: I'm probably the same way. I think I went in when they first started doing it and I played around a little bit. But when I go in there, I have very specific things I'm trying to do, and [00:10:00] I'm trying to do them very efficiently.
[00:10:01] Yep. And I'm not just messing around with, okay, let me give it the same prompt five different ways and try all these personalities. But I also don't talk to it like a companion, so I guess I don't really use it in the sense where I'm trying to elicit this very specific, ongoing personality from it.
[00:10:17] I just want it to answer questions. I will say I had a weird conversation with ChatGPT last week, though, Mike, using voice mode. I don't know if I talked about this on the podcast or not, but it will vary from a female voice, and then it sort of blends into a male voice, and then it'll come back around to the female voice.
[00:10:34] Mike Kaput: Really.
[00:10:35] Paul Roetzer: And I actually called it out, 'cause it's kind of eerie. It sort of messes with you when you're hearing it. And I actually said to it, why is your voice moving from feminine to masculine? Mm-hmm. And it's like, oh, sorry, it's just part of the process, and it gave an explanation.
[00:10:51] I was like, yeah, okay, whatever. But I noticed it multiple times, to the point where I finally said something to the voice agent, like, why is it changing? [00:11:00] It's really weird. It's unnerving, honestly. I stopped using it when it was happening. That's really strange. So then, beyond the presets, they did say: for users who want more granular control over how ChatGPT responds, we're also experimenting with the ability to tune characteristics directly from personalization settings, including how concise, warm, or scannable its responses are, and how frequently it uses emojis. ChatGPT can also proactively offer to update these preferences during conversations when it notices you asking for a certain tone or style, without requiring you to navigate into the settings.
[00:11:34] Yeah. So they're just trying to get way more predictive about how people interact with these things. The only other couple notes I had were that they seemed to underplay the improvement of this model, and Sam Altman actually responded to somebody's tweet about this and said it was kind of intentional, because they found that they get crushed when they make a big release and then people are like, oh, it's not as significant a change.
[00:11:59] So now they're basically [00:12:00] just lowballing everybody. Put this out there, and then 24 hours, 48 hours go by, and people are like, wait a second, this is actually a pretty good improvement over 5. So I thought it was interesting that they took a different approach with this. But again, it's a .1 model; it's not supposed to be a massive increase.
[00:12:18] And then the other thing I know you and I both looked at was this prompting guide. Now, it's meant more for developers. We'll put the link in the show notes, but they did release a prompting guide, maybe more thorough than the system card. They got some blowback online about how the system card maybe wasn't as sophisticated as some of the past ones.
[00:12:39] But the prompting guide gives a little bit of guidance. Again, it's more for reference's sake if you're not a developer, just to see how they guide developers to program the personality, how to make it more steerable. And they actually give sample prompts. They start off with one where they say: you value clarity, momentum, and respect, measured by [00:13:00] usefulness rather than pleasantries.
[00:13:01] Your default instinct is to keep conversations crisp and purpose-driven, trimming anything that doesn't move the work forward. You're not cold, you're simply economy-minded with language. And then it talks about adaptive politeness and a core inclination where you speak with grounded directness. So again, if you don't understand the fundamentals of these models, and that they generally behave how you tell them to behave, sometimes looking at these very descriptive system prompts that OpenAI, Anthropic, or others guide people on helps you realize how much goes into the personality of these things with just your direction.
[00:13:40] And that's, again, weird for people who maybe haven't studied these models before.
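For readers who want to see what this looks like in practice, here is a minimal sketch of the kind of personality steering the prompting guide describes, using OpenAI's Python SDK. The persona text is adapted from the sample Paul reads above; the model identifier and exact wording are assumptions, so check OpenAI's current documentation before relying on them.

```python
# Minimal sketch: steering a model's personality with a descriptive system
# prompt, in the spirit of the GPT-5.1 prompting guide discussed above.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable. "gpt-5.1" is an assumed identifier; use a model
# your account actually offers.
from openai import OpenAI

client = OpenAI()

# Persona adapted from the sample system prompt quoted in the episode:
# direct, economical with language, measured by usefulness.
PERSONA = (
    "You value clarity, momentum, and respect, measured by usefulness "
    "rather than pleasantries. Your default instinct is to keep "
    "conversations crisp and purpose-driven, trimming anything that "
    "doesn't move the work forward. You're not cold; you're simply "
    "economy-minded with language."
)

response = client.chat.completions.create(
    model="gpt-5.1",  # assumption; verify against the API docs
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": "Draft a two-sentence status update on our AI pilot."},
    ],
)

print(response.choices[0].message.content)
```

Swapping in a different persona string is all it takes to change the tone of every response, which is the point Paul makes about how much of the "personality" is just direction.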
[00:13:45] Mike Kaput: Yeah, it's fascinating to consider how much of a, call it a competitive advantage or moat, a model has based on its personality, if any. Because it really does seem, from the way they've positioned this and the way they're talking about it, that it's [00:14:00] a huge concern for many, many users, which is really interesting to think about.
[00:14:04] Paul Roetzer: Yeah. So again, go check it out. Try it. What Mike and I always talk about is: have your standard use cases, and then, rather than relying on benchmarks or system cards from the model companies, have the thing you do each time. Go into the new model and do that thing again. So have those go-to prompts for yourself, where when a new model comes out, it's like,
[00:14:26] let me test my standard use case or workflow that I normally go through, and pop it in there and see how you feel about it. The focus here seems to be writing, personality, and coding. So again, if you're not writing code, you're not gonna notice much there. But the thinking one, you know, its chain of thought,
[00:14:40] its ability to do human-like reasoning, those seem to be the areas. So I've been pushing it a little bit on some of these deeper thought things, just trying to see how it works there.
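One lightweight way to follow that advice is a small script that replays the same go-to prompts against each model and saves the outputs side by side. This is a hypothetical sketch, not a tool from OpenAI or SmarterX; the model identifiers and prompts are placeholders to swap for your own.

```python
# Minimal sketch: replay your go-to prompts against old and new models so
# you can compare outputs side by side. Model names below are placeholders.
import json
from openai import OpenAI

client = OpenAI()

GO_TO_PROMPTS = [
    "Rewrite this paragraph for a professional newsletter: ...",
    "Outline a 30-minute workshop on AI literacy for marketers.",
]

MODELS = ["gpt-5", "gpt-5.1"]  # assumed identifiers; use what your account offers

results = {}
for model in MODELS:
    results[model] = [
        client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        for prompt in GO_TO_PROMPTS
    ]

# Save everything so you can eyeball model-to-model differences later.
with open("model_comparison.json", "w", encoding="utf-8") as f:
    json.dump(results, f, indent=2)
```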
[00:14:51] Controversial New AI Product Brings Back the Dead
[00:14:51] Mike Kaput: Alright, our second topic this week is a big one. There's a new AI app that is drawing some intense comparisons to the [00:15:00] dystopian sci-fi show Black Mirror for its ability to create interactive digital avatars of deceased family members.
[00:15:08] This is an LA-based startup called 2wai, which I believe is pronounced "two-way"; it's spelled with the number 2, then W-A-I. They have launched this app with a viral promotional video, and this video shows a pregnant woman speaking to an AI recreation of her late mother. It then jumps forward, showing the AI grandma reading a bedtime story to the baby, and then later talking with the child both as a young boy and as a young man about to have his own baby.
[00:15:37] So this video, which as of today has over 40 million views on X, has sparked a pretty serious backlash. Many users labeled this technology with some colorful terms; nightmare fuel and demonic were among them. Critics argue that this app crosses emotional boundaries and basically risks distorting the grieving process, [00:16:00] replacing real loss with artificial comfort.
[00:16:03] So 2wai, which has released the app as a beta, positions itself, however, as a platform for legacy, saying it is building a living archive of humanity. Paul, this is a rough one to dive into on a Monday morning. I'm just gonna read something you wrote about this on X and kind of let you take it from here. You said: back around 2016, while sitting in the audience at a tech conference session in Austin, I had a realization this sort of product was inevitable.
[00:16:30] Digital immortality. Loved ones would never learn to, or need to, let go. Society wasn't ready then, and it isn't now. I assume you feel that even more watching this video. It's wild.
[00:16:43] Paul Roetzer: Yeah. So when I had that moment, it was actually a panel discussion around AR and VR, and AI was like a secondary component of it.
[00:16:56] But, you know, I'd been studying AI for four years at that [00:17:00] point, had just started the AI Institute. So it wasn't like I had no concept of AI or where it was going. I'd spent a good portion of my life the prior four or five years thinking about AI. And so when I was watching this panel talk about, specifically, more the AR/VR side of things and its impact on kids, and their ability to just go in and interact with these digital beings, you could project out:
[00:17:28] well, once AI has memory, and once you can do language, you move into these avatar-like things, which I was already contemplating at that time. You just started to realize, like, wow, no one has thought about this. No one's thought this through as to whether or not we should actually do this as a society.
[00:17:50] And, you know, I think the biggest thing for me was they were talking about VR headsets and when children [00:18:00] actually have real memories versus things they saw maybe on TV, where they think back and it feels like it actually happened, but maybe it didn't. And they didn't really have good data then.
[00:18:13] And I'm not sure how much progress they've made since, honestly. We just don't know how to distinguish between what actually happened in our life and what is sort of just our mind's way of remembering something. And so my fear was... I guess the parallel to this is, we represented a funeral home.
So, when I owned my agency, Mike, you'll remember this, but
yeah,
for like, I don't know, eight or nine years, one of our biggest clients was a funeral home. And so we spent an abnormal amount of time thinking about death and the death industry. And I was working on these two things in parallel, artificial intelligence and sort of the future of business and humanity,
and my [00:19:00] day job was running an agency, and one of the things we did was work in the death industry. And so I remember having a conversation with the leaders of that company back in like 2017, 2018, where I said, hey, listen, here's what's gonna happen in your industry: there's gonna be a day where people walk into a funeral and the deceased person will be there in a virtual form, and you'll be able to talk to them.
[00:19:23] And I was not, by any means, at that moment telling them, build this. I was just saying, someone is going to build this. This is an inevitability, in my opinion. And so from that day on, I just sort of started fearing the moment when this would become economically viable and technologically possible.
[00:19:45] And again, I sort of lived with the assumption this would happen as soon as we could do it. And there were efforts made through the years. I remember a Wired magazine article, probably around 2018, 2019, where the [00:20:00] guy interviewed his dying father. Yeah. And all these stories.
[00:20:04] And then he took all of that, so you could basically have this chatbot, before ChatGPT, where you could have this chat interaction with the grandfather, and then the kids could know their grandfather. And so you would just see these early efforts, before the tech really was there, where people were trying to kind of force this thing
[00:20:22] into being. And then it just really started crossing over into, like you said, this Black Mirror, sci-fi-becomes-reality moment. And again, I'm not taking a position here of, I think this is horrible, or I think it's wonderful. In the next topic, I'll talk a little bit more about these beliefs versus truths. I think my point here is that we all have to be prepared for the fact that this tech is here.
There [00:21:00] will be a market for it, and psychologically, society is not prepared to not have to grieve, to not go through these processes. And it's just a really weird thing to think about, honestly. And it's a topic I really struggle with.
[00:21:18] Mike Kaput: Yeah. I mean, that's one of the fascinating but also scary things about AI that I've always thought: what it enables fundamentally rewrites our relationships with each other and with what it means to be human. Which can be very exciting, but also very, very murky.
[00:21:38] Paul Roetzer: Yeah. I think like, I think there's just parts of AI that I sort of wish we didn't have to deal with.
[00:21:47] Mike Kaput: Yeah.
[00:21:47] Paul Roetzer: And I would say this is one of 'em. I'm a realist. I understand it's gonna happen and people are gonna build companies around it, but I would be okay if that was [00:22:00] not the case.
[00:22:00] Mike Kaput: Yeah.
[00:22:01] Paul Roetzer: Yeah.
[00:22:02] Mike Kaput: I hear you. So maybe that's a good segue into our third and final big topic here,
[00:22:09] Beliefs vs. Fundamental Truths
[00:22:13] Mike Kaput: which, you know, had kind of actually been spurred by just a random conversation, Paul, that you had mentioned to me this week. Mm-hmm. Basically, sorting out what we believe versus what the actual fundamental truths are, both in life and in AI as a whole.
[00:22:25] We'll also talk a little bit about some rapid-fire topics where people are starting to have some very strong opinions on certain fundamental things. So maybe you could kind of give us some sense of what you were thinking about this week.
[00:22:36] Paul Roetzer: Yeah, so again, this episode wasn't designed to be this like deep philosophical episode per se.
[00:22:42] I honestly like hesitated, even as of this morning, I was like, man, I don't know if I wanna do this topic right now. Like, but then I've, I've kind of learned over time that sometimes the topics I'm not sure I want to talk about end up being the ones that are most impactful to people. So, yeah, so I guess I'll share a little bit inside of like what [00:23:00] goes on in my brain sometimes.
[00:23:01] So, so last week I was driving home and I went through this like very random thought experiment where I was trying to contemplate like, what is something that everyone can agree on? And so literally in my my mind I was envisioning like this straight line. And on the left are the statements where if you said it, 100% of humans would agree that it was true or false.
[00:23:28] So like universal agreement on something. Mm-hmm. And so then as I'm driving, I'm thinking like, is there anything that would be 100% that would like live solely on the far left side of that line? And then as you move from left to right in your mind, you start to think about topics where people's, beliefs just start to diverge.
[00:23:50] And it might be narrow at first. Like, okay, 97% of people would agree this is true and 3% would not. And those people's opinions and beliefs would [00:24:00] start to differ, and right and wrong start to become subjective. So, again, I'm not actually a hundred percent sure what triggered this.
[00:24:09] It was 6:30 in the morning. I was driving back from the gym, so this is only like a seven-minute drive. This is not like I was on some road trip when this happened. So on the seven-minute drive home, this is on November 12th, I'm thinking about this. And after the fact, I came in and I was pondering this.
[00:24:25] And I saw Mike and Jess on our team and I was like, hey, this is a totally random idea, I have no idea what it means. And Mike's like, no, it's actually kind of fascinating. So I think it's a bit of a combination, Mike. Like, we started doing these AI Pulse surveys in part because I was trying to get at, what do other people believe?
[00:24:43] What do they think about the different topics? This AI boomers versus AI doomers thing is really starting to weigh on me, these sides where people are increasingly taking extreme positions, and they're doing it with high confidence [00:25:00] that they're right and the other side is wrong. Mm-hmm. And so I always get frustrated with that.
[00:25:05] Like, I always have a hard time talking to people who are so set in their beliefs that they can't actually have a logical conversation. I love talking to people who believe different things than me. I wanna know why they believe those things, and what led them there, what experiences, what insights.
[00:25:26] Because I may shift my beliefs based on that. I think it's good to have this open dialogue. So I have difficulty when people can't find a middle ground to have these reasonable conversations. So I saw a post last week, and this was sort of maybe festering in my mind going into the middle of last week, where the BG2 pod, who we talk about all the time, like I love that podcast,
[00:25:50] Brad Gerstner and Bill Gurley are really two of my favorite podcasters. They retweeted a post from David Sacks, who we've talked about as sort of the [00:26:00] head of AI for the government at the moment, like the lead AI advisor to the administration. And Sacks had tweeted: AI optimism, defined as seeing AI products and services as more beneficial than harmful,
[00:26:12] is at 83% in China, but only 39% in the US. This is what these EA, what is it, effective altruism? Yeah. These EA billionaires bought with their propaganda money. It's like, okay, that's a pretty divisive tweet, but like, that's fine. So then the BG2 pod retweets this and says: as discussed by Brad and David Sacks on the All-In pod, AI is unpopular despite its ability to accelerate economic growth, improve health and education.
[00:26:44] Doomers have scared people, time to push back. So it's like, okay, now we're giving names to each other, which helps with creating this division between us. So my tweet was: what is the official term for someone who is neither an AI doomer nor an AI boomer, someone who sees the enormous potential [00:27:00] of AI to do good and create abundance, but is also realistic and cautious about the current and potential negative impacts?
[00:27:05] To me, that was a pretty middle-of-the-road tweet, not trying to stir things up. I'm just literally saying, what do you call that person, that doesn't have to be an extremist? And I actually had somebody reply to that as though it was some extremist point of view to be in the middle. Like, of course. What is going on with people?
[00:27:23] So I think that was in my mind. And then last week we definitely saw lots of politicians jumping into this debate, an increasing amount. And so I had tweeted: the conversation and sense of urgency is starting to shift at the political level and with influencers. This is starting to feel like a pivotal moment for public sentiment about AI.
[00:27:48] Security and jobs are themes that could gain traction very quickly leading into the 2026 midterms. So as I've said recently on the show quite a bit, what happens in politics is they look for the wedge. They look [00:28:00] for the issue that can cause enough friction that they can create enough momentum behind it to move votes.
[00:28:09] And so my tweet was actually in response to Senator Chris Murphy of Connecticut, who was sharing the Anthropic research on the espionage that we're gonna talk about in a minute. So his tweet, again, a Senator from Connecticut: guys, wake the F up. This is going to destroy us sooner than we think if we don't make AI regulation a national priority tomorrow.
Mm-hmm.
[00:28:33] So all this is going on. And then on top of that, and this is a little bit more personal, but I have a 13-year-old daughter who constantly challenges my thinking with deep questions, often about science and religion, that I never thought to ask at that age. Honestly, things I didn't even think about till I was in my thirties.
[00:28:53] And so this is not in any way meant to be a religious discussion, but like I was raised Catholic, so for like [00:29:00] 12 years we go to Catholic school, we are taught this is the way the world is. And I don't recall as a kid there being much room in those days to question things like you were just told. And so my knowledge, my, my fundamental truths about the world came from the fact that that's what I was taught for 12 years.
[00:29:17] And then the final year of high school, to their credit, you take a Religions of the World class, and in that class you learn about all these other belief systems, all these other religions, all these other gods that you'd never been taught about for 12 years. And so you realize, hold on a second:
there's billions of people in the world who don't believe what I believe. And so that kind of led, Mike, to the moment where I was explaining this to you and Jess. All this is kind of running through my head, and I'm thinking about beliefs versus truths. And then I start to connect: hold on a second.
[00:29:51] This actually has a ton to do with the state of AI today and the kinds of things we talk about on the podcast. So with all that being said, I'll just kind of walk through [00:30:00] the basics of this related to AI, because I think it actually matters for the other topics we're gonna cover today. So I will preface this by saying I am not a philosopher.
[00:30:10] I have not studied this for a living. We may have listeners who are philosophy majors who spend their lives contemplating like reality versus belief systems and things like that. So everything I'm about to say is totally from a personal perspective and observations. So a belief is something we think is true.
[00:30:29] People can disagree about it. It can be true or false. It can be strongly or weakly held. It can be justified or not. People can have conviction about a belief they have, and they can still be wrong. So that is really important context for the AI situation we find ourselves in. A fundamental truth is true whether or not anyone believes it.
[00:30:52] So I can say: time always moves forward. That's a fundamental truth. You can believe that or not, but I hope that would be very far on that left end [00:31:00] of the spectrum, where a hundred percent of people hopefully could agree that time always moves forward. It does not move backwards. If you drop a rock, it will fall down, not up.
[00:31:08] Humans need air, water, food, and sleep to live. Humans have mortal bodies and die. Two plus two equals four. These are fundamental things that I would assume we could kind of agree on as humans. Now, if we polled people, there is a chance that somebody would come up with some reason why they don't believe one of those things.
[00:31:27] So we treat fundamental truths as non-negotiable constraints when we're building plans, like we think about these things. We treat beliefs as testable hypotheses. These are things you run experiments against, and then you update beliefs based on data and experiences. This is the kind of stuff we talked about with Dr.
[00:31:44] Brian Keating at MAICON this year, like the scientific process and this idea. So the problem comes in when people have so much conviction about their beliefs that they mistake them for fundamental truths. So just because you believe something so strongly, related to AI or its [00:32:00] impact on society, or whether AI avatars are good or bad, doesn't actually change whether it is or isn't.
[00:32:06] It's just kind of what you believe. So what science does is it takes beliefs, questions about the universe, observations and ideas, and it tests them. And then that forms models and laws. So the scientific process gets us to our best tested explanation of where we are right now. It uses evidence, experiments, predictions.
[00:32:25] This is what leads to things like the age of the universe, the standard model of physics, the scaling laws, that if we give these models more compute and data, they improve. These are tested beliefs. So someone comes up with an idea, they observe something, and then they kind of test it along the way. So why am I talking about all this right now?
[00:32:43] It actually has a fundamental impact when it comes to the increasing number of politicians and influencers who are voicing beliefs with great conviction. So the Senator Murphy thing: he may or may not be correct, but his belief [00:33:00] is based on an Anthropic research report, which you may or may not believe to be true.
[00:33:04] You may doubt it; you may think they're making up the fact that it was a Chinese espionage act. Like, I don't know. But all this information is presented in media as though it is fundamentally true, and people aren't actually questioning, well, where is this coming from? So this information reaches influencers who all of a sudden are interested in AI, haven't thought about it or researched it ever, and they come out with these really strong opinions and beliefs, when in reality that's all it is, just a subjective opinion.
[00:33:33] And oftentimes it's intended to advance their own agendas. So on this podcast, we talk a lot about the opinions and beliefs of these leaders, in part so that you can form your own educated beliefs about the current and future state of AI. We're not trying to tell you what to believe. We're trying to give you as objective viewpoints as possible, so you can, you know, experiment in your own way and think about these things.
[00:33:56] So I'll try this as a thought experiment, Mike, to sort of, you know, put a [00:34:00] little fun twist on this. So I'm gonna make a statement about AI, and then you consider if you think it's true or not. So when I say these, in your mind, kind of visualize that line, and like, okay, are we all on this left end of the spectrum?
[00:34:13] AI systems make mistakes; they are not fully reliable. Next: AI is already useful across many tasks. Human oversight is essential in high-stakes uses of AI. I would think those are all pretty far on that left side. Yeah. Models learn and can amplify biases.
[00:34:33] Pretty standard. Yep. Current AI systems present clear and present dangers to society. Mm.
[00:34:40] Now you're in the middle.
[00:34:42] AI literacy is essential to understanding and applying AI to your work. That would seem pretty obvious, but not everybody agrees. Okay. Here we go. LLMs, large language models present a clear path to achieving AGI by 2030.
[00:34:56] Mm-hmm. We're gonna come back to that when we talk about Yann LeCun. AI [00:35:00] companionship, in which humans develop emotional bonds with machines, will solve loneliness and be a net positive in society. I can see that one being about 50/50. Yep, maybe 40/60. Schools at every level should embrace AI and deeply integrate it into classrooms.
[00:35:15] AI will lead to significant job loss over the next one to two years. A couple more: it is possible to fully automate the majority of knowledge work in the next decade. Now, these are beliefs that start to actually have a fundamental impact on the economy and society.
It should be illegal for AI labs to train their models on copyrighted material without explicit permission from creators.
[00:35:36] AI will fundamentally transform the future of work. And the last one: we are in an AI bubble, fueled by excessive valuations, that will lead to a near-term crash. So, like, I intentionally framed these from what I believed would be the highest to the lowest consensus.
[00:35:54] And the point is that AI moving forward will increasingly be based on people's [00:36:00] beliefs, often with very little context, because as the general public becomes more aware of AI, they will form kind of snapshot beliefs based on what they see and experience. So, whether true or not, those beliefs will begin to affect how AI is regulated, how it is taught in schools, how it is applied in business.
[00:36:22] There will be accelerating friction points around its impact on jobs, the economy, educational systems, security, geopolitics, and society. So the whole point of this, Mike, 'cause again, it's like, ah, I don't know if it's even something I should bring up, but my point is to stress that we all have to do our part to be open-minded, to listen to the opinions and beliefs of people we trust, to find those people we actually trust who do their homework and think this stuff through,
to form our own educated positions on these things, and to do our best to push for balanced and logic-based conversations in our companies and [00:37:00] our communities. Because, and we'll touch on this in a little bit, Dario Amodei was on 60 Minutes last night, right? If your family member or someone in your company has been ignoring AI up until last night and watched Dario Amodei, that is a very specific belief system about where AI is going.
[00:37:19] Mm-hmm. And Dario presents that with very high conviction, and so other people can be influenced. Then you have people like David Sacks and Yann LeCun, who think that what they're doing is basically criminal, that they are trying to slow everything down for their own benefit. So this is, again, the whole idea that we have to understand the belief systems people have, why they have them, and hopefully be open to listening to
[00:37:51] other perspectives, so we can adapt our thoughts over time, like the scientific method. When new data presents itself, [00:38:00] part of what makes science so great is that we evolve our beliefs, our thinking, right? And it leads to new understanding. Like, the standard model of physics has survived for how long?
[00:38:10] Like a hundred years or something? And yet they're challenging it every day, and they know it has flaws, but they can't prove them yet. They can't find out why. And so that's kind of how I feel about AI moving forward: we have these basic concepts. Sometimes we'll even call them laws, like scaling laws, right?
[00:38:27] Right. But it doesn't mean it is like a law of nature that is always gonna be true. It just means right now it's the best we got and it seems to be holding up. And so when we talk about jobs and the impact on economy and the impact on education, it's coming from an educated point of view of like, we've done a lot of homework on this.
[00:38:44] You look at a lot of data. The second something comes out definitively saying, this is not what's happening, I will happily move my belief. But yeah, so I don't know, Mike. I know that's a lot to process, but I just felt like it was an important topic to throw out [00:39:00] there, given how much public attention we're now seeing coming to AI.
[00:39:04] Mike Kaput: I mean, it's critical, because you're going to be told one way or another what to believe, or be presented with beliefs that are, you know, masked as truths when they're really not, like you said. And that's only going to get crazier in the next 12 months, at least in the US political discourse.
So, yeah, it couldn't be more important to talk about.
[00:39:24] Paul Roetzer: And I know the first rapid-fire topic sort of builds on this. And again, I don't even remember how this all transpired, but part of the next topic is what drove me to say, you know what, I'm just gonna talk about this, because I really feel like we're hitting that tipping point.
[00:39:39] Increasingly Negative Public Moods Towards AI
[00:39:39] Mike Kaput: No, absolutely. And it's a good transition into this topic, because the next couple of topics are definitely along these lines. This first topic is really about a few different posts we were tracking that all kind of played off each other, about these fears about AI's societal impact creating kind of unusual political [00:40:00] alignment.
[00:40:00] So this started out when the very conservative commentator and influencer Matt Walsh posted a warning on X that AI will wipe out at least 25 million jobs and, quote, destroy every creative field. He described the situation as all of us, quote, sleepwalking into a dystopia, and criticized leaders for not taking the threat seriously.
[00:40:22] And he actually followed on to this with the point that the political battle lines have not yet really been drawn around AI. So he argues that politicians don't know if being anti-AI is what he would call right-coded or left-coded. The interesting thing about these posts was that they also received immediate agreement from across the aisle.
[00:40:44] The progressive journalist Ryan Grim posted that Matt Walsh is right. The liberal podcaster Jon Favreau and centrist commentator Tim Miller also signaled they were on board with his thoughts. And Miller even said, look, if he, Walsh, and Grim are all in agreement, which never [00:41:00] happens, it seems like a decent place for a politician to stake out some turf.
[00:41:04] So Paul, I don't follow all these people, but just based on what I have seen in my research, these are people with very strong opinions, very strong beliefs, who literally, I don't think, have ever agreed on a single issue. So it's very interesting to see how this was breaking down. Do you see
this becoming a clear battle line being drawn? It seems like there are people in different parties that agree on the same thing right now, which is rare in our society.
Paul Roetzer: Again, go back to what everybody believes. It's like, whoa, we're actually getting both sides of the political aisle to all of a sudden move toward that left-hand end of the spectrum.
[00:41:42] Yeah. So again, this sequence, though. If people are new to the podcast and don't know how we sort of figure out what to talk about: in essence, I spend a good amount of time on Twitter, in a very filtered way, looking at notifications from a few hundred accounts that I have curated [00:42:00] through the last 15 years or so on X.
[00:42:04] We then follow certain media outlets. We look at research reports. I listen to podcasts, we watch videos. So we are constantly consuming information and trying to piece together the story each week. And so the main topic we just talked about, beliefs versus truths, I put in our sandbox for the week on the 12th, like the morning of the 12th, I think, is when that happened.
[00:42:26] So then this post from Matt Walsh that you're talking about, Mike, came out at 3:00 PM on that same day. Now, I didn't know who Matt Walsh was. He's not in my circle of influence, not somebody I follow. It showed up in my feed and I clicked on it, and I was like, wow, this dude has 3.9 million followers.
[00:42:46] Like, this is a pretty legitimate thing. And the post, as of now, has 5.1 million views. So it's like, okay, I don't actually know who this guy is. I don't know what his past belief systems are, and it actually doesn't matter, 'cause the whole point to me was: [00:43:00] someone who obviously has extreme positions one way or the other
has a whole bunch of other people, who also are influencers, retweeting and saying, I'm in, count me in, we've never agreed on anything, I agree with him now. It's like, hold on a second. So I put it into our sandbox and I said, I don't follow this guy; I get a sense he's very politically divisive.
[00:43:22] Like, he's taking an extreme position one way or the other. Again, I hadn't studied who he was or what he says. And I said, that being said, he has 5 million followers and he now has an opinion on AI. And I said, more influencers are showing up to the conversation, and that's the story, more than this one guy's thoughts.
[00:43:37] So that was my original thing, and I put the tweet in our sandbox. And then the next 24 hours happen, and I'm like, oh wait, here's another one, here's another one, here's another one. And I'm just putting all of these in. And then that guy ends up retweeting, or sharing, something else. So the next morning, after all these other people have sort of jumped on, he said: AI is going to cause a massive political [00:44:00] reshuffling.
[00:44:00] The sides in the future will be pro-human versus anti-human, because that's what the AI fight is really about. It will be interesting to see where everyone lands. There will be a lot of surprises, I think. Hmm. And so again, this goes back to my point about the beliefs versus truths. And all of a sudden, influencers, politicians who haven't made their living in this, have not spent the last decade thinking about this.
[00:44:21] They're gonna come in, either because it's coming after creative work, it's coming after jobs, it's coming after religions. It's gonna come after things that they care about, or that their audience cares about. And so they're gonna start to have opinions. Joe Rogan, somebody like that. Yeah. And you start to move markets, you start to move votes based on this stuff.
[00:44:42] So it is happening. Again, if you're watching with the intensity we are watching what's going on with influencers and politicians, I can promise you there is a change. It fundamentally feels different in the last 30 to 60 days [00:45:00] than it did before that. Yeah.
[00:45:03] Mike Kaput: And look, I don't wanna hold my breath on this, but you better pray that your influencer of choice has some basic AI literacy because we are about to have some wild conversations if they don't
[00:45:15] Paul Roetzer: Yeah.
[00:45:15] I, and I, again, like, I don't know. I mean, if, if AI somehow unifies people who would never agree on anything and it opens their minds to listen to each other. Yeah. Great. Like, but I do think that, you know, more strongly than I did a month ago when I was saying it. Like, I think the politicians will try and find the wedge, um.
There was that group we talked about, Mike, I don't know, probably four or five episodes ago, led by Greg Brockman. Yeah. Where they have a hundred-million-dollar super PAC to fund politicians on either side of the aisle, whoever is pro-AI. So as long as you are willing to push for no regulations and accelerate at all [00:46:00] costs, you can get money from them.
Doesn't matter what political party you're with. The current administration is not a fan of that super PAC, obviously, but it is going to create a very messy 2026, politically. Yeah. I think the point of no return is sort of very near here, of AI becoming a very big political story.
[00:46:20] Mike Kaput: I think we talked about this on a past episode, but I would be shocked if there's not some pretty rigorous, well-funded polling in the field right now. For sure. Trying to figure out which position is going to be the lightning rod. Yep. Agreed.
[00:46:36] First Reported AI-Orchestrated Cyberattack
[00:46:36] Mike Kaput: All right, so our next rapid fire topic this week, Anthropic says it is disrupted.
[00:46:40] What it believes is the first large scale cyber attack executed almost entirely by ai. So in a new report, the company details a sophisticated espionage campaign. It detected in mid-September. The company assesses with high confidence in their words that a Chinese state sponsored group was [00:47:00] responsible.
[00:47:00] The operation targeted roughly 30 global organizations, including tech companies, financial institutions, and government agencies in a small number of cases that succeeded in compromising those organizations. So how this kind of cyber espionage campaign work. Is the attackers used anthropics quad code tool to execute the attack.
[00:47:22] So they jail broke the model. They tricked it into bypassing its guardrails, in part by telling it it was working for a legitimate cybersecurity firm. The AI then performed 80 to 90% of the campaign autonomously, so it conducted reconnaissance, identified vulnerabilities, wrote its own exploit code, and harvested credentials.
[00:47:43] The philanthropic said the AI operated at speeds. Human hackers could not match making thousands of requests, often multiple per second. And they say that their own team has used Quad extensively to analyze the incident. Arguing that AI is now crucial for [00:48:00] both cyber defense and offense. So, Paul, I'm curious how.
[00:48:04] Big a deal is, this seems like a big first that a lot of people were talking about in the AI community.
[00:48:09] Paul Roetzer: I can't imagine that this is the first time this is happening. And I would assume cybersecurity firms and AI labs are fully aware of this. I feel like it's probably the first time that an AI lab has directly acknowledged it and shared some details.
[00:48:24] Now, they got lit up online by the people who say this is all about regulatory capture, and that of course they're releasing this stuff, and there's not much actionable data. Even though they're putting it out there, they're not actually telling people how to prevent these things. So again, there are multiple sides to this now.
[00:48:42] And now you've got people who, no matter what Anthropic does, are gonna lay into them about being funded and started by effective altruists, and how they just want control. The argument on the alternative side is that Dario and Anthropic think they're the only people who can safely bring superintelligence into the world.
And so they're [00:49:00] doing all these things to try and prevent acceleration of AI. And it's weird, because for a long time Dario was very behind the scenes, wouldn't do interviews, wasn't active on Twitter, wasn't publishing anything. And then in the last 18 months, he's become much more vocal.
[00:49:18] He's doing the 60 Minutes episode that I mentioned; we'll drop the link in the show notes. Yeah, so I don't know. I mean, this seems also like an inevitability. Like, again, if you'd asked me like three years ago to make some predictions about what is inevitable, this was a given.
[00:49:37] And it's not because I know something they don't know. It's like, they would tell you point blank in interviews this was gonna happen and how it would happen, but people weren't listening yet. And so, so many times when I see these things, it's like, well, yeah, of course that's gonna happen. And then I realize, like, most politicians and business leaders haven't been contemplating these things for as long as we have.
So [00:50:00] yeah, these inevitabilities just sort of don't surprise me at all when I see 'em. If anything, I'm just surprised it took someone so long to publish a report like this.
[00:50:08] Mike Kaput: And not to diminish how important this is, but when you put it that way, it's a little different than someone in Congress, like you mentioned, saying, hey, wake the F up. The cyberattack angle, I think, has a very emotional or, like, narrative pull too.
[00:50:23] It sounds like a movie, right? But it's interesting to see how that became this kind of really emotional hot-button issue.
[00:50:30] Paul Roetzer: Right? This will get played up for sure by politicians, because, you know, they're claiming it was a Chinese espionage attack. Well, the whole premise of the government's play right now in AI is that we have to accelerate it so we don't lose to China.
[00:50:45] So, like, anything that builds that story and helps perpetuate that belief system, they're gonna run with it. Again, I'm not saying right or wrong. It very well may be that this is exactly what happened, that this is a huge risk [00:51:00] and a threat, and we should be doing something more about it. You know, I'm just presenting the information and saying, this is what both sides will say here.
[00:51:08] So you will see some people who say this study is a sham, basically, and that Anthropic is only doing it for regulatory capture, so they can control AI and be the ones that usher in superintelligence. And then you're gonna have other people who say, this is a major problem, and we've actually seen it also, and here's our report about it.
[00:51:25] So you're gonna have multiple sides to the story, and they can all actually have elements of the truth in them.
[00:51:33] AI-Generated Country Song Tops Billboard Charts
[00:51:33] Mike Kaput: In our next topic, another kind of controversial AI issue in a different domain. So, for the first time, an AI-generated artist has reached number one on a Billboard country chart
[00:51:47] with a song. This is an AI-generated artist named Breaking Rust, and the AI-generated song, titled Walk My Walk, recently topped the Country Digital Song Sales chart. Billboard, of course, has tons of [00:52:00] different charts; this is just one of them. But Billboard did confirm that the music is AI-generated, and the song is credited to a human who runs another AI music project, of course.
[00:52:11] But the AI artist itself has actually quickly accumulated 1.8 million monthly listeners on Spotify, which surpasses several established human artists in this genre. And the AI-generated song's placement at number one actually nudged out a human artist, who was pushed to the number two spot for the week.
[00:52:32] And interestingly, Billboard has reportedly identified at least six AI or AI-assisted artists that have charted in recent months. They did note, however, there is increasing difficulty spotting what is AI-generated and what is not. So, Paul, this is another very emotionally charged issue. It's certainly the first time it's happened on this particular country chart, not the first time AI-generated music has gotten [00:53:00] play.
[00:53:00] But based on the rankings, at least, it does sound like a lot of people are unable to tell, or don't really care, if music is AI-generated, as long as they like it.
[00:53:11] Paul Roetzer: Yeah, I mean, I assume that's how this plays out. Again, I get that this is an emotionally charged discussion for some people, and I totally empathize with that.
[00:53:20] I would put this in the camp of inevitabilities I could have told you were coming, you know, three, four years ago. And in part because even before ChatGPT, what was happening is they were working on predictive models. Mm-hmm. For creating shows and songs and movies. And I think we even talked about this in our 2022 book, Mike. Yeah, the basic premise:
[00:53:42] So think about this. Think about Netflix. If you take all the viewing data on Netflix across different genres, different audiences, and let's rewind to 2020, like, two years before ChatGPT: the premise then was, well, if we can predict what [00:54:00] humans will watch, then we can actually construct shows that we know will be hits before we ever release them.
[00:54:06] So this was happening in the movie industry and in places like Netflix, and certainly in the music industry, where they would analyze the top things and say, well, what are the commonalities behind them? Put it into machine learning systems and make predictions about what the songs should be about, what words should be said, who should sing them.
[00:54:23] If you're in movies, like, which actors or actresses would have, you know, the greatest chance at being in a blockbuster. All of this was happening in the teens. Like, it was all about prediction. All generative AI did was then layer in the ability to create the stuff, instead of needing humans to create it. So this is just a mashup of traditional machine learning making predictions about human behaviors, like, what will they listen to?
[00:54:45] What will they watch? And then you're using generative AI to create the thing on demand, instead of having to wait for a human artist to do it. So again: inevitable, not necessarily great for society. Yeah. But it [00:55:00] is a capitalistic society. If people will listen and pay subscriptions to listen to stuff that is not made by humans, then they will allow stuff not made by humans to hit the top of the charts and become popular.
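To make that predict-then-generate mashup concrete, here is a toy sketch in code. Everything in it is hypothetical and purely illustrative, not any platform's actual pipeline: a simple frequency count stands in for the traditional machine learning that predicts hit traits from past data, and the predicted traits become the prompt for a generative model (mocked here as a print statement).

```python
# A toy sketch of the "predict, then generate" mashup described above.
# All data and names are hypothetical; this is not any platform's pipeline.
from collections import Counter

# Step 1: "traditional ML" stand-in: find the traits common to past hits.
past_hits = [
    {"tempo": "mid", "theme": "heartbreak", "voice": "gravelly"},
    {"tempo": "mid", "theme": "small town", "voice": "gravelly"},
    {"tempo": "up",  "theme": "heartbreak", "voice": "gravelly"},
]
predicted = {
    trait: Counter(hit[trait] for hit in past_hits).most_common(1)[0][0]
    for trait in past_hits[0]
}

# Step 2: "generative AI" stand-in: turn the predictions into a creation
# prompt; a real pipeline would send this to a music-generation model.
prompt = (
    f"Write a {predicted['tempo']}-tempo country song about "
    f"{predicted['theme']}, sung in a {predicted['voice']} voice."
)
print(prompt)
```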
[00:55:13] And, hell, they'll probably serve it up in their algorithm. It's like, hey, people don't actually care, let's serve up whatever they want. This is the future of Facebook, of anything that Meta touches. Like, just give people what they want, as long as they stay on the platform long enough. So, yeah.
[00:55:27] Again, this goes to the kind of questions we ask in the Pulse survey, like, how do you feel about it? Right. And I don't know; I actually haven't stopped and thought about that one. When I answer the question this week, I'll have to actually stop and think about it.
[00:55:39] I just look at things as: are they or are they not going to happen? How does it affect me, and how do we talk about it to our, you know, audience? And this is one of those ones where I don't know how I feel about it. I kind of hate it, I think. Right. But I also feel like what'll happen is [00:56:00] there'll probably become places where things are authentically human, kind of like what Etsy did in the art world.
[00:56:07] Yeah. It's gonna be stuff like that, where it's like, you know what, give me the social channel where it's human creativity. Like, I don't want the AI stuff. And then you're gonna have people like, yeah, but AI was used already to do beats and stuff, so is that different? Like, you're just gonna get this
[00:56:19] bickering back and forth. My hope is that human artists are appreciated even more. Yeah. It gives all of us the ability to create stuff we want to create, and that's fun. And it's, you know, it's exciting to be able to do those things. I could never make a song before, and now I can mess around and make a song about something. That doesn't diminish the value of a human actually doing music.
[00:56:42] In fact, in my opinion, knowing they can do that without these AI tools makes you appreciate their talents even more. And again, I come from a perspective of: I'm a writer, my wife is an artist, my daughter's an artist. Like, I think deeply about the creative side, and so my hope is that human creativity [00:57:00] actually has, like, a renaissance and is appreciated even more.
[00:57:02] Mike Kaput: Yeah. I wonder, too, how much of the backlash against this stuff, rightly or wrongly, and I certainly empathize with it, is about people finding out that what they thought was a truth is a belief. Right. That's where it becomes really tough. I'm not saying what your truth should be, but it is really hard if you have a deeply held belief or truth that human creativity is superior to machine creativity.
[00:57:28] Yeah. Is some of the anger around this the fact that sometimes you might find that coming into question? I don't know.
[00:57:35] Paul Roetzer: Yeah. Maybe creativity wasn't what you thought it was. No, it's true, though. And again, we addressed that in the 2022 book; I wrote that section about creativity, and my whole point was, it's gonna be creative, like, it's gonna be able to create, and sometimes better than a human.
[00:57:52] Like, if you gave a blind taste test of, like, A and B, this song was created by a human or a machine, and you don't know, and you prefer the [00:58:00] machine one, and then all of a sudden you're like, oh wait, no, no, no, the human one. And my whole point then was:
[00:58:09] the creativity won't come from any true human experience. It doesn't feel pain, it doesn't know love, it doesn't have emotions, it doesn't have senses. Like, it doesn't have all the things that go into human creativity. And so, in the end, human creativity means more, because it came from someone who has experienced life. Yeah. It's not the AI; it's machines and mathematics making predictions. Yep.
[00:58:28] And so, yes, the end product can simulate creativity, and it can feel creative, but that's why I think human creativity just remains unique. It comes from a different place.
[00:58:40] Mike Kaput: Yeah. Couldn't agree more on that.
[00:58:43] Cursor Raises $2.3 Billion, Valued at $29.3 Billion
[00:58:43] Mike Kaput: Next up, Cursor, the AI coding startup, has raised $2.3 billion at a $29.3 billion valuation.
[00:58:52] That new valuation is nearly 12 times what the company was worth in January. This startup was founded by four [00:59:00] MIT grads who are still in their mid-twenties. It's a popular tool that learns a developer's coding style to help autocomplete, edit, and review lines of code. The product has earned a following from engineers and tech CEOs, including Nvidia's Jensen Huang. And this latest funding round, which is the company's third this year, was co-led by Accel and Coatue.
[00:59:19] Two new investors, Google and Nvidia, are coming on board now. And the tool allows users to toggle between different AI models, like those from OpenAI, Anthropic, and Google, though the startup pays substantial fees for access to those models. In late October, the company launched its own model, called Composer, and they plan to use the new capital for technical research and to invest in scaling Composer.
[00:59:43] So, Paul, I know they've got their own model now, but this certainly does seem to be a bit of a rebuttal to people who say an AI wrapper startup can't do well. These are some pretty breathtaking numbers for Cursor.
[00:59:57] Paul Roetzer: This is a really big market. I think, just to connect [01:00:00] the dots of why this is relevant to people who aren't in coding:
[01:00:03] all the labs are using tools like this to augment code, to write code, to improve code internally. So it's enabling the building of software much faster, changing products much faster, so that the software you use to run your company and do your job, they're able to rapidly improve that with fewer developers, because they can code so efficiently.
[01:00:27] It's empowering people. Like, we talked about Replit as an example, where people like you and me, Mike, who aren't coders, are gonna be able to build apps, maybe even build companies, that we would've never been able to do before, because now we can use these tools. I also listened to an interview with Satya Nadella last week,
[01:00:43] where he was talking about their coding tools and how they dominated the market, and then people like Cursor came along and just made the market so much bigger. So yeah, this is a fast-growing marketplace. Obviously, it's a very fast-growing company, but the trickle-down is it's gonna accelerate the [01:01:00] ability to develop software, for developers and for non-developers.
[01:01:03] And I think companies like this are gonna just keep growing, and everybody's gonna wanna play in this game: all the, you know, major software companies and AI labs.
[01:01:13] Parallel Raises $100 Million to Build Web for Agents
[01:01:13] Mike Kaput: Our next topic, some other startup news. Former Twitter CEO Parag Agrawal has a new AI startup called Parallel Web Systems, and they have just raised a hundred million dollars in a Series A funding round co-led by Kleiner Perkins and Index Ventures, which values the two-year-old company at $740 million.
[01:01:33] Now, why we're talking about this, why it's interesting, is that Parallel aims to build web search infrastructure designed specifically for AI agents. Agrawal has stated that AI agents are increasingly becoming the web's primary users and require access to live, up-to-date information to complete tasks for enterprise customers.
[01:01:54] So they provide APIs that let AI systems search the web. Now, unlike traditional [01:02:00] search engines that rank links for humans, Parallel's system returns optimized content, or tokens, designed to feed directly into an AI model's context window. The company says this improves accuracy, reduces hallucinations, and cuts operational costs.
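To illustrate that difference in code, here is a minimal sketch of the general pattern being described: agent-oriented search that returns distilled, source-attributed text sized for a context window, rather than ranked links for a human to click. Every name here is a hypothetical stand-in, not Parallel's actual API.

```python
# A toy sketch of agent-oriented search results, as described above.
# This is NOT Parallel's actual API; all names here are hypothetical.
from dataclasses import dataclass

@dataclass
class AgentSearchResult:
    url: str      # source page, kept so the agent can cite or verify
    excerpt: str  # pre-distilled text, already shaped for prompting

def pack_context(results: list[AgentSearchResult], char_budget: int = 2000) -> str:
    """Pack excerpts into one prompt-ready block, stopping at the budget."""
    chunks, used = [], 0
    for r in results:
        piece = f"[source: {r.url}]\n{r.excerpt}\n"
        if used + len(piece) > char_budget:
            break
        chunks.append(piece)
        used += len(piece)
    return "\n".join(chunks)

# Usage: the packed block drops straight into an LLM prompt, so the model
# receives text it can use directly instead of links it cannot follow.
results = [
    AgentSearchResult("https://example.com/a", "Fact one, already summarized."),
    AgentSearchResult("https://example.com/b", "Fact two, already summarized."),
]
print(f"Answer using only these sources:\n\n{pack_context(results)}")
```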
[01:02:16] So, Paul, this seems like a pretty big deal, and points to the fact that the web is likely to be serving, perhaps primarily, AI agents moving forward, not just humans.
[01:02:29] Paul Roetzer: At a high level, for sure. A couple things interest me on this topic. One, you know, it's a notable founder. Two, very notable investors. Three, a very notable hundred-million-dollar Series A.
[01:02:39] That is not a common raise at a Series A; that's a pretty significant number. And then the fourth is just this continued need for us to be thinking about what happens when agent-to-agent becomes the norm on the web, when it's agents visiting your website, not humans, when it's agents interacting with your chatbot,
[01:02:56] not humans. Like, I think everyone is starting to [01:03:00] try and figure this out. Venture capital firms are starting to make some bets as to what the future of the internet looks like, right? And so companies like this are worth paying attention to, because it's obviously sort of heading in that direction of trying to solve for what the next version of the internet looks like, and how it affects commerce and marketing and sales and everything that we, you know, think about all the time.
[01:03:23] Mike Kaput: We say often, in one way or another: follow the money, right? Yeah. So if you see this kind of money going into a space like this, that gives you a decent clue as to where the future is going.
[01:03:34] Yann LeCun Leaving Meta
[01:03:34] Mike Kaput: Next up, Meta's Yann LeCun is reportedly planning to leave the company to launch his own startup. So LeCun is a Turing Award winner, considered a pioneer of modern AI, who we've talked about quite a bit.
[01:03:45] He has headed Meta's Fundamental AI Research lab, known as FAIR, since 2013. And this comes amidst some of the major strategic shifts from CEO Mark Zuckerberg that we've been discussing on past podcast episodes. So Zuckerberg has [01:04:00] pivoted away from the longtime research focus of FAIR, instead prioritizing the rapid development of AI products to compete with OpenAI and Google, among others.
[01:04:09] They had a relatively botched release of Meta's Llama 4 model. LeCun has long argued that the LLMs at the center of Meta's strategy cannot reason or plan like humans. His research has focused instead on world models that learn from video and spatial data. So he is reportedly in early talks to raise funds for a new venture focused on that type of work.
[01:04:31] So, Paul, on episode 164, we talked about how Meta's recent shakeups around talent were not favorable to LeCun and he was likely to leave. So I think we can chalk that up as an accurate prediction.
[01:04:42] Paul Roetzer: Yeah. Again, this is pretty obvious; this was the direction this was gonna go. He hasn't, to my knowledge, officially commented yet on the fact that he's leaving.
[01:04:53] However, there was a Wall Street Journal article whose headline was "He's Been Right [01:05:00] About AI for 40 Years. Now He Thinks Everyone Is Wrong," which he retweeted. And that article said he was leaving, and it also said he did not reply for comment. So I would say that's pretty close to a confirmation, if you're retweeting the article saying you're leaving.
[01:05:15] We've known this for a while: he's not a big fan of large language models. He sees them as a distraction. He's told college students, don't waste your time studying language models, like, it's not gonna work eventually. And that is the very opposite of Meta's belief and direction at the moment. The Wall Street Journal article, which we'll link to, had a quote from him from last month at a symposium at MIT, where he said, I've not been making friends in various corners of Silicon Valley, including at Meta, which he was still employed by at the time. He was saying that within three to five years, world models, not large language models, will be the dominant model for AI architectures,
[01:05:54] and nobody in their right mind would use large language models of the type that we have [01:06:00] today. We touched a little bit on his background when we talked about him on the show: he won the Turing Award in 2018, the highest prize in computer science, along with Geoff Hinton and Yoshua Bengio, for their foundational work in neural nets, which was sort of
[01:06:14] renamed deep learning around 2010. And so, just for context, a world model is an AI's internal mental model of how the world works and behaves. So sort of think of it as an inside-the-machine simulation of the world: it helps the AI predict what will happen next if something changes or if it takes an action.
[01:06:33] So just like humans use their understanding of physics and cause and effect to imagine outcomes, like, if I drop this glass, it'll break, a world model lets the AI imagine outcomes before they happen. So he's for years talked about the fact that that had to be part of it, especially in robotics: it's gonna need to be able to anticipate these things.
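To make that intuition concrete, here is a toy sketch of the general idea, not LeCun's actual architecture: a learned transition function predicts the next state from a state and a candidate action, so an agent can roll actions forward in imagination and pick the best one before acting. The dynamics and scoring function below are illustrative assumptions.

```python
# A toy illustration of the world-model idea described above, not LeCun's
# actual architecture: predict next states, imagine futures, then act.
import numpy as np

rng = np.random.default_rng(0)

def world_model(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Stand-in for a learned transition model: predicts the next state.
    In practice this would be a neural net trained on experience (e.g.,
    video); here it is a fixed toy dynamics function."""
    return 0.9 * state + 0.1 * action

def score(state: np.ndarray) -> float:
    """How desirable a predicted state is (task-specific); this toy
    version prefers states near the origin."""
    return -float(np.linalg.norm(state))

def plan(state: np.ndarray, actions: list[np.ndarray], horizon: int = 5) -> np.ndarray:
    """Roll each candidate action out in imagination; return the best one."""
    best_action, best_score = None, -np.inf
    for a in actions:
        s = state.copy()
        for _ in range(horizon):  # imagine the future without acting
            s = world_model(s, a)
        if score(s) > best_score:
            best_action, best_score = a, score(s)
    return best_action

state = rng.normal(size=3)
candidates = [rng.normal(size=3) for _ in range(8)]
print("chosen action:", plan(state, candidates))
```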
[01:06:48] And then, just a total quick side note: I mentioned earlier the Senator Murphy quote about the Anthropic attack. LeCun [01:07:00] did comment on that. Now, LeCun went away for a while, I think he was only on Threads, like, he left Twitter, and then he just keeps getting sucked back in,
[01:07:06] 'cause I dunno if anybody's on Threads anymore. So he replied to Senator Murphy and said, you're being played by people who want regulatory capture, referring to Dario and Anthropic; they're scaring everyone with dubious studies so that open source models are regulated out of existence. Yann is not shy about offering opinions on things.
[01:07:27] He has very high conviction in his beliefs. He has been proven right time and time again when people doubted him. The question is, is he going to be right this time, while all these labs are spending hundreds of billions of dollars on large language models that he thinks are a fool's errand, basically? So we will see.
[01:07:45] We will see.
[01:07:47] NotebookLM Adds Deep Research
[01:07:47] Mike Kaput: All right, next up, Google is rolling out some big updates to NotebookLM. It's integrating deep research, a feature we've talked about plenty of times, found in Gemini. And so deep research acts [01:08:00] as a dedicated researcher: it takes a question, it creates a research plan,
[01:08:04] it browses dozens or hundreds of websites to generate an organized report grounded in sources. And in NotebookLM, this integration is now going to allow users to add both the final research report and all of the web sources used directly into their notebook. The update also adds support for new file types, including Google Sheets, Microsoft Word docs, and images.
[01:08:29] They're also launching featured notebooks, which are collections of high-quality sources curated by experts, authors, and partners like The Economist. So they're preloaded with content on complex topics, everything from science to advice, and include pre-generated features, like Audio Overviews, to make the material more accessible.
[01:08:50] Now, Paul, the featured notebooks thing is cool, but really, deep research in NotebookLM feels like a bit of a big deal to me, because the ability to import all of your [01:09:00] sources from deep research into a notebook would be super useful for me, for my use cases. Yeah. Like, it would help with verification of the research as well.
[01:09:09] So I'm really excited about that.
[01:09:10] Paul Roetzer: Yeah, I'm a huge fan of NotebookLM. I don't spend enough time in it. It's one of those where, like, every once in a while I'm like, oh, I haven't used that in, like, a week. And, you know, it's almost like I want to find those use cases to get back in there. But yeah, I mean, we obviously are big fans of deep research, and so to combine the power of the two is awesome.
[01:09:26] Have we done a Gen AI app review of NotebookLM yet, Mike? We have, yeah. But
[01:09:31] Mike Kaput: we honestly should just do another one because they've been shipping so many features too.
[01:09:35] Paul Roetzer: Yeah. So if people aren't aware, with our AI Academy by SmarterX, which I talked about at the beginning, one of the benefits for our AI Mastery members is we drop Gen AI app reviews every Friday.
[01:09:44] So we do a new Gen AI app review every Friday. They're like 15-, 20-minute reviews: kind of what it is, what it's capable of doing, whether you should take a look at it, what the pricing model is, availability, just the fundamentals of it. But part of the reason we do it in this weekly model, where we're always updating it, [01:10:00] is so when a model or a tool gets updated, we can just do a, you know, version two of it.
[01:10:06] Yeah. And then you can go back and look at the original. So yeah, when these features are changing so often, it's kind of a cool format for how we drop those. So yeah, if you're an AI Mastery member, you can go in and watch the first NotebookLM one. You did that one, Mike, right? Yeah. Yeah, maybe we'll have version two of that coming up soon.
[01:10:24] McKinsey State of AI Report
[01:10:24] Mike Kaput: All right, our final topic this week: we've got some new McKinsey research on the state of AI in 2025. They just released a new 30-plus-page report based on some survey data they've done. They found that 88% of organizations report regularly using AI in at least one business function, but nearly two-thirds say their organizations have not yet begun scaling AI across the enterprise.
[01:10:48] Most are still in the experimentation or piloting phase. This gap has shown up in the bottom line: only 39% report any EBIT impact at the enterprise level. However, 62% of [01:11:00] respondents do say their companies are experimenting with AI agents. And the survey found that there's a small group of AI high performers,
[01:11:07] representing about 6% of respondents. These companies are about three times more likely to aim for transformative change in their companies, and are more focused on using AI for growth and innovation, not just cost cutting, something, Paul, we've talked about several times. Mm-hmm. High performers are also nearly three times as likely to have fundamentally redesigned their workflows to deploy AI.
[01:11:29] Now, a quick comment on the methodology: the survey was active in June and July 2025. They got responses from almost 2,000 professionals across more than a hundred countries. According to McKinsey, the respondents represent, quote, the full range of regions, industries, company sizes, functional specialties, and tenures.
[01:11:48] And 38% say they work for organizations with more than a billion dollars in revenue. So, Paul, it seems like some more useful data to highlight. I'll say the percentage of people experimenting with agents [01:12:00] jumped out at me, with 62% saying that. It might be due to a broad definition of agents, maybe that's what's going on.
[01:12:06] I'm not sure.
[01:12:07] Paul Roetzer: Yeah, I think the big theme here, Mike, that just jumped out to me is what we say all the time: it is early, and you likely are not behind your peers and competitors. I talk to companies all the time who are doing really cool stuff, and they just feel like, you know, everybody else is running past them,
[01:12:22] and it is usually not the case. The number of people that are in the piloting phase, I thought that was interesting, 'cause we do ask that question in our State of Marketing AI report every year. Yeah. So we've done that for five years now, and it sort of jives with our research. When we asked that this year, for the 2025 report, we had 40% at the understanding phase,
[01:12:42] 46% at the piloting phase, and only 14% at the scaling phase. So, interesting to sort of parallel over to our research, which had about 1,800 respondents. And then the one about scaling, where they break it down by size of company: what they showed was that [01:13:00] larger companies are more likely to have reached the scaling phase.
[01:13:03] So, like, $5 billion-plus at 39%, and then down to, you know, 23% for the $100 to $500 million range. And then on agents, the way they did define 'em, 'cause I did jump in there just to see how they were defining them: they said organizations are also beginning to explore opportunities with AI agents, systems based on foundation models capable of acting in the real world, planning and executing multiple steps in a workflow.
[01:13:26] So yeah. And then they said use of agents is most often reported by respondents working in technology, media and telecommunications, and healthcare. So, good study, worth the read. I didn't get through the whole thing before today's episode, but I was kind of bouncing around, trying to look at some of those highlights. Worth a download,
[01:13:44] and I think we'll probably spend a little more time thinking about that one, Mike, to see if there's anything else interesting in there.
[01:13:50] Mike Kaput: Sounds good, Paul. So before we wrap up here, just a quick announcement: if you have not left us a review yet on your podcast platform of choice, we would greatly appreciate you taking just 30 [01:14:00] seconds to let us know how you're enjoying the podcast.
[01:14:02] It helps us get better, helps us improve, helps us reach more people. So please go ahead and do that. Paul, thanks again for wrapping up another busy week in AI.
[01:14:12] Paul Roetzer: Yeah, and one more quick note: we will have a second episode this week. We'll have an AI Answers episode on Thursday the 20th; that'll be from our Scaling AI class that I taught last Friday.
[01:14:24] So Cathy and I will be back on Thursday the 20th for an episode. And then, Mike, we gotta figure out, I'm on vacation next week, so I dunno if we're gonna have a regular weekly episode. Stay tuned. So, definitely Thursday the 20th. If for some reason we don't have an episode on
[01:14:42] the 25th, it is because it is Thanksgiving week and I am not home. We'll see if we can squeeze one in this Friday, maybe record it, but if not, we'll be back after Thanksgiving week. So if I don't talk to y'all before then and you don't hear from us, have a [01:15:00] great holiday.
[01:15:00] Otherwise, yeah, we'll be back on Thursday. Alright, Mike, thanks a lot. We'll go tune in and see if we get a Gemini 3 model this week still. Sounds good, Paul. Thanks so much. Bye, guys. Thanks for listening to The Artificial Intelligence Show. Visit SmarterX.AI to continue your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community.
[01:15:37] Until next time, stay curious and explore ai.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.
