The Artificial Intelligence Show Blog

[The AI Show Episode 183]: AI Job Automation, Is There an AI Bubble?, AI Political Divides, ChatGPT Turns 3, Claude Opus 4.5, Google vs. Nvidia & DeepSeek V3.2

Written by Claire Prudhomme | Dec 2, 2025 1:15:00 PM

A new MIT study, Project Iceberg, finds that 11.7% of the US workforce could be replaced by current AI systems, while a separate McKinsey report estimates 57% of current work hours could be automated. Meanwhile, famed investor Michael Burry is betting against the AI industry, arguing the sector is in a bubble comparable to the dot-com era.

On this week’s episode, Paul and Mike go deeper on those topics and other top news this week, including Claude Opus 4.5, DeepSeek V3.2, Runway Gen-4.5, and the third anniversary of ChatGPT.

Listen or watch below—and see below for show notes and the transcript.

This Week's AI Pulse

Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI. 

If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.

Click here to take this week's AI Pulse.

Listen Now

Watch the Video


Timestamps

00:00:00 — Intro

00:04:19 — AI Pulse

00:08:04 — MIT Study: AI Can Already Replace 11.7% of US Workforce

00:23:55 — Is There an AI Bubble?

00:33:24 — Political Divides Over AI Are Getting Worse

00:40:27 — ChatGPT Turns 3

00:46:59 — Claude Opus 4.5

00:49:19 — ChatGPT Shopping Research

00:52:30 — Google Encroaches on Nvidia’s Chip Dominance

00:55:58 — Suno Embraces Training on Licensed Music

00:58:48 — Insurers Retreat from Covering AI Risks

01:02:21 — Dwarkesh Podcast with Ilya Sutskever

01:08:41 — “AI 2027” Revises Forecasts to 2030

01:12:26 — The Thinking Game and AlphaFold

01:18:38 — DeepSeek V3.2

01:20:27 — Runway Gen-4.5

This episode is brought to you by AI Academy by SmarterX.

AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. You can get $100 off an individual purchase or a membership by using code POD100 at academy.smarterx.ai.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: If we stopped development of AI models today, if we shut off all the AI labs and all we had was today's current models, everything changes anyway. Like, people don't comprehend how disruptive the tech we already have is. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.

[00:00:25] My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX chief content officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:46] Join us as we accelerate AI literacy for all.

[00:00:53] Welcome to episode 183 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. [00:01:00] We are recording Monday, December 1st, 11:00 AM, a day after the anniversary of ChatGPT, Mike, which we will talk about, the three-year anniversary of ChatGPT. New models dropping already.

[00:01:14] We had two drop this morning, so we will touch on both of those, probably not in great depth 'cause neither Mike nor I have had a chance to get into DeepSeek and then the new video generation model from Runway, but those already happened. There are lots more releases coming. I think, like last year, Mike, we had the 12 Days of Shipmas. Yep.

[00:01:34] I believe that's what OpenAI called it, and I anticipate we are gonna get that again, some variation of that. So I don't know, buckle up. I think the first 20 days of December are going to be very, very busy with model releases. So lots to talk about. We've got some macro-level stuff to talk about related to, you know, the AI bubble, what's going on in politics.

[00:01:55] A couple of new reports from MIT and McKinsey. So a ton to get [00:02:00] into. Hope everyone had a great holiday week. I was out for the week myself. You guys traveled too, right? Yeah, we did as

[00:02:06] Mike Kaput: well. 

[00:02:07] Paul Roetzer: Yeah, it was nice. It was good. I actually, I mean outside of keeping track of everything for the podcast, I really unplugged for like four or five days, which was just amazing.

[00:02:15] So, it's good to be back though. Good to be catching up with everything, and starting to kind of wind down for the year. I'm trying to do a ton of 2026 planning, and I'm leaning on AI heavily to assist me in that planning. So, I dunno, maybe I'll share some of that stuff as we're going in the next couple weeks too.

[00:02:33] Some of the cool things we're doing. Alright. So this episode is brought to us by AI Academy by SmarterX. AI Academy helps individuals and businesses accelerate their AI literacy and transformation through personalized learning journeys and an AI-powered learning platform that we just launched. I guess it was, was it last month?

[00:02:52] Yeah, last month, right? Yes. Yeah, we've lost track of the months. There are nine professional certificate course series [00:03:00] available on demand now, with more being added each month. One of those featured certificate course series that I teach is Piloting AI. This is actually the original thing we launched.

[00:03:10] This is the third edition of Piloting AI. So we've been doing this each year, and I did a completely updated version for this year. So the Piloting AI series is a hands-on guide to moving from AI theory to tangible business impact. In the series, you will master a complete methodology for identifying, prioritizing, and building high-impact AI pilot projects.

[00:03:30] There are four courses in this series. Course one is Piloting AI in Business. Course two teaches the use case model for prioritizing use cases. Course three is the problem-based model, which teaches you how to take a different view of challenges within your organization and then use AI to solve them more intelligently.

[00:03:48] And then course four is how to build your CoX. So it's about creating AI assistants that augment human potential. So I walk through how I created CoCEO, and then we show how to apply that to other areas. So you can learn more [00:04:00] about AI Academy and our AI Mastery membership program at academy.smarterx.ai.

[00:04:06] You can also use POD100 for a hundred dollars off either an individual course series or the annual AI Mastery membership. So again, use POD100 at academy.smarterx.ai.

[00:04:19] AI Pulse

[00:04:19] Paul Roetzer: Alright, Mike, the AI Pulse surveys. This is our third week of doing these. Third or fourth week,

[00:04:25] Mike Kaput: Our fourth

[00:04:25] Paul Roetzer: Yeah.

[00:04:26] Okay. Our fourth week. So, last week we asked, how frequently do you currently use Google Gemini, any model of Google Gemini, in your professional workflow? This one was surprising. So again, these are informal polls of our audience. We had 95 responses to this one during a holiday week, which is great. We appreciate everybody taking the time to do it.

[00:04:46] 46% say they use Google Gemini daily. Mike, that was,

[00:04:52] Mike Kaput: I don't know. I mean, surprising. 

[00:04:53] Paul Roetzer: Yeah. I don't, I don't know. Like, that's crazy. We're not comparing this to, like, versus ChatGPT or anything like that, but [00:05:00] apparently our audience are big Google Gemini users. So daily is 46%, 22% is weekly, and then only 16% say rarely or never, and 16% said occasionally.

[00:05:12] So yeah, I guess we have a big Google Gemini user base in our audience, which is cool. Yeah. And then the second one from last week: US Senator Mark Warner has warned that AI could spike unemployment for recent grads. We asked, has your company changed its hiring strategy for entry-level roles? Now this one, Mike, I think we maybe don't have a lot of HR people in our audience.

[00:05:37] Yeah, like a dominant group, so there's a chance that this one is just, like, they just don't know. And we actually had, what did we have, the "I'm not sure" option at, yeah, 23%, which is probably, like, they have no insight into it. 46% said no, our hiring plans have not changed, but 23% said yes, we are hiring fewer entry-level staff.

[00:05:56] Yeah, 

[00:05:56] Mike Kaput: that's pretty significant, I feel like. 

[00:05:58] Paul Roetzer: Yeah, because even if they're not in HR, they might be [00:06:00] hiring managers in their departments, and so they know this. So yeah, that's interesting. Again, our intention with these surveys: right now they're informal polls, but once we get to a threshold of, say, four or five hundred responses, it starts to become more projectable data based on our audience, our listeners.

[00:06:18] But right now it's just kind of more sentiment and indicators of where things are going. So that's pretty interesting to see. Okay, so for this week's Pulse survey, you can go to smarterx.ai/pulse and you can participate. We have two really interesting questions this week.

[00:06:35] The first is, with recent reports of AI-related layoffs, how secure do you feel about your specific role over the next 12 months? I'm gonna be really interested to see the responses to that one. And then the second question is, do you believe we are currently in an AI investment bubble? Now, that question is gonna become much more relevant once we go through the main topics.

[00:06:58] Today we're gonna get [00:07:00] more into this idea of an AI bubble. We touched on it in episode 182, but we're gonna kind of expand on that line of thinking in today's episode. So again, how are the recent AI layoffs affecting your view of your role over the next 12 months? And then how do you feel about whether or not we are in this AI investment bubble?

[00:07:20] Go to smarterx.ai/pulse. Again, these are informal polls. We are not collecting email addresses. This is not a marketing thing. This is purely research to get a sense of how our audience feels about different topics that we're talking about. Alright, so let's kick off the main topics today.

[00:07:38] If you're new to the show, again, I know we have new listeners each week, so it's probably good to do a quick reminder. These weeklies that Mike and I do, we go through three main topics, and then we usually go through about seven to 10 rapid-fire items. So that's the format. And today's first main topic kicks off with a recent study from MIT that got a lot of headlines last week, Mike, as well as [00:08:00] some background information on a new McKinsey study.

[00:08:04] MIT Study: AI Can Already Replace 11.7% of US Workforce

[00:08:04] Mike Kaput: All right, Paul. Yeah. So first up, we had a study from MIT and Oak Ridge National Laboratory that utilizes a simulation tool they created called the Iceberg Index. And this model found, among many other things, that current AI systems can replace, today, 11.7% of the US workforce, representing approximately $1.2 trillion in wages, and that's what's getting the headline.

[00:08:30] And the researchers note that while the layoffs in tech are visible, there's significant exposure to AI that lies beneath the surface in logistics, finance, and human resources, hence the name of the Iceberg Index. Now, separately, at the same time, a report from the McKinsey Global Institute estimates that demonstrated AI technologies could theoretically automate 57% of current US work hours, and the firm projects that by 2030 this [00:09:00] shift could unlock $2.9 trillion in annual economic value.

[00:09:04] Now, McKinsey frames the transition not as pure displacement, but more as a move towards what they call skill partnerships between humans and agents. They note that 72% of skills are required for both automatable and non-automatable work. Interestingly, they find that, as a result, employer demand for what they call AI fluency has increased sevenfold over the past two years.

[00:09:32] So, Paul, that's just kind of a very high-level look at what these two big studies, two big pieces of research, are looking at. I'm curious what jumped out to you here because, like you said, the MIT one's getting quite a bit of headlines, and the McKinsey one's flying a little bit more under the radar but has some really interesting stuff in it.

[00:09:48] Paul Roetzer: Yeah, both make for great headlines, that's for sure. You know, the ones you kind of really want to dig into. So last night, when I was getting ready for this, I wanted to go through both the reports. As we always guide [00:10:00] people: look at the methodology, kind of see where the reports are coming from.

[00:10:03] We had the other MIT report we talked about a couple months ago that, you know, didn't really hold up in terms of its methodology. This is a different research approach. I mean, this is a very thorough research effort from MIT, but Project Iceberg is honestly a bit confusing. So I did download the full report, and I actually read the whole thing, and I was trying to comprehend what exactly this data point was saying.

[00:10:34] And also just what they were trying to do with this project. So the website itself, again, we'll put the website address in; it's just iceberg.mit.edu. You can go and look at this yourself. There's not a lot to the site yet. I was hoping when I went there I was gonna be able to, like, explore this data.

[00:10:53] Like, they talk about all this analysis they did, very deep analysis, and I couldn't figure out how to actually get to the data. So I couldn't [00:11:00] actually assess these skills they were looking at and things like that. But basically, the best way I can explain this is it seems to be a sandbox for policy evaluation.

[00:11:11] So they're actually trying to work with, like, state-level officials to guide on what these AI models mean to workforces. And so they're looking at identifying re-skilling and training priorities and then evaluating investments across occupations, industries, and communities. So it is more around, like, policy and economic development, workforce development, it seems.

[00:11:34] So the research report itself actually starts off with really strong validation as to why this research is important. So I'll go through a couple of elements here. They said, and I think this is actually from the abstract of the research, if I remember correctly: evidence suggests workforce change is occurring faster than planning cycles can accommodate.

[00:11:56] Payroll data covering millions of workers shows a 13% [00:12:00] relative decline in early-career employment, ages 22 to 25, for AI-exposed occupations, and we've talked about that research on the podcast recently. Analysis of job postings across 285,000 firms for 62 million workers reinforces the pattern: the demand for entry-level positions has subdued while the focus has shifted to hiring for experienced roles. These shifts, alongside widespread restructuring in the technology sector, indicate that the pace of change is accelerating across the economy.

[00:12:33] States are committing billions to workforce programs while key workforce dynamics remain invisible to traditional planning tools. So there's a lot to unpack there, but this is definitely reinforcing key topics and data points that we have touched on a lot in the last few months on the podcast. But this angle that we haven't really gotten into, Mike, and I really like the direction of where they're going here, is when you look at economic development: in [00:13:00] Ohio we have JobsOhio, and they look at this, you know, training and re-skilling of employees and trying to prepare for, well, where is the economy going?

[00:13:09] So we can prepare people in our states for that type of work. And their point here is, if you don't understand what AI models are capable of and where the crossover is with what these models are capable of doing, then your state isn't gonna be prepared to upskill your workforce. So it's actually a very, very important point, which is why I kept thinking, okay, I wanna really dig into what they're doing here.

[00:13:31] So then the abstract continued: by the time these changes appear in official statistics, policymakers may already be reacting to yesterday's disruptions, committing billions to programs that target skills already displaced. Without forward-looking capability to test strategies before implementation, states cannot distinguish investments that prepare workers from those that arrive too late.

[00:13:55] Existing workforce planning frameworks were designed for human-only [00:14:00] economies. They track employment, wages, and productivity, but were not designed to measure where AI capabilities overlap with human skills before adoption reshapes occupational structure. So I'll give a little bit more context in a moment, but Mike, as I'm saying this out loud, I actually see the educational components of this as well.

[00:14:21] Like, this is the kind of stuff, at a high school level and in higher ed, we should be doing the exact same things. Like, for the majors that we're offering, that we're having kids go through four years, six years of college for, are the skills they're leaving school with even relevant to the current workforce, or to where these models take us?

[00:14:38] So, yeah, it's 

[00:14:40] Mike Kaput: interesting. I don't know about you, but as I've talked with schools about designing curriculums, especially around AI, it's like they're in such a hard position, where you have to plan this curriculum out years in advance, at least a year in advance. Yeah. It's really, really difficult to do.

[00:14:54] And they kind of speak to that in this research about how we're all just kind of playing catch up. Yep. Using traditional [00:15:00] methods. 

[00:15:00] Paul Roetzer: Yeah. And even, you know, again, as we think about 2026 planning and you think about staffing for your company, these same things hold true. This is why I created JobsGPT, to try and, like, project this stuff out.

[00:15:10] So, like, are our staffing plans for next year even relevant, or are the AI models so capable of some of the skills we're hiring for that we just don't need as many people as we did previously? So, okay, a little bit more on the index, because again, now that I'm talking out loud about this, I'm actually seeing a lot more relevance to this research.

[00:15:30] So it says: the index measures where AI systems overlap with skills used in each occupation. And this is really important related to the CNBC headline, Mike, that you led with. Yep. A score reflects the share of wage value linked to skills where current AI systems show technical capability. For example, a score of 12%, and 11.7% is what the headline said, means AI overlaps with skills representing 12% of that occupation's wage value, [00:16:00] not 12% of jobs.
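
To make that scoring concrete, here is a minimal sketch of how a wage-value overlap score like the one described above could be computed. The occupation, skill wage weights, and AI-capability flags below are hypothetical illustrations, not data or code from the actual Iceberg Index.

```python
# Hypothetical sketch of an Iceberg-style overlap score: the share of an
# occupation's wage value tied to skills that current AI systems can
# technically perform. All names and numbers below are made up.

def overlap_score(skill_wage_share: dict[str, float], ai_capable: set[str]) -> float:
    """Return the fraction of wage value linked to AI-capable skills (0.0 to 1.0)."""
    total = sum(skill_wage_share.values())
    exposed = sum(share for skill, share in skill_wage_share.items() if skill in ai_capable)
    return exposed / total

# Wage value attributed to each skill for one hypothetical occupation.
payroll_clerk = {
    "data entry": 0.30,
    "report drafting": 0.25,
    "vendor phone calls": 0.25,
    "exception handling": 0.20,
}

# Skills where current AI systems are assumed to show technical capability.
ai_capable = {"data entry", "report drafting"}

print(f"Overlap score: {overlap_score(payroll_clerk, ai_capable):.0%}")
# -> 55%: AI overlaps with 55% of this occupation's wage value, not 55% of jobs.
```

Again, a score like this measures technical skill overlap, not predicted job loss.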

[00:16:01] So it's all about skill overlap, not job displacement. And this is where I started getting confused. It's like, what exactly is the headline saying? And, you know, what are we trying to convey here? So then they actually have an FAQ on the Iceberg page, and it says, does the index predict job loss or displacement?

[00:16:19] No. The index reports technical skill overlap with AI. It does not estimate job loss, workforce reductions, adoption timelines, or net employment effects. So again, they're stressing, like, this is all about trying to educate policymakers on where these overlaps exist. And then they're doing this through, and this is where I was like, I wish I could learn more about this part of it.

[00:16:42] So here's how they say they do this. They use what they call large population models to simulate the human-AI labor market, representing 151 million workers as autonomous agents executing over 32,000 skills across 3,000 [00:17:00] counties, so counties within the states in the United States, and interacting with thousands of AI tools.

[00:17:05] I don't know if I missed it, Mike, but, like, that's the part where I really wanted to say, how in the hell did you do that? Like, what is the technology behind it?

[00:17:13] Mike Kaput: They're burying the lede here, I feel. Yeah, because the 11.7% is really interesting and important. But, like, wait a second, we have something that can model this.

[00:17:22] Paul Roetzer: Where is this technology? Can I use it in advertising? Right? What is this? So yeah, that was the part that jumped out at me. And again, I may have just not been looking at the right stuff. I looked at the addendums, like the appendices in the report. I looked on the site. So I guess stay tuned.

[00:17:37] We will try and figure out more about this, but that's where the headline comes from, this idea of the 11.7% and $1.2 trillion in wages. What it means is there's an overlap of AI capabilities and skills. So then the second one, Mike, that you touched on is this McKinsey study. This is a really solid report, lots of data.

[00:17:58] It does get pretty technical. [00:18:00] They also were taking a look at the exposure of jobs and skills to AI models, and they used what they called a skill change index, which shows which skills will be most and least exposed to automation in the next five years. Again, I think there are a couple of highlight data points here.

[00:18:20] I would encourage you, if you do want to dig into this one, to throw it in NotebookLM, play around with it, look at the data. It just gets kind of dense. And honestly, like, I love McKinsey research, but I had the hardest time reading their graphs. I don't know if it's just me, and I'm not a researcher by trade, but I have to really, really spend time on their charts to figure out what the hell they're showing me.

[00:18:43] So I often just lean on whatever their callout says the chart is telling me. Otherwise, it's like I have no idea what I'm looking at. So the demand for AI fluency, as you mentioned, Mike, the ability to use and manage AI tools, has grown sevenfold, faster than any other skill in US job postings. That's a pretty [00:19:00] big deal.

[00:19:00] I like that they call this out. They said: integrating AI will not be a simple technology rollout, but a re-imagining of work itself, redesigning processes, roles, skills, culture, and metrics so people, agents, and robots create more value together. That is such a critical point. It is something we stress all the time.

[00:19:20] You cannot treat AI adoption and scaling of AI as a technical problem. It is a people problem. It is a change management issue. And so it does take all of this deeper thinking. So I really like that they touched on that. This $2.9 trillion of economic value by 2030, again, I'm not a hundred percent sure where they're getting that data point from.

[00:19:39] I've seen similar data points from them before, so it may actually be referring to a previous study. But they then do say: if organizations prepare their people and redesign workflows, rather than individual tasks, around people, agents, and robots. You mentioned, I think, this highlight that 57% of US work hours could basically be done today [00:20:00] with automation that is currently capable, and they actually break that out into 44% of US work hours today that are more knowledge-based stuff, and then 13% that are more manual or robot-based.

[00:20:11] They did do a cool breakdown. They showed these seven archetypes that I would recommend people look at; this was actually a pretty understandable graph. So they show less automatable and then more automatable on a spectrum, basically, and they break it into seven. They have people-centric, which is future work done mostly by people; people-agent, which is future work done mostly by people with agents, and that's gonna be 21% according to them.

[00:20:37] 34% is the people-centric. And then agent-centric is future work done mostly by agents; that's 30% of the current workforce. And then they get into people-robot, robot-centric, and then people-agent-robot and agent-robot. So again, they're looking at the people, the agents, and the robots, and trying to look at the future of all work.

[00:20:57] This isn't just knowledge work. [00:21:00] So, a quick kind of synopsis here. This touches on a lot of what I covered in my Move 37 Moment for Knowledge Workers keynote at MAICON. So I'll pull in a couple of key points from that talk that are sort of supported by this data. The first thing is the future of work hypothesis.

[00:21:17] So what I've been working on since spring of this year is this hypothesis: within one to two years, AI model advancements and agent capabilities will force a radical transformation of talent, teams, and organizational structures. Leaders face conflicting pressures to take a responsible, human-centered approach to AI adoption while leveraging AI for near-term gains in efficiency, productivity, creativity, and profitability.

[00:21:42] The second point is we do not need to reach AGI, however we define it, and we'll touch on this a little bit more in a couple of upcoming topics today, to completely transform business, the economy, and society. Companies will put a premium on AI literacy, which we just talked about with the sevenfold increase in AI fluency demand, [00:22:00] interpersonal communications, creativity, critical thinking, curiosity, emotional intelligence, imagination, and adaptability.

[00:22:07] These are the kinds of things we think become more important in the hiring process and the development of talent. Two critical aspects of everyone's work, especially knowledge work: knowing what questions to ask of the AI systems and knowing what to do with the answers, and then knowing how to talk to, collaborate with, and learn from AI.

[00:22:25] One other key point here, Mike, that we've talked a lot about is this idea of an economic Turing test. So while we look at this Iceberg Index and the skills index and all these things, at the end of the day, when things really start to change is when businesses make a decision to hire an AI agent, or a collection of agents working together, instead of a person.

[00:22:46] Not just for tasks and projects, but for full jobs. And that's the part where we're anxiously watching for this to become the thing. Right now, the job loss we're seeing is because humans can get work done [00:23:00] more efficiently, they can produce more, not because AI agents are actually doing all the work that would replace an actual human, right?

[00:23:07] It's just efficiency. So that's why it becomes extremely important to consider exposure as the models get smarter and more generally capable, which is sort of what the McKinsey report is starting to look at with agents and robots. So again, really important research. Both of these reports seem well done, both, you know, noteworthy for sure.

[00:23:26] But yeah, kind of dig into 'em. At a high level, I think those are some of the key things for people to think about, again, as we enter 2026 planning.

[00:23:33] Mike Kaput: Yeah. And I would just encourage our audience, too: if you're, you know, a knowledge worker like me, not an entrepreneur owning a business, invert the economic Turing test for yourself and ask why someone would hire you over an AI agent or a swarm of connected agents.

[00:23:49] I mean, that's a really helpful and useful question to ask as you kind of evolve your own career. Yep.

[00:23:55] Is There an AI Bubble?

[00:23:55] Mike Kaput: Alright, next up: famed investor Michael Burry, best known for predicting the 2008 housing market collapse, has launched a public campaign betting against the AI industry. After deregistering his hedge fund, Scion Asset Management, Burry is using a new newsletter to argue that the sector is in a bubble comparable to the dot-com era.

[00:24:15] His central thesis, developed with researcher Phil Clifton, posits that the massive capital expenditure on AI infrastructure, which is projected to reach trillions over the next five years, far outpaces actual end-user demand. Burry specifically challenges accounting practices at companies like Nvidia, alleging they overstate the useful life of chips that quickly become obsolete in order to justify the costs.

[00:24:41] Now, Nvidia has actually aggressively pushed back against this. They sent a seven-page memo to analysts and defended their accounting and stock-based compensation as consistent with industry peers. Palantir CEO Alex Karp took a more informal approach and dismissed Burry's short positions as batshit [00:25:00] crazy.

[00:25:00] Burry, however, maintains the industry faces a reckoning similar to the telecom and Cisco crash of the late 1990s. So, Paul, I guess the question here is, is there a bubble? And then also, this jumped out at me in some of the reporting on this. This is a point from Burry and Clifton in their research,

[00:25:19] saying the investment world is expecting far more economic importance out of this technology than is likely to be provided. Just because the technology is good for society or revolutionizes the world doesn't mean that it's a good business proposition. What would you say to all that?

[00:25:35] Paul Roetzer: Yeah, so I don't know.

[00:25:36] We touched a little bit on this on episode 182 when we were talking about Nvidia's earnings, and I think I even mentioned Burry at that time. Yeah. Just in case you didn't listen to episode 182, I'll do a really quick recap of the points I was making there. So one, we're on the leading edge of an intelligence explosion.

[00:25:53] AI is going to be everywhere and in everything; that to me just is inevitable. Nvidia's [00:26:00] rise, and a lot of the related stocks, and certainly the valuations of some of these AI startups, has really come from these foundational models. Now, Nvidia in particular, from the training of these models.

[00:26:11] But the bet that all of these companies are making, and this gets to the demand question, will the demand be there, as Burry and his partner are questioning, comes down to the amount of intelligence that society will demand, and specifically inference. So again, the models are trained and then they go through post-training.

[00:26:32] And so the Nvidia chips are used to do all these things, and the build-out of data centers and the requirements for all this energy, all of it has largely been about building the models. Moving forward, it's all about delivering those models to all of us through devices, through AI assistants. But those models aren't just doing text anymore.

[00:26:52] They're doing reasoning. They're doing image generation, video generation, audio, 3D worlds, AI agents, robotics. All of these things are [00:27:00] gonna require dramatically more computing power, dramatically more energy than what is currently available in the world, which is why they're all spending tens of billions, hundreds of billions on the build-out.

[00:27:13] Hmm. So most people I talk to, and I talk to a lot of very savvy investors, and I get asked all the time about different stocks, the general response I always give is: people, even very savvy investors, have no concept of where this all goes, how these models advance, what the demand is gonna be for this sort of stuff.

[00:27:33] So, as I've said, there will be losers in this. There will be multi-billion-dollar companies that just go away. There will be dramatic drops in stock prices. We'll touch a little bit, I think, on Nvidia's drop last week, when it came out that, you know, Google was doing deals for their TPU chips. So there's gonna be all kinds of stuff happening, but you have to play the long game.

[00:27:55] You have to look at where this is going. Like, I think back to the doubt [00:28:00] investors had in Google after ChatGPT was released; Google got destroyed for a while. There were just all these questions about, can Google survive? It's like, you forget who created all this. This is why I was always like, why would you bet against Google?

[00:28:13] I never understood it. They were better positioned than everybody. But everything has become about the next earnings call. And so that's why, with all this AI bubble talk, everyone always anxiously awaits the Nvidia earnings call, Microsoft, Amazon, Meta, Google. They're waiting to see, are they gonna keep building out?

[00:28:32] Are they gonna keep investing in the future? Do they have high conviction about where this goes? So, specific to Burry, I have no idea. I mean, obviously the guy bet right in 2008; he has bet wrong numerous times since then. He obviously knows more about this stuff than I could ever know in terms of the actual accounting side of everything, the finances.

[00:28:53] He picked a fight with Elon Musk this morning; he's just going after everybody. So who [00:29:00] knows if he ends up being right on some of this? But I don't know, I have pretty high conviction about the overall premise of where this all goes, without getting into the fine details of balance sheets and the accounting methods people are using to look at the depreciation of chips over time, things like that, that Burry's assessing.

[00:29:23] I've just always looked at things like, I'm not smart enough as an investor to challenge people on that level of thinking. For me, it's about the companies I'm exposed to, the people I talk to, my belief in where this all goes. So the advice I'd give here, and sort of my big takeaway on this AI bubble thing, is you have to consider your own position from a personal investing perspective, but also from a career and business perspective.

[00:29:48] To your point, Mike, whether you're an employee or a business owner, an entrepreneur, or you're just thinking, where do I invest my retirement money? I think what you have to do is have a long-term thesis [00:30:00] about AI. And then you have to decide what your level of conviction is. So again, I always stress, I'm not giving anyone investing advice on any stocks.

[00:30:09] I get asked by friends, family, coworkers, but I don't give advice on specific stocks. But for me, I'm 47. I bet everything personally and professionally, starting in 2016, that AI would change the world, that every business and every industry would become AI native, AI emergent, or obsolete. So at that time, I personally started betting on companies that I thought were best positioned to grow when demand for intelligence scaled.

[00:30:36] And then I built my own companies to help professionals and businesses accelerate their AI understanding and transformation. So, from a personal investing perspective, from a businesses-I'm-building perspective, from where I'm betting my own, you know, wellbeing and financial strength,

I believe we are on the leading edge of an intelligence explosion. I think it's just starting. I don't think we're at this, like, oh, [00:31:00] Nvidia's screwed because people are buying TPUs from Google. Okay, it's not like their business model became irrelevant overnight. So that doesn't mean that every AI company survives.

[00:31:10] As I said earlier, there's gonna be winners and losers. There's gonna be ups and downs in the stock market. There will be variables that are out of our control that will impact the markets in the months and years ahead. So in my AI Timeline course that I teach as part of AI Academy, I have this breakdown of what accelerates progress, what actually moves this faster,

[00:31:28] and then what slows it down. So on the what-slows-it-down part, there could be a breakdown in the AI compute supply chain due to, like, earthquakes, hurricanes, physical or cyber sabotage. There are things we just have no idea about. There could be catastrophic events that are blamed on AI, which gets into the political side of all this stuff.

[00:31:47] There could be a lack of value created in enterprises, like maybe it just doesn't work. I don't think that's gonna happen, but that's a possibility. Landmark IP lawsuits that impact access to training data and the legality of existing [00:32:00] AI models. Restrictive laws and regulations; we talk a lot about state regulations. Societal revolt against AI due to job loss, politics, perceptions, fears. An unexpected collapse in scaling laws.

[00:32:11] I don't think that happens, but it's talked about a lot. And then a voluntary or involuntary halt on model advancements due to catastrophic risks, where the labs say, hey, we gotta stop, it's getting too good, too powerful. So at a very, very high level, I think there's a far greater risk in sitting back and assuming this is all a bubble than in positioning yourself and your company to thrive in the age of AI.

[00:32:36] That doesn't mean there's no risk. It doesn't mean that there aren't gonna be bubble-like events where companies just disappear or valuations drop 10%. And so you have to, again, if you're late in your career, if you're sitting on a retirement fund and you're like, should I keep it all in Nvidia? That is on you and your investment advisor to talk about the risks you're taking.

As I [00:33:00] said, I'm kind of middle of the road here. I'm 47. I got a long, you know, investing career ahead of me. I got a long business career ahead of me. I'm in it for the long game, and I'm placing my bets accordingly. But everybody's gotta decide their own thing. Again, have a hypothesis about where the future goes, and then decide what your level of conviction is in that hypothesis.

[00:33:19] And let that then guide your decisions. 

[00:33:21] Mike Kaput: I love that. That's great advice. 

[00:33:24] Political Divides Over AI Are Getting Worse

[00:33:24] Mike Kaput: So, our third big topic this week: the ongoing political battles around AI are getting a bit more intense based on a number of stories that we're tracking. So first, the New York Times reports that a new network of super PACs is seeking $50 million to back candidates who prioritize AI safety regulation.

[00:33:43] This effort aims to counter a super PAC we've talked about in the past, Leading the Future, a $100 million group funded by Andreessen Horowitz and OpenAI insiders that targets regulation-friendly lawmakers. ABC News also notes that there is some fracture within [00:34:00] Trump's MAGA movement around AI.

[00:34:03] So while the Trump administration and AI czar David Sacks advocate for deregulation to win the AI race against China, former strategist Steve Bannon has been very vocal about urging the base to resist AI acceleration, citing threats to working-class jobs. There's also a big policy fight happening around federal preemption.

[00:34:25] So TechCrunch reports that industry lobbyists are pushing for a federal standard for AI that would nullify state-level safety laws. We've referenced that a little bit on past episodes, and this is a move currently being blocked by a bipartisan coalition of lawmakers and state attorneys general. Now, on top of all this, the White House just released an executive order titled Launching the Genesis Mission, which is a coordinated national effort to accelerate scientific discovery through AI.

[00:34:53] They describe this in the text as an undertaking comparable in urgency to the World War II-era Manhattan Project, and the mission [00:35:00] aims to harness massive federal data sets to train scientific foundation models and create AI agents capable of automating research workflows. So, Paul, there are a number of different threads happening here in the last week or so.

We've got certain elements of, say, the Republican or conservative MAGA movement going all in on acceleration through the White House, but also many, many people pushing back in that movement. And then also, on the other side of the aisle, pro-regulation super PACs getting lined up here. How are you sorting the signal from the noise?

[00:35:33] Paul Roetzer: You know, it's funny, I think 2026 may be the year of, like, AI and UAPs. It's like, aliens might be here and UFOs, if you're following that story. Like, go watch Age of Disclosure. It's wild. But it's so fascinating because all of a sudden there are these bipartisan sides, these factions being formed, where Democrats, Republicans, like half of 'em want disclosure of all this stuff.

Half of 'em [00:36:00] want it state regulated. It seems to just cross the aisle; no one can decide what their side stands for when it comes to these major issues. So I'm, like, half joking about the UAP thing, but I actually do think there's a really good chance you're gonna get something relatively soon from the administration on that.

So I mentioned to Mike in our sandbox last week, I said we should probably just start doing a political roundup, because I feel like the floodgates just opened. Yeah. I don't know, like the last six weeks, there's these endless tweets from senators, there's these different missions being pursued, executive orders. It is becoming very political.

[00:36:38] For sure. And I think it's because politicians are starting to realize AI touches so many areas: jobs, the economy, geopolitics. The regulation issue is a huge thing. The environment, you don't hear as much about that, but it's sort of sitting there still. The impact on defense and autonomous weapons, which is gonna become a major issue.

Scientific [00:37:00] advancement, which is what the Genesis Mission is focused on. So there's just so much going on. David Sacks, you mentioned, we talk a lot about David Sacks. He's sort of the AI czar right now. There was a bit of a hit piece on him in the New York Times that got a ton of mentions in my feed over the weekend; it said Silicon Valley's man in the White House is benefiting himself and his friends.

[00:37:23] So David Sacks is one of the All-In hosts, but the piece is basically about how he's controlling the future of AI and, as a result, benefiting his own companies and his friends' companies and things like that. So just for context, again, I'm not making a statement one way or the other on David Sacks.

[00:37:40] I don't know him. One side says, hey, this is what we need: people who are very successful, who are willing to sacrifice their own careers and investments to go do things for the government, like, this is the best of us. Then the other side is like, oh, he's just doing it for the profit and everything.

[00:37:58] So just for [00:38:00] context on what's going on, he's a polarizing figure right now, I would say, in the government and in AI. So it's worth kind of following that story. So, I don't know, the Genesis Mission, I love the idea of this. Yeah. The areas that they're focused on are advanced manufacturing, biotechnology, critical materials, nuclear fission and fusion energy, quantum information science, and semiconductors and microelectronics.

[00:38:25] I would imagine the one that's not on that list that I could see being added is space exploration. And there's just so much talk, and we haven't touched on it much on this podcast yet. But there is so much about, I think, the mining of materials in space, which is still decades off.

[00:38:47] Yeah. But the thing that's all of a sudden got everyone talking is this idea of data centers in low Earth orbit, basically within, like, this decade. You have xAI and Elon Musk with SpaceX, you have Google talking about it, you have Amazon [00:39:00] talking about it. All the labs are all of a sudden talking about, like, let's get the data centers off Earth, because then we have the sun as the energy source.

[00:39:07] And so I think, again, that would be sort of a missing component here that touches probably three or four of these areas. But I think that's the kind of thing we're probably gonna be hearing a lot more about next year too.

[00:39:20] Mike Kaput: Yeah. And also what jumped out at me reading a number of these articles was just a few narrative threads that seemed to be popping up.

[00:39:27] We'll see how long-lived these are, but what jumped out at me is, especially within conservative circles, there's a lot of talk about pro-family and child safety movements related to AI, as well as a religious component, because people start feeling like we're potentially losing our humanity.

[00:39:44] We might hear more about that. Yep. I thought the worker thing was interesting. On the right, it's more about working-class families. On the left, it's more about, like, affordability, and data centers become this kind of weird lightning rod as well around a lot of that stuff, which is fascinating. So I'm curious to [00:40:00] see which of those have legs going into the midterms.

[00:40:02] Paul Roetzer: And again, I think what we're seeing, Mike, is just these, like, trial balloons.

[00:40:07] Yeah. Both sides are just sort of floating these talking points and trying to see, okay, are there any votes sitting behind this idea?

[00:40:14] 

[00:40:15] Paul Roetzer: And yeah. And then once someone finds the wedge, it's like, okay, now let's go and let's build the campaigns around these things. But right now, I think it's just, like, testing the waters and trying to see what people care about.

[00:40:27] ChatGPT Turns 3

[00:40:27] Mike Kaput: All right, let's dive into rapid-fire topics for this week. So first up, ChatGPT officially marked its third anniversary this weekend, three years after its November 30th, 2022 release ushered in a breakthrough moment for the AI industry. Since then, OpenAI's chatbot has grown from a hundred million weekly users in its first few months to 800 million weekly active users today.

[00:40:49] Data from the Pew Research Center indicates that about one-third of all American adults have now used the tool, a share that nearly doubles to 60% when you look at adults under the [00:41:00] age of 30. And Paul, interestingly, this episode will go live on December 2nd, and on December 2nd, 2022, you wrote a blog post on the Marketing AI Institute website where you said, quote, I just tested ChatGPT from OpenAI.

[00:41:14] My immediate reaction after five minutes is that the marketing profession, business world, and society are not even close to ready for what is about to happen as a result of rapid advancements in AI. So do you still believe that, given what's happened since you wrote it?

[00:41:29] Paul Roetzer: Yeah, I would. I put that on LinkedIn yesterday and said, like, these words ring true today.

[00:41:36] Still. I'm kidding. Okay, yeah, some interesting context here. I think for so many people, ChatGPT was the moment when they woke up to AI, when they realized, oh my gosh, there's this thing and it's gonna start affecting us. But, like, I started the AI Institute in 2016, started researching AI in 2011.

[00:41:53] Like, we've been thinking about this stuff for a long time. And so, just to put in context that moment three years [00:42:00] ago, 'cause it did change everything: you have to remember, ChatGPT was a research preview. Like, they had no idea if it would work. I shared on LinkedIn yesterday Sam Altman's tweet, and it said, today we launched ChatGPT, try talking with it here.

[00:42:13] And he gave a link, and then he said: language interfaces are going to be a big deal, I think. Talk to the computer, voice or text, and get what you want, for increasingly complex definitions of want. This is an early demo of what's possible, still a lot of limitations; it's very much a research release.

[00:42:31] And then he said: soon you'll be able to have helpful assistants that talk to you, answer questions, and give advice. Later you can have something that goes off and does tasks for you. Eventually you can have something that goes off and discovers new knowledge for you.


[00:42:46] Paul Roetzer: So imagine, here we are, and those are all true.

[00:42:50] And so part of this is, like, explaining to people that sometimes the future is hiding in plain sight. Like, at that moment Google had similar tech. You [00:43:00] could actually experiment with this stuff in OpenAI's AI studio. What they released was just a user interface; the tech was already there. Yeah. And so, like, I actually went back and I grabbed, if you're watching on YouTube, the cover of our book that I'm showing.

Mike and I wrote Marketing Artificial Intelligence, the book, in, what, spring of '22, Mike, does that sound about right? Yep. Came out, like, summer, or really final edits in spring of '22. We started writing it in 2021, so, you know, at least a year, probably, before ChatGPT. And there's a section in the book that I wrote that says, what happens to marketing when AI can write like humans?

[00:43:35] So again, keep in mind this is a year before ChatGPT came out. These are the things we knew. It says: there is a race to train AI systems to generate human language at scale. When achieved, the implications, both good and bad, are immense. OpenAI, an AI research company originally backed by billionaire technology leaders like Elon Musk, Peter Thiel, and Reid Hoffman, builds AI models to do just that.

[00:43:57] It started with generative pre-trained transformers [00:44:00] called GPT and GPT-2. These are AI language generation models that automatically produce human-sounding language at scale. GPT-2 wowed the world when it was released in 2019, three years before ChatGPT, with its ability to construct long-form content in different styles, using huge amounts of content from the internet.

And it says: yet in May of 2020, OpenAI introduced a dramatically more powerful model called GPT-3. So again, if you're not aware, GPT-3 was in the world for two years, and you could use it in their AI studio. It was able to produce human-like text. In early experiments, the model was used to produce things such as coherent blog posts, press releases, and technical manuals, often with a high degree of accuracy.

[00:44:51] And then we wrote: GPT-3 is still in its early days as of 2022, and the validity of the model has not been fully explored. But the speed of [00:45:00] improvement in OpenAI's language models should be top of mind for every marketer, writer, and business leader. So Mike, my point in saying this is, like, this is why you listen to podcasts like this.

[00:45:11] This is why you pay attention to AI. Like, sometimes we can see around the corner. We were seeing years around the corner when we created the AI Institute, when we wrote the book, and maybe years around the corner is kind of hard right now. But I mean, go look at the studies. We started with the MIT study,

[00:45:32] the McKinsey study, where they're looking at, okay, agents and humans and robots, how do they work together? That's the point of all this. Like, when ChatGPT dropped and I tested it a day later, there was nothing it did that I wasn't aware was already going to be possible. I had been advising the journalism school I graduated from, three years earlier, to prepare for a world where AI could write like humans.

And people laughed at it. They thought it was ludicrous. [00:46:00] So my point here is, you have to see around the corner. AI bubble, no AI bubble, you have to have conviction about what happens, and you have to pay close attention to what these labs are doing and saying, because it often provides a preview of where we're going six months, 12 months, two years out.

[00:46:19] And that's the opportunity and the advantage you have by being one of these kind of early adopters who's paying attention to this stuff and trying to figure it out.

[00:46:28] Mike Kaput: Yeah, I love that. We've talked a number of times on the podcast about, Hey, go read Sam Altman's essays. He tends to predict the future.

[00:46:34] So I think, you know, this stuff is not a secret. It's very interesting to see that it is in plain sight if you know what to read and what to look at.

[00:46:42] Paul Roetzer: Yeah. And we'll talk about Ilya, who made an appearance. Yep. And so we'll touch on that later. But again, the reason we talk about these things: they seem abstract at the moment, but it's all connected, and you can actually start to see the future a little bit when you zoom out and, like, see all these pieces.

[00:46:59] Claude Opus 4.5

[00:46:59] Mike Kaput: All [00:47:00] right. Next up, Anthropic has released Claude Opus 4.5, a new frontier model that they describe as their most intelligent system to date for coding, agents, and computer use. According to the company, the model scores higher than any human candidate on Anthropic's internal performance engineering exam when constrained to a two-hour time limit.

[00:47:21] It also tops benchmarks in seven out of eight programming languages on SWE-bench Multilingual. And it also introduces a configurable effort parameter for the API, which allows developers to prioritize either speed or maximum capability when using the model. So Paul, seemingly, from what I've seen online based on the feedback, this is another hit from Anthropic.

[00:47:43] People seem to love this model. I mean, as a non-coder, I would say don't sleep on Claude. It's a pretty unique and incredible model. I did find it interesting that they're testing this on real take-home exams for engineering candidates. They have leaned really hard into Claude either [00:48:00] augmenting or [00:48:01] automating software engineers, depending on who you talk to.
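
For developers wondering what that effort parameter might look like in practice, here is a minimal sketch of a raw Messages API call. The endpoint, headers, and message format follow Anthropic's standard API, but the model ID and the name, placement, and values of the effort field are our assumptions based on the release coverage, so check Anthropic's current documentation before relying on this.

```python
# Hedged sketch: calling Claude Opus 4.5 with an effort setting.
# The "effort" field name, placement, and values are assumptions,
# not confirmed API shape; consult Anthropic's docs before use.
import os
import requests

response = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-opus-4-5",  # exact model ID may differ
        "max_tokens": 1024,
        "effort": "high",  # assumed values: "low" | "medium" | "high"
        "messages": [
            {"role": "user", "content": "Refactor this function for clarity."}
        ],
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["content"][0]["text"])
```

Per the release coverage, the idea is that lower effort prioritizes speed while higher effort prioritizes maximum capability.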

[00:48:03] Paul Roetzer: Yeah. They're all in on the AI researcher, and then using the AI researcher to, you know, get to the more powerful AI. Well, what we know from interviews with Dario and others at Anthropic is this is not their most powerful model. It may be the baseline model, but this is not state of the art in terms of what they have. And that being said, I've seen the same things as you, Mike, that people are loving the model.

[00:48:26] Yeah. It compares very favorably to other state-of-the-art models. But again, with all of them, whether it's Google or OpenAI or Anthropic or others, what we're getting is not the best they have. I don't know how else to stress that. These models are capable of far more than what you and I are gonna be able to do with them.

[00:48:47] It may just be that they're not safe enough to do those things with. And Anthropic in particular has shown great restraint in releasing their most powerful [00:49:00] models due to alignment and safety concerns,

[00:49:03] Mike Kaput: which also adds another layer of sincerity when some of these leaders are warning about impacts on the economy or how this is going to impact society, as they are seeing what is actually possible, not just what we all have access to.

[00:49:16] Correct. All right. 

[00:49:19] ChatGPT Shopping Research

[00:49:19] Mike Kaput: Next up, OpenAI has introduced shopping research, a new ChatGPT experience designed to automate the creation of personalized buyer guides. So, unlike a standard search query, this feature engages users in a conversation to determine specific constraints about a product or item they're looking for, like budget or usage requirements.

[00:49:40] It then scans the internet for reviews, prices, and availability, and the resulting guide it produces outlines top products, key differences, and trade-offs based on the user's specific needs. Users can interact with these results through a visual interface, marking items as not interested or requesting similar options to [00:50:00] refine the research in real time.

[00:50:02] This is rolling out now to all logged-in users across Free, Plus, and Pro plans, and OpenAI is offering nearly unlimited usage of it through the upcoming holiday season. So, Paul, I tested this out really briefly, and you know, I found it really helpful. In just a few minutes, I came up with almost like a mini deep research report, specifically tailored to comparing products, that made it pretty easy for me to make a decision. Each product gets kind of a big visual card,

[00:50:32] linked to the vendor. So it's kind of an interesting feature here that might have some bigger implications for e-commerce.

[00:50:39] Paul Roetzer: One of the big questions, Mike, we've talked about numerous times is how consumer buying behavior changes moving forward. And this is certainly, you know, heading in that direction, where it just starts to change the way you seek out information, evaluate products, and make purchasing decisions, in some cases directly from the chat interface you're in. [00:51:00]

[00:51:00] So I think that's one of the big things to watch moving into next year: how buying behaviors change, and how these AI assistants that we spend all our time in start to more and more become the place where you just do everything. And then the other thing I've been starting to hear a lot more murmurs about online, which I would think is related to this: I would not be surprised at all if we see ChatGPT ads as part of the December release schedule.

[00:51:26] I think there are elegant ways they could integrate ads into this kind of stuff. And I wouldn't be surprised at all, knowing how much revenue they need to generate. They gotta find a way to do that outside of just the $20-a-month subscriptions.

[00:51:41] Mike Kaput: Yep. 

[00:51:41] Paul Roetzer: And ads seem like the, you know, multi-billion-dollar thing that's just sitting in front of them to be solved. And, you know, I think from a business perspective, we're all trying to figure out, well, how do we show up in language models? How do we get our brands found when people are looking for things?

[00:51:59] [00:52:00] And so I think there's value in that from a business perspective, enabling a platform for people to get there, hopefully without alienating users who don't want ads. You gotta find a way to do it in a very creative and value-added way. So, yeah, a couple things to watch: how buying behavior changes, and ads integrated into AI results. Google's certainly playing in that world as well.

[00:52:22] Mike Kaput: In the meantime, just go try it out. You just click the little plus sign in ChatGPT, and you can trigger Shopping Research. Super easy to use.

[00:52:30] Google Encroaches on Nvidia’s Chip Dominance

[00:52:30] Mike Kaput: Next up, Google is intensifying its competition with Nvidia by pitching a new way for companies to access its custom AI chips. So traditionally, customers could only rent Google's tensor processing units, or TPUs,

[00:52:43] by accessing them through Google's own cloud servers. But now The Information is reporting that Google has begun negotiating with major clients, including Meta and large financial institutions, to let them run TPUs directly inside their own data centers. According to [00:53:00] this report, Meta is currently in talks to spend billions to deploy these chips on-premise by 2027, alongside renting additional capacity next year. Google executives reportedly aim to capture up to 10% of Nvidia's revenue through this expansion, telling customers that on-premise TPUs can better meet strict security and compliance needs.

[00:53:22] Now, to support this, Google has developed software called TPU Command Center, which is also designed to chip away at Nvidia's dominance in developer tools. In response, Nvidia CEO Jensen Huang has ramped up investments in customers like OpenAI and Anthropic to secure future hardware commitments.

[00:53:41] So I'm curious, Paul, how worried should NVIDIA be about Google encroaching on its core business here? 

[00:53:47] Paul Roetzer: This is what I was referring to earlier with Nvidia's stock drop, to the point where they released this tweet. This is so not Nvidia. Like, it was a very bizarre tweet. I get alerts from Nvidia's newsroom, which is how I saw [00:54:00] this.

[00:54:01] They actually tweeted, I think it was the next day, when their stock dropped like 5%: We're delighted by Google's success. They've made great advances in AI, and we continue to supply Google. So people know, Google is a massive Nvidia customer. Nvidia is a generation ahead of the industry.

[00:54:19] It's the only platform that runs every AI model and does it everywhere computing is done. Nvidia offers greater performance, versatility, and fungibility than ASICs, which are designed for specific AI frameworks or functions. That was the whole tweet. It was such a bizarre tweet. They got roasted for it.

[00:54:37] It was just instantly meme-worthy. But, I mean, again, this is one of those things that's been in plain sight forever. TPUs have been used internally since 2015. They were made available to customers in 2018. It's no secret that they have these, that they use them themselves, and that there's a massive opportunity for them to take a piece of the market, which I expect, based on my own [00:55:00] personal hypothesis, to get much, much larger.

[00:55:02] They're both great companies. This is where that overreaction to short-term news comes in. The markets are often just illogical when it comes to this. But again, they're illogical if you think long term; they're logical if it's about, you know, day trading and trying to beat the market and things like that.

[00:55:22] So, I don't know. They're both great companies. We'll see how it plays out. I still feel pretty good about Nvidia's business model, and I think Google is a great company that is going to do extremely well. It was interesting, I saw an interview with Elon Musk that came out, I think, yesterday.

[00:55:39] And they asked him about, you know, which AI companies he believes in or would invest in. Yeah. And he's like, well, I don't really invest personally, but Google's gonna be worth a lot of money. Yeah. And so the fact that he came out and just said they've got a lot going for them,

[00:55:53] that says something. You know, I think both companies are gonna do really well in the long run.

[00:55:58] Suno Embraces Training on Licensed Music

[00:55:58] Mike Kaput: In our next news [00:56:00] item this week, the AI music generation startup Suno has announced a partnership with Warner Music Group that will transition its AI platform towards models that are trained on licensed audio. According to Suno, the collaboration will support a new generation of models built using Warner's music catalog.

[00:56:17] This deal also creates a system for Warner artists to opt into the platform, allowing users to generate tracks using their specific voices, likenesses, and styles in exchange for compensation. Now, Suno frames this deal as a product evolution, but Ed Newton-Rex, a frequent critic of generative AI's use of copyrighted work and IP, characterizes this as a huge win for creatives.

[00:56:41] He notes that the agreement effectively forces Suno to shut down its old unlicensed models and shift entirely to systems trained on the licensed audio. He argued that this outcome validates the efficacy of copyright litigation, emphasizing it was only a lawsuit against Suno that compelled them to admit that they had [00:57:00] originally trained on musicians' work without permission.

[00:57:03] He does warn, though, against premature celebration. He points out that Suno still faces a bunch of other active lawsuits from other major record labels and independent musicians whose catalogs are not covered in this agreement with Warner. So I thought this was a pretty interesting development and an interesting perspective here,

[00:57:21] Paul, from Ed Newton-Rex. He actually said: so when AI boosters tell you it's too late to do anything about the exploitation at the heart of generative AI, when they tell you Pandora's box is open, point them to this settlement. What did you make of this?

[00:57:34] Paul Roetzer: Yeah, we've talked a lot about these different lawsuits related to intellectual property, specifically in this case the training data and things like that.

[00:57:41] This is kind of how I always assumed this plays out: there'll be these massive lawsuits, there might be some wins in court on both sides, but at the end of the day, the artists and the media companies are gonna see the opportunity for [00:58:00] the revenue from this. And I just think a lot of licensing deals are gonna be signed.

[00:58:05] I think the model companies eventually find ways to train on less data, more highly curated data, through these licensing deals, and get similar, you know, capabilities from the models. I don't know. I mean, I think this is probably a preview of how a lot of these lawsuits end: with licensing deals.

[00:58:26] But it doesn't solve everything, as Newton-Rex was pointing out. There's all these independents that are left behind, and, you know, is it really worth it to these artists? How much money can you actually make versus, you know, the traditional model? I don't think this really solves everything, but it is definitely a direction

[00:58:43] I see more and more of these lawsuits probably going.

[00:58:48] Insurers Retreat from Covering AI Risks

[00:58:48] Mike Kaput: Next up: major insurance providers, including AIG and W. R. Berkley, are seeking regulatory permission to exclude AI liabilities from standard [00:59:00] corporate policies. Now, according to the Financial Times, the insurance industry is moving to limit exposure to what it views as unpredictable and opaque technology.

[00:59:09] So one proposed exclusion from W. R. Berkley would bar claims involving any actual or alleged use of AI, including products sold by a company that merely incorporate the tools. Now, the shift follows several costly incidents, including a deepfake scam that cost engineering group Arup $25 million, and a tribunal ruling that forced Air Canada to honor a discount invented by its customer service chatbot.

[00:59:35] Now, while some insurers are introducing specific, what they call, endorsements to cover AI risks, these add-ons often come with strict caps. For example, the insurance company Chubb has agreed to cover some risks, but specifically excludes widespread incidents where a single model failure affects many clients at once.

[00:59:56] Now, Paul, this can get a little wonky. It's still a pretty [01:00:00] early trend, but it definitely seems like this could have big ripple effects over time. If insurers won't protect firms from AI risks, you would imagine the demands from companies related to the reliability of things like AI agents would just be sky-high.

[01:00:15] There'd be so little room for error that they would be very, very gun-shy about adopting.

[01:00:21] Paul Roetzer: I find this one really fascinating. So, you know, Mike, you and I worked together at my agency for a long time, the agency I owned for 16 years, and we did a lot of work in the insurance industry on the commercial side, with carriers as well as agents and independent agent networks.

[01:00:41] And so I spent a lot of time thinking about the insurance industry for well over a decade. I honestly hadn't really stopped and thought deeply about the implications of AI on insurance policies. Hmm. But now that I saw this topic and kind of read through [01:01:00] this, my mind is kind of racing with this one.

[01:01:05] I wanna talk to some of my friends in the insurance industry and get some insights as to what is actually going on here. I think, Mike, we actually have an AI for insurance course, a series we're thinking of doing, and some blueprints. Yeah, this is something we probably wanna dig into on the research side.

[01:01:23] So yeah, more to come. I mean, if you're in the insurance space or if you deal with contracts for your company, this is something that's probably very near-term for you. It is an area I had not really contemplated deeply when I was thinking about things that could slow AI progress down.

[01:01:41] Yeah. But there are definitely risks, especially as we start getting more and more into the agent side of this, that I would imagine most businesses have not contemplated yet in relation to their insurance. So yeah, this is a fascinating one. I'd be interested to keep this conversation going next year.

[01:01:58] Mike Kaput: Yeah, I'd be interested too. If [01:02:00] anyone working for vendors of AI technology feels inclined to reach out, I'd be curious how you answer these questions when you get them from enterprises, if you're getting 'em yet. Yeah.

[01:02:10] Paul Roetzer: Even on the model side, yeah, the insurance for these model companies has to be just absurd right now.

[01:02:17] Yeah. Yeah. I don't know. My mind's going in a lot of different directions on this one.

[01:02:21] Dwarkesh Podcast with Ilya Sutskever

[01:02:21] Mike Kaput: All right, next up. We referenced this briefly: former OpenAI chief scientist Ilya Sutskever is offering a rare look into the strategy behind his new company, Safe Superintelligence, or SSI. He just did a new interview on the Dwarkesh Podcast where he argued the AI industry is moving from a, quote, age of scaling back to a, quote, age of research.

[01:02:42] He contends that the era of simply adding more compute and data is reaching its limits because pre-training data is finite. Instead, SSI is focusing on, quote, reliable generalization, aiming to replicate the sample efficiency of human learning rather than just increasing model size. [01:03:00] So, backed by $3 billion in funding,

[01:03:01] the company is considering what it calls a straight-shot approach to development. So Paul, I know this is something you were excited about because, you know, you're a big Dwarkesh Podcast fan, big Ilya fan. When they teased this coming out, what did you take away from this episode?

[01:03:18] Paul Roetzer: I thought I was gonna take away more, honestly.

[01:03:20] Yeah. So I think I saw, the day before, Dwarkesh tweeted that this was coming, and I was like, oh man, Ilya is finally gonna say what's going on with Safe Superintelligence. It was not that. So I would say it's pretty technical. I don't know that I would advise everybody to go listen to this episode.

[01:03:40] I don't think you're gonna get a ton out of it. If you wanna understand Ilya more, he is obviously an extremely important figure in everything we're going through right now in AI, and probably where it goes from here. So it's good to hear him talk. He does not do this often, which is why I was so anxious to see this.

[01:03:56] He has tweeted three times since July, [01:04:00] and two of them were in the last, like, 72 hours. So he doesn't talk at all. To my knowledge, I don't think he's given an interview on Safe Superintelligence before. Yeah, and again, if you don't know who he is, he was the central figure.

[01:04:14] We talked about him recently. When Sam Altman got fired, Ilya was the board member who led the pursuit of that firing. So he is a very influential figure. A few things I'll note that did jump out to me a little bit. Still no clarity on revenue plans. Dwarkesh asked, how will Safe Superintelligence make money?

[01:04:35] His answer became a meme right away too. He said: my answer to this question is something like this. Right now, we just focus on the research, and then the answer to that question will reveal itself. I think there will be lots of possible answers. So there's some great memes with that quote. Then Dwarkesh asked, is the plan still a straight shot to superintelligence?

[01:04:53] Now, this answer I found very intriguing. Ilya said: maybe. I think there is merit to [01:05:00] it. I think there's a lot of merit, because it's very nice to not be affected by the day-to-day market competition. But I think there are two reasons that may cause us to change the plan, which is interesting. One is pragmatic:

[01:05:13] if timelines turned out to be long, which they might. Second, I think there's a lot of value in the best and most powerful AI being out there impacting the world. I think this is a meaningfully valuable thing. I think on this point, even in this straight-shot scenario, you would still do a gradual release of it, which is what OpenAI does, this iterative deployment.

[01:05:36] That's how I would imagine it. Gradualism would be an inherent component of any plan. It's just a question of what is the first thing that you get out the door. That's number one. So that's really interesting, Mike, because that is a variation on the straight shot to superintelligence. We were told from the beginning,

[01:05:55] basically, we're not releasing anything until we're there and we ensure it's safe. And now [01:06:00] he's sort of hedging, saying, yeah, well, maybe the safe way to do it is actually iterative deployment like OpenAI is doing, which would change the entire dynamic of what that company was built to do. So then Dwarkesh, on this idea of continual learning, says: you're suggesting that the thing you're pointing at with superintelligence is not some finished mind which knows how to do every single job in the economy,

[01:06:20] because the way, say, the original OpenAI charter or whatever defines AGI is that it can do every single job, every single thing a human can do. You're proposing instead a mind that can learn to do every single job, and that is superintelligence. To which Ilya said, yes. Hmm. Again, this is very, very different.

[01:06:38] So this actually maybe is the one thing worth listening to this podcast for. So Dwarkesh said: but once you have the learning algorithm, it gets deployed into the world the same way a human laborer might join an organization. To which Ilya said, exactly. Then Ilya expanded on this. He said: there has been one idea that everyone has been locked into, which is the self-improving AI.

[01:06:59] Why did [01:07:00] this happen? He asks: because there are fewer ideas than companies. But I maintain that there is something that's better to build, and I think that everyone will want that. It's the AI that is robustly aligned to care about sentient life specifically. I think, in particular, there's a case to be made that it will be easier to build an AI that cares about sentient life than an AI that cares about human life alone, because the AI itself will be sentient.

[01:07:24] So this is, I guess, point two it was worth listening to for. He then said: I think it would be really materially helpful if the power of the most powerful superintelligence was somehow capped, because it would address a lot of these concerns. The question of how to do it, I'm not sure, but I think that would be materially helpful when you're talking about really, really powerful AI systems.

[01:07:45] Dwarkesh said: speaking of forecasts, what are your forecasts for this system you're describing, which can learn as well as a human and subsequently, as a result, become superhuman? So again, the whole premise is: develop a model that continually learns, [01:08:00] put it out into the world in an iterative way, and then allow it to learn like a human would on the job, or like a teenager that is capable of learning many things.

[01:08:09] So Ilya said: I think, like, five to 20 years. And then Ilya did tweet as a follow-up to someone summarizing his points. He said scaling the current thing will keep leading to improvements, it won't stall, but something important will continue to be missing. So this idea that the scaling laws are broken and we're entering the age of research: it echoes what Demis Hassabis and Yann LeCun have been saying, that there's missing stuff to get to superintelligence, but the scaling laws are continuing to build smarter models.

[01:08:41] “AI 2027” Revises Forecasts to 2030

[01:08:41] Mike Kaput: All right, so next up: the AI 2027 report is a prominent forecasting scenario, which we talked about on a previous episode, that predicted AGI could arrive within the next few years. But now the project's authors are revising their estimates, acknowledging that progress appears to be moving [01:09:00] slower than their original model predicted.

[01:09:02] So co-author Daniel Kokotajlo recently stated that his personal timeline for AGI has shifted to around 2030, and he notes there's still lots of uncertainty involved. Fellow author Eli Lifland clarified that while 2027 remains a possible arrival date, their median forecast has moved back to roughly 2030. So this created a bit of a firestorm from AI skeptics, including prominent skeptic Gary Marcus, who argues that this doomsday scenario has been officially postponed.

[01:09:34] He noted the original aggressive timelines were influencing high-level policy discussions, including comments from Vice President JD Vance and White House advisors. And he actually contends that significant economic and national strategies are currently being built around what he calls a fantasy that is no longer supported by its own creators.

[01:09:54] Now Paul, we first talked about this project on episode 143, and this revision of the timeline has [01:10:00] drawn some flak online. We got White House senior policy advisor Sriram Krishnan essentially calling the whole project fearmongering. He's advocating the authors should retract or rename the project to better reflect what they now say they were trying to do.

[01:10:15] So why is this important? What does this mean for the overall timelines or expected arrival, if any, of AGI?

[01:10:26] Paul Roetzer: I think that, one, it just highlights the uncertainty around all of this, that no one really knows. It may be 2030, it may be sooner, it may be later. There's just so many variables, some of which are known and many of which are probably unknown, honestly.

[01:10:40] I would say if people want to dig into this, we did spend quite a bit of time talking about it on that episode, 143, I believe. Episode 141 is also when I did my AI timeline, like the Road to AGI timeline. I would say go listen to both of those if you want the context here. And if you're an AI Academy member, go take the AI Timeline course that I just did in September.

[01:10:59] So that's like [01:11:00] a fresher look at that. This is why, when I talk about a timeline, I include ranges. It's like, AGI is probably between 2026, 2027 and 2030. We just don't know. And it also depends on how you define it. Yeah. But the whole premise, and again, this goes back to the point I made earlier: whatever they're defining as this thing that's gonna happen in 2030,

[01:11:21] it really doesn't matter to you, to your company. If we stopped development of AI models today, if we shut off all the AI labs and all we had was today's current models, everything changes anyway. People don't comprehend how disruptive the tech we already have is. And so I wouldn't get too caught up in this, like, oh, it's now 2030,

[01:11:48] okay, I've got a few years now. It's like, no. Just move forward with a sense of urgency to figure this stuff out and get ahead of everybody else, and then pull them along with you. Because otherwise, when it [01:12:00] does show up, you're gonna have your ChatGPT moment, where we knew for years it was coming, and then it just shows up and you're like, what is this?

[01:12:07] Just be prepared. I don't know. It is an interesting conversation. It's good to go back and listen to, you know, the context around it from episode 143 where we talked about it. But don't get too caught up in it. There's just lots of uncertainty in these forecasts.

[01:12:22] Mike Kaput: It's a good mantra for 2026.

[01:12:24] Be prepared. Yeah. All right. 

[01:12:26] The Thinking Game and AlphaFold

[01:12:26] Mike Kaput: Next up, Google DeepMind has released a documentary offering an inside look at the lab's pursuit of AGI, titled The Thinking Game. The film is now available for free on YouTube to celebrate the fifth anniversary of AlphaFold, the company's breakthrough biology model. The documentary was filmed over the course of five years by the same award-winning team that produced the film AlphaGo.

[01:12:48] It centers on DeepMind co-founder Demis Hassabis, tracing how his early life shaped the company's mission to unravel the mysteries of intelligence and life itself. Now, following its world premiere at the Tribeca [01:13:00] Festival and a subsequent international tour, the film is being released publicly. So Paul, I was curious, why is The Thinking Game and its subject matter worth paying attention to here?

[01:13:11] Paul Roetzer: It was so good. You know, I was talking a lot about the AlphaGo documentary, and that was the basis for the Move 37 Moment keynote I mentioned earlier that I did at MAICON this year. I would absolutely watch this. It's fascinating on numerous levels. One is the personal story about Demis; two, just the behind-the-scenes look at the conviction he had for over a decade,

[01:13:34] like, you know, probably 20 years from the time he was little, that he was gonna change the world with AI. It's just so fascinating: solve intelligence, and then solve everything else. So I know for me, for years, going back to like 2016, when I was doing public speaking and keynotes about AI, I would use the definition Demis gave of AI as the science of making machines smart.

[01:13:57] And I would poll people, like, how many people have heard of Demis Hassabis? And I [01:14:00] was lucky to get one or two hands in the room, even as late as last year when I would do this. People just don't know who he is. And I would be like, he's gonna win multiple Nobel Prizes. He will probably be the most consequential person of our generation.

[01:14:12] And no one knows the guy. It was wild to me. So, I don't know, a few quick points I'll make about the movie that I think are noteworthy. Again, the whole thing is awesome. But, one, they start off testing Project Astra; they talked to it as Alpha. That was fascinating. I had not seen the inside story of how they did that, but they created this whole room to test their vision agent, basically.

[01:14:36] So that was awesome to see, the early iterations of what became Project Astra, which now lives within the Google Gemini app; you can actually use that technology. Two, his decision to sell DeepMind. I'd never heard him talk about that. That was awesome. So when he decided to sell to Google... again, there were multiple founders, Shane Legg, Mustafa Suleyman, and Peter Thiel was their original investor.

[01:14:59] [01:15:00] And I assume he was referring to Peter Thiel when he said this, but their investors didn't want them to sell to Google. Mm. They sold for, I think it was reportedly, about 650 million US dollars. Which, if you think about the context of today, it's like, oh my god, Safe Superintelligence from Ilya is worth 32 billion.

[01:15:16] Yeah. And they have nothing, no product roadmap, no plan for anything. And here we have maybe the most consequential AI company in human history sold for 650 million. But then there's Eric Schmidt, the former CEO of Google, talking in the documentary. He says: after the acquisition, I started mentoring and spending time with Demis and just listening to him.

[01:15:37] And this is a person who fundamentally is a scientist, a natural scientist. He wants science to solve every problem in the world, and he believes it can do so. That is not a normal person you find in a tech company. So then they go to an excerpt where he's riding in the back of a cab in London. And he says: we were able to not only join Google, but run independently in London, build our culture, which was optimized for breakthroughs, and not deal [01:16:00] with products, do pure research.

[01:16:01] That has since changed. But: our investors didn't want to sell, but we decided that this was the best thing for the mission. This is the part where you almost get chills. He said: in many senses we were underselling in terms of value; before it more matured, you could have sold it for a lot more money.

[01:16:19] And the reason is because there's no time to waste. There's so many things that have got to be cracked while the brain is still in gear, you know, while I'm still alive. There's all these things that have to be done. So, I mean, how many more billion would you trade for another five years of your life to do what you set out to do?

[01:16:38] Okay, all of a sudden we've got this massive-scale compute available to us. What can we do with that? So the whole point was: yeah, I could have made billions more, but if I can buy myself five years of having access to all of Google's infrastructure to go solve intelligence, what is that really worth to me?

[01:16:55] It was just so fascinating. And then you saw the human side, where, when they finally cracked [01:17:00] protein folding, which is AlphaFold, he's sitting in a conference room and they're like, hey, we did it, we finally figured out how to do this grand challenge in biology. What do we do now? This is probably worth trillions of dollars.

[01:17:13] And somebody said something like, well, we could just open-source it. He goes, well, yeah, do that. And, like, walks outta the room. Literally gave humanity these predictive models for protein folding, which advances medicine by probably decades, and the dude's just like, yeah, do it, go. Doesn't even contemplate the alternative.

[01:17:33] It was so cool. So, I don't know. Like I always say, if you have to bet on one lab, and this is not really a commentary on anybody, I don't know any of these people personally. Yeah. But think about the people who are leading the charge to build this, now superintelligence, since we're sort of moving past the AGI conversation.

[01:17:50] I think you have Zuckerberg, you have Elon Musk, Dario Amodei, Sam Altman, Satya Nadella, I'd probably throw in there, and Demis Hassabis. And [01:18:00] then you ask yourself: who do I actually want controlling this? If you could pick someone to solve this, who do you want that to be? And for me, I want the pure scientist who is doing this because he believes intelligence solves everything else.

[01:18:18] And when you watch this documentary, you realize that from the age of eight, that is what this dude has been doing. And so you just have a different understanding of him, and of why Google doing well in this scenario may be a really good thing for humanity, I guess, is one way to think about it.

[01:18:38] DeepSeek V3.2

[01:18:38] Mike Kaput: All right, we've got a couple more updates this week. First, DeepSeek, a Chinese research lab focused on open-source model development, has launched DeepSeek V3.2 and a high-compute variant of that model. The standard V3.2 model introduces a new architecture known as DeepSeek Sparse Attention, which is designed to significantly reduce computational [01:19:00] complexity while maintaining performance in long-context scenarios.

[01:19:04] The company highlights that this model harmonizes efficiency with reasoning, and notably, it integrates a thinking process directly into tool-use tasks. And according to its technical report, it performs comparably to GPT-5 on reasoning benchmarks. It is now available via the company's app, web, and API.

[01:19:24] And Paul, this seems like a pretty powerful, openly available open-source model from DeepSeek. They keep churning 'em out.
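A quick aside on what sparse attention means mechanically: in standard attention, every token attends to every earlier token, so compute grows quadratically with context length; a sparse variant lets each token attend to only a small selected subset. The sketch below is a generic top-k illustration of that idea, not DeepSeek's actual design, whose learned token-selection mechanism is described in their technical report.

```python
# Toy illustration of sparse attention: each query token attends only to
# its k highest-scoring keys instead of all of them. A generic sketch of
# the idea -- NOT DeepSeek's actual sparse-attention implementation.
import numpy as np

def topk_sparse_attention(Q, K, V, k=64):
    """Q, K, V: (seq_len, d) arrays; returns (seq_len, d) outputs."""
    seq_len, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)  # full score matrix (a real kernel avoids this)

    # Causal mask: token i may only attend to positions <= i.
    causal = np.tril(np.ones((seq_len, seq_len), dtype=bool))
    scores = np.where(causal, scores, -np.inf)

    # Keep only each row's top-k scores; mask out everything else.
    kth_largest = np.sort(scores, axis=-1)[:, -k][:, None]
    scores = np.where(scores >= kth_largest, scores, -np.inf)

    # Softmax over the surviving entries, then weight the value vectors.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Demo: 512 tokens, a 64-dim head, each token attending to at most 64 others.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((512, 64)) for _ in range(3))
out = topk_sparse_attention(Q, K, V, k=64)
print(out.shape)  # (512, 64)
```

The payoff is that attention cost then scales roughly with sequence length times k rather than with sequence length squared, which is where the long-context efficiency gains come from.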

[01:19:32] Paul Roetzer: Yeah, they're a player. I mean, that's the thing: when I'm talking about different labs, I was talking about the US-based labs. Obviously DeepSeek is a major player in this and can be a disruptive force to what the AI labs are doing in the US.

[01:19:46] We obviously haven't had time to play around with this, Mike. It came out, like, six hours before we came on today. But it's definitely noteworthy, and I would imagine this affects, you know, stocks. I was just trying to glance real quick at stocks today, but I [01:20:00] don't know what other variables are playing out.

[01:20:01] But usually when DeepSeek does something, it has an immediate trickle-down effect on the US stock market.

[01:20:07] Mike Kaput: Yeah. And it's perhaps one of the reasons Meta has really gone cold on open-source models.

[01:20:11] Paul Roetzer: Yeah, honestly, you're probably right. That's probably the biggest threat: DeepSeek is one-upping Zuckerberg at what he intended to do, which was to commoditize the model market with open-source models, and they've sort of beaten him to it multiple times now.

[01:20:27] Runway Gen-4.5

[01:20:27] Mike Kaput: All right, our last topic this week: Runway has introduced Gen-4.5, a new video generation model designed to deliver higher visual fidelity and precise creative control. According to Runway, the model represents significant advancements in pre-training, data efficiency, and post-training techniques. It currently holds the top position on the Artificial Analysis text-to-video benchmark, and it was developed entirely on Nvidia GPUs.

[01:20:52] And the model aims to improve physical accuracy, ensuring objects move with realistic weight and liquids flow with [01:21:00] proper dynamics. The company states Gen-4.5 will support existing control modes like image-to-video and keyframes, and is available at pricing comparable to previous subscription plans.

[01:21:12] So access is being rolled out gradually to all users over the coming days. Now, this is just another example, Paul, of some of the amazing video generation tech we've gotten in the last several months.

[01:21:24] Paul Roetzer: Yeah, this one was getting some buzz, because there was a, you know, what was it called? Whisper Thunder, or what was that?

[01:21:29] I think it was some code name for it, but it was, like, top of the charts, and people were like, which model is this? Is it Sora? Is it the next Veo? It turns out it was Runway, which is sort of an OG of video gen. I remember my first keynote at MAICON in 2019. Yeah. I featured Runway as an example of technology that was coming.

[01:21:48] So they've been around. I think when Sora and Veo showed up, a lot of people started to kind of forget about Runway, but they're a player. They're doing really cool things. Again, [01:22:00] no inside information here, I just think these guys get acquired at some point. Yeah, I don't know how they continue to compete as more and more goes into video generation.

[01:22:07] So I would think someone would swoop in and grab them. It could be an Adobe, Google, OpenAI, you know, whatever. There's just good tech here, and probably a lot of talent. I don't know that they can sustain, you know, competing as these models keep getting more powerful, but they're doing really cool things.

[01:22:23] So yeah, and I think there might be some more video stuff still to come in December. Nonetheless, we keep hearing of new video generation models, and certainly 2026 is gonna be a huge year for video.

[01:22:36] Mike Kaput: All right, a couple final announcements here, Paul, as we wrap up. First up, if you have not left us a review on your podcast platform of choice, please do so.

[01:22:44] It takes only a minute or two and really helps us out, helps us improve the show and reach more listeners. Also, as a reminder, the AI Pulse survey will be live when you listen to this, so go to SmarterX dot ai slash pulse and check that out. We'd really appreciate your [01:23:00] participation. And Paul, as always, thanks for breaking down this week in AI.

[01:23:04] Paul Roetzer: Yeah, thanks for being with us, everyone. We will be back again next week and probably have some more model news for you to digest. Have a great week. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX dot AI to continue on your AI learning journey, and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events,

[01:23:31] taken online AI courses, earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community. Until next time, stay curious and explore AI.