Is 2026 the year society finally pushes back against AI? In this year's final episode, Paul Roetzer and Mike Kaput explore the immediate future of AGI, analyzing Demis Hassabis's warning of a shift ten times larger than the Industrial Revolution and Shane Legg's prediction of human-level intelligence by 2028.
Our hosts break down critical developments, including Google’s Gemini 3 Flash, OpenAI’s staggering valuation talks, and the rise of world models that simulate physical reality.
Listen or watch below, and scroll down for show notes and the transcript.
This Week's AI Pulse
Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI.
If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.
Click here to take this week's AI Pulse.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
00:03:27 — AI Pulse
00:07:05 — AI Trends to Watch in 2026
00:31:59 — Demis Hassabis on the Future of Intelligence
- The Future of Intelligence with Demis Hassabis (Co-founder and CEO of DeepMind) - Google DeepMind: The Podcast - Apple Podcasts
- The future of intelligence - Google DeepMind YouTube
00:42:35 — DeepMind Co-Founder on the Arrival of AGI
- The Arrival of AGI with Shane Legg (co-founder of DeepMind) - Google DeepMind: The Podcast - Apple Podcasts
- The arrival of AGI - Google DeepMind YouTube
- Big ideas begin here: Sergey Brin at Stanford - Stanford University School of Engineering YouTube
00:47:53 — Are AI Job Fears Overblown?
- LinkedIn Post from Molly Kinder
- All-In Podcast
- Evaluating the Impact of AI on the Labor Market: Current State of Affairs - The Budget Lab
- AI and the Future of Work: What You Need to Know - Your Undivided Attention Podcast
00:56:05 — Gemini 3 Flash
- Gemini 3 Flash: frontier intelligence built for speed - Google Blog
- X Post from Sundar Pichai
- X Post from Jeff Dean
- X Post from LMArena
00:59:38 — OpenAI Eyes Billions in Fresh Funding
- OpenAI Has Discussed Raising Tens of Billions at Valuation Around $750 Billion - The Information
- OpenAI in Talks to Raise At Least $10 Billion From Amazon and Use Its AI Chips - The Information
01:02:19 — OpenAI Releases New ChatGPT Images
01:04:18 — Karen Hao Issues AI Book Correction
- X Post from Karen Hao
- X Post from David Sacks
- X Post from Sriram Krishnan
- Empire of AI is wildly misleading about AI water use - Andy Masley Substack
01:08:18 — AI Keeps Getting Political (Roundup)
- X Post from Ron DeSantis
- X Post from Bernie Sanders
- Data centers have a political problem and Big Tech wants to fix it - Politico
- A Roadmap for Federal AI Legislation: Protect People, Empower Builders, Win the Future - a16z
01:12:51 — AI World Models
- A new world model startup is quietly raising big money - Sources
- Universal World Simulator - RunwayML
01:17:31 — US Government Launches Tech Force
- Trump admin to hire 1,000 specialists for ‘Tech Force’ to build AI, finance projects - CNBC
- Tech for the American People - Tech Force
- X Post from Tech Force
- X Post from Scott Kupor
This episode is brought to you by AI Academy by SmarterX.
AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. You can get $100 off an individual purchase or a membership by using code POD100 at academy.smarterx.ai.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Mike Kaput: I just have this nagging feeling. 2026 is the year where the societal backlash against AI becomes real. Yeah. I think 12 months from now we're gonna be having conversations about people that looked at us a weird way because of what we end up doing. You know,
[00:00:17] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.
[00:00:25] My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all.
[00:00:53] Welcome to episode 188 of the Artificial Intelligence Show. I am your host, Paul Roetzer, along with my co-host [00:01:00] Mike Kaput. We are coming to you, I guess, it's like Christmas week, right? It is the holidays. We are recording this on Friday, December 19th, 'cause Mike and I are both trying to be outta the office, I would say, for the next two weeks largely.
[00:01:12] So we thought we'd get one more episode in, so this will be the final episode of the year. So thanks to everyone who's been with us throughout the year, we appreciate it. We will be back January 6th, Mike, I think, is our next one. Is that right? Yes. It's the next weekly. So you have one week without the weekly updates.
[00:01:29] So again, just appreciate everybody, all the, you know, amazing stuff we've been hearing from people, especially the last couple weeks, sharing their experiences with the podcast throughout the year. So thanks for being with us. We will be back bigger and better in 2026. So before we get into it, and I'll say it again probably at the end, happy holidays, Happy New Year, and I'm looking forward to being a part of your AI journey starting next year as well.
[00:01:53] All right, so this episode is brought to us by AI Academy by SmarterX. AI Academy helps individuals and businesses accelerate their [00:02:00] AI literacy and transformation through personalized learning journeys and an AI-powered learning platform. There are nine professional certificate course series available on demand now, with more being added each month, in addition to the Gen AI App Reviews.
[00:02:13] I know, Mike, you just dropped one on 5.2, right? Yep. Do we have Gen AI App Reviews come out?
[00:02:17] Mike Kaput: 5.2. It went live the day we're recording, Friday, December 19th. Awesome.
[00:02:21] Paul Roetzer: Yeah, so if you haven't checked out those, if you're part of an AI Mastery membership, those are exclusive to the AI Mastery memberships. And we have, there's probably like 14 of 'em already
[00:02:29] I think we've knocked out. I mean, it's a lot.
[00:02:31] Mike Kaput: You know, I think Jess was telling me the other day it might be over 20 at this point. Is that right? Yeah, I've lost track. I've been cranking them out.
[00:02:37] Paul Roetzer: Yeah, so definitely check that out. Again, if you're an AI Mastery member, you have access to all of those.
[00:02:42] You can go in and knock those out, like 15-, 20-minute reviews. It's a really cool format. Then the certificate series is, you know, the longer, multiple courses grouped together. So today we'll spotlight the AI for Departments collection. Right now we have three AI for Departments course series with [00:03:00] professional certificates.
[00:03:01] There's AI for Marketing, which is obviously a wildly popular one, AI for Sales, and AI for Customer Success. So those series are available for individual purchase, but if you are a Mastery member, they're included in that Mastery membership. So you can go to academy.smarterx.ai and learn more about all of the courses and the AI Mastery membership program.
[00:03:21] It'd be a great way to kick off the new year for you or your team in your organization.
[00:03:27] AI-Pulse Survey
[00:03:27] Paul Roetzer: Alright, AI Pulse. Mike, we'll take a quick look. Now, this one has only been live for a few days, so we have fewer responses than normal. Again, these are informal polls of our audience, just to get feedback and sentiment on how people are thinking about the things we're talking about on the podcast.
[00:03:40] So last week we asked: does the new Disney-OpenAI deal change your perspective on using AI video tools like Sora for creative or business projects? 62% said no, my opinion hasn't changed, by far the biggest answer. And then a mix of: yes, it'll make me more confident in the technology's legitimacy,
[00:03:58] only 13%. [00:04:00] Another 13%, I'm not sure. And then 12%, it raises more concerns about creative rights. The second question was regarding the new executive order on AI regulation. And again, if these aren't familiar topics, you can just go back and listen to episode 186. We talked about both of these at length.
[00:04:15] You can just scan the timestamps and go right to that topic if you want. So regarding the new executive order on AI regulation, do you believe a single federal standard is better than individual state laws? Now, again, informal poll, but 63% say yes, a single federal standard is better for consistency.
[00:04:34] So that is far and away the dominant answer here. Alright, and then this week, Mike, we've got a couple more questions, so you can go to... gimme the URL again.
[00:04:45] Mike Kaput: SmarterX.ai/pulse.
[00:04:46] Paul Roetzer: There we go. So this week, two questions. A new report suggests generative AI hasn't significantly disrupted employment yet.
[00:04:56] What is your personal experience with AI and job security [00:05:00] so far? We're gonna talk about that report in today's episode, and then another one we're gonna touch on today amid growing political pushback and calls for a moratorium on new data centers. That was Bernie Sanders, Senator Bernie Sanders, who is calling for that.
[00:05:13] Do you believe governments should pause AI infrastructure expansion to address resource concerns? So again, you can go to smarterx.ai/pulse, answer those questions, and be a part of the results when we talk about those on the next episode, which is January 6th. Alright. So, Mike and I were just sort of debating, again, we're recording this during a short, short work week.
[00:05:36] Things are starting to wind down, it seems, from the labs. I think most people have gotten most of what they're gonna release out before the end of this week. And so the way we decided to approach it was to sort of take a look at what are the trends moving forward. So if you're part of our AI Mastery membership, Mike and I actually do quarterly trends briefings for our Mastery members.
[00:05:56] It's a feature that's exclusive to Mastery members, and we [00:06:00] basically get together and do, like, 10 things from that quarter that are key. And then we do an ask-me-anything session with our members as part of it. So what we thought we would do here is, I would say, a little bit more informal, because it's kind of like thoughts I've been having for a while.
[00:06:14] It's not really tied directly to the podcast per se, and, like, exact trends we've been touching on. It was more of, I sat down and was like, I wonder what's gonna happen next year. And so I would not call this first segment predictions, necessarily. They're really just the things I've been looking at that seem to me to be what I would be watching for next year, from a personal perspective, a business perspective, a political perspective.
[00:06:41] And so that's what we wanted to kick off today. We're gonna get into these trends, and then we're gonna talk about a couple of really interesting podcast episodes from key leaders of Google DeepMind. And so really, these main topics today are gonna be kind of forward-looking as to where this all kind of goes over the next 12 months, maybe.
[00:06:59] And then our [00:07:00] rapid fire is gonna be all the stuff that happened this week. So Mike, I'll, I'll kick it back over to you and lead us in that conversation.
[00:07:05] AI Trends to Watch in 2026
[00:07:05] Mike Kaput: Yeah, for sure, Paul. So, like you had mentioned, we wanted to talk through what are the AI trends to watch in 2026. I know you've got quite a few, Paul. I'm not gonna step on your toes here, but after you go through them, I do wanna share:
I asked NotebookLM, which is loaded with all of our podcast episodes, what it thought, based on our conversations, were some things worth noting. I think you're actually gonna touch on most of 'em. Okay,
[00:07:28] Paul Roetzer: cool.
[00:07:28] Mike Kaput: But I would love to hear what's been on your mind and your progression of where you think we're headed, what we should be paying attention to.
[00:07:36] Paul Roetzer: Alright, cool. Yeah, and so, like Mike mentioned, we do have a NotebookLM that Mike maintains that has every episode in it. So it's kind of a cool application of AI to be able to go in. And I think, Mike, you use that quite frequently, right? Oh gosh. Like
Mike Kaput: 10 times a day. Right. And our team is using it as well to see when vendors have been mentioned, partners, things like that.
[00:07:56] Oh, it's cool.
[00:07:56] Paul Roetzer: Yeah, it's a fun application. I dabble in [00:08:00] it time to time, but I don't, I don't spend a ton of time in it, but it's a really cool way to do it. all right, so again, this is a, like I said, it's pretty informal. I did not spend a lot of time like building these out. I would say these are more of like instinct and just overall sort of feeling of, of what's really gonna matter moving forward.
[00:08:19] And so I just broke these down. And again, Mike, if there's any of these that you want to stop and drill into, just stop me along the way. Yeah, but I broke it into the technology itself, so what's happening with AI technology; the enterprise, or, like, the business implications of these things; and then society.
[00:08:38] So I'm just gonna kind of high-level these, and like I said, we can zoom into any of 'em that seem interesting. And then a lot of these do end up being things that Demis and Shane Legg talk about in their podcast episodes, so we'll have a chance to revisit those. So from a technology perspective, one of the things that I've been watching throughout this year, and I think becomes more real next year as we [00:09:00] move into more of this agent economy and the agents start becoming more real and more reliable, is agent-to-agent communications and commerce.
[00:09:07] So as businesses, something we have talked about on the podcast is this idea that the things visiting your website might not be humans. They might be people's agents, or the agents you have on your site to interact with visitors. My agent might be talking to your agents. Agents might be buying from agents. Like, we're gonna start to move into this phase where agent-to-agent becomes more commonplace.
[00:09:33] I would not say next year is the tipping point where, like, 50% or more of visitors to your site are agents and things like that. But I think this idea becomes more real as the agents become more reliable, more autonomous to a degree. So I think marketers in particular, customer service, certainly sales, all those facets of the organization, you're gonna have more and more conversations around this agent-to-[00:10:00]agent idea.
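As a purely illustrative aside (not something from the episode): one very simple way a site might start flagging likely agent traffic today is by checking the User-Agent header against published AI crawler and agent tokens. The token list below is an assumption for illustration; real deployments would rely on each provider's published documentation, IP verification, or emerging signed-agent standards.

```python
# Toy sketch: naive AI-agent traffic detection via User-Agent matching.
# Token list is illustrative only; verify against each provider's docs.
KNOWN_AGENT_TOKENS = ["GPTBot", "ChatGPT-User", "ClaudeBot", "PerplexityBot"]

def classify_visitor(user_agent: str) -> str:
    """Label a request 'agent' if its User-Agent contains a known token."""
    ua = user_agent.lower()
    if any(token.lower() in ua for token in KNOWN_AGENT_TOKENS):
        return "agent"
    return "human-or-unknown"

print(classify_visitor("Mozilla/5.0 (compatible; GPTBot/1.2)"))          # -> agent
print(classify_visitor("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15)"))  # -> human-or-unknown
```

In practice, header checks like this are easily spoofed, which is part of why agent-to-agent commerce will likely need more formal identity standards.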
[00:10:01] Another thing that's gonna happen from a technology perspective is personalization of AI assistants. We've known for a while this is a focus area, specifically for OpenAI. Demis talks a little bit about this in the episode we're gonna touch on next. So it's this idea that the AI labs are gonna give people the ability to personalize their experience with the AI assistants: the tone, the style, their personal preferences, maybe political leanings, religious leanings.
[00:10:28] Like, you're gonna be able to sort of customize these AI assistants to understand you better. And so Mike and I might use the exact same model, the exact same assistant, but our experience with them may be completely different, because it's learned our different styles and preferences and traits, and what's interesting to us, and what we research and like.
[00:10:49] All of that is gonna come into the personalization. Now, Mike, that's one we've talked about a lot. I'll stop there and see like if you have any other thoughts on the personalization side, just based on things we've looked at ourselves.
[00:10:59] Mike Kaput: Yeah, I [00:11:00] think this becomes both so, so important and such a differentiator.
At some point, it's going to be basically the only thing that differentiates between the models; they're all within, what, three to six months of each other? Yeah. Anyway, at least at the current pace. Not to say there aren't real differences between how the tools work, but I think you're just going to get so attached to whatever tool has the most context and personal information about you.
Also interesting, just as a quick wrap-up here that ties into your previous topic: as businesses are thinking about agent-to-agent communication, or showing up, for instance, in AI, so, like, AEO is a big thing right now, people often forget your tool is personalized to you. So, like, when you say, oh my gosh, ChatGPT is mentioning us when I ask for best marketing agency in Cleveland, no, it knows you work for that marketing agency.
Right. So just a word of caution, be careful. Personalization makes a lot of things wonky as well as useful [00:12:00] in your ChatGPT or Gemini account.
[00:12:02] Paul Roetzer: No doubt. Yeah, so I agree, and I think one of the big debates is gonna be how sticky that personalization is. So let's say you're a power user of ChatGPT and it gets really, really good at knowing you and adjusting to you. Does that eventually prevent you from trying the other platforms more? And you're just like, ah, Gemini's amazing,
[00:12:24] but the switching cost is so high for me because ChatGPT just gets me. That'll be interesting. I don't know that we're there yet. I don't know that that keeps people from moving from one AI system to the other. Yeah. But there are definitely times, I can even see it for myself, where I'll have a use case that I need help with, and it's like, ChatGPT just has more knowledge about our company.
[00:12:45] I'll just jump into Co-CEO because it's already there. It's personalized. And so I do find myself making decisions, 'cause I use Gemini and ChatGPT probably 50-50 right now, I would say, and sometimes I do default, from a personalization perspective, to the [00:13:00] one that's just better at that planning.
[00:13:01] And I've, like, developed a rapport almost with it for that use case. Yeah. Another one on the technology side, and we talked a lot about this this year, is the reliability of agents on long-horizon tasks. Meaning, you know, if you need something that's instant, it's information retrieval, it's a quick output.
[00:13:18] Like, I think ChatGPT calls it fast? I don't remember. Yeah, yeah. Everybody's got, Gemini and ChatGPT both have, these immediate versions: hey, if you just need something right away, use this version of the model. Yep. But if you need something that's gonna take time, you know, it could be 10 minutes, it could be eventually an hour, like,
[00:13:35] that ability to use these things to do those longer-horizon tasks, which then, as a byproduct, starts to really affect knowledge work and jobs. I think they're going to become more and more reliable there. Now again, a lot of the early work the labs are doing is in AI research and, you know, coding and things, but that'll start trickling into other areas of the [00:14:00] economy.
[00:14:00] So reliability of agents on long-horizon tasks is something that's critical. METR is an organization we've mentioned numerous times. They have kind of an emerging scaling law, in a way. This was from March of 2025: their research showed AI models had a 50% chance of successfully completing a task that would take an expert human one hour, and that was doubling every seven months.
[00:14:26] Now again, their research is specific to coding. But the AI labs, I noticed, by fall of this year were referencing that research a lot.
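As a back-of-the-envelope illustration of the trend Paul describes (taking METR's reported one-hour horizon and seven-month doubling at face value; this is our arithmetic, not a METR tool or forecast):

```python
# Projects METR's reported trend: a ~1-hour task horizon (at 50% success)
# as of March 2025, doubling every 7 months. Figures are taken at face
# value purely for illustration.
BASE_HORIZON_HOURS = 1.0
DOUBLING_PERIOD_MONTHS = 7.0

def projected_horizon_hours(months_after_march_2025: float) -> float:
    """Task length (in expert-human hours) completed at ~50% reliability."""
    return BASE_HORIZON_HOURS * 2 ** (months_after_march_2025 / DOUBLING_PERIOD_MONTHS)

for months in (0, 7, 14, 21, 28):
    print(f"+{months:>2} months: ~{projected_horizon_hours(months):.0f} hour(s)")
# If the trend held, +21 months (roughly the end of 2026) would be
# ~8 hours: a full expert workday.
```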
[00:14:37] Mike Kaput: Yeah.
Paul Roetzer: And I think that same level of research will start happening in, like, the legal profession, HR, marketing, sales, where you start to say, okay, here's this thing it couldn't do before.
Like, imagine in marketing: we're gonna do a product launch, building an entire strategy and then developing all the components of that strategy, emails, landing pages, ads, media buying plan, all those things that [00:15:00] might take a human expert hours. You're gonna start to have these research reports that'll look at the ability of these agents to do these longer-horizon tasks.
So I just think that's an area the labs are going to watch, and I think individual industries are gonna start to pay much closer attention to that one, Mike.
[00:15:18] Mike Kaput: Yeah. My God, I feel like we've talked about this before, but boy, is AI verification going to be such an in demand need and skill and maybe even job moving forward?
You already know it's an issue, but with agents actively doing work and able to do all sorts of this kind of longer-horizon work, your success will be determined by your speed and ability to verify those outputs, I feel like.
[00:15:45] Paul Roetzer: Yeah, and I mean, that definitely gets into the impact on business, but I'll just say, like, straight up on our end: we just made a hire for this exact thing.
[00:15:53] Yeah. So we just brought someone in who will start in January, because we think there's a whole new age of [00:16:00] research to be done, and verification, to your point, Mike, is fundamental to that. Like, we can perform research tasks using deep research in 10 to 15 minutes that would've probably taken 10 to 15 hours, but verification of that information is critical.
[00:16:16] And so until we staffed it and had a vision for how to do that, we just couldn't apply deep research to what we were doing on a daily basis. Yeah. So that was a gap we identified. And I do think, yeah, you're just gonna start to have roles where people's job is to oversee: to guide, to develop the idea, to give the project to the agent, but then to oversee the thought process, analyze the chain of thought it went through, verify citations, things like that.
[00:16:42] So you could definitely see that applying, where more and more people's jobs is managing the agents that are doing the long-horizon tasks.
[00:16:49] Mike Kaput: Yeah. I find my own work already falling into that category. We're not doing a ton with, you know, agents that are truly autonomous. But even just so much of my work now is [00:17:00] verifying AI outputs and being really smart about how I'm allocating AI resources and orchestrating that work rather than creating the work myself, you know?
[00:17:08] Yep.
[00:17:09] Paul Roetzer: So again, building on that, some other dimensions of progress for these AI models, things to watch for where we've seen a lot of progress this year, or that are at least, you know, increasingly becoming part of the conversation. Multimodality: we know text, image, video, audio, like, those are core areas,
[00:17:25] and they're starting to unify them into single models. Reasoning capabilities are fundamental, and they continue to improve. We just saw the updates from both ChatGPT, with the new model from OpenAI, and Gemini: the continued ability to do more reasoning. World models are a huge topic,
[00:17:43] the ability to understand and interact with the world around you. It's gonna be fundamental; we'll talk a little bit more about that one with Demis. Recursive self-improvement we talked a lot about on, I think it was, episode 186, where I was explaining that one. Continual learning is one you hear in every interview with the leading researchers at top labs; they all mention continual learning now, which I think I also mentioned on
[00:18:05] episode 186. It's the idea that you put a model out into the world and it actually gets smarter based on experience, kind of like a human would. You know, you think about a teenager: they're experiencing the world and they're constantly, you know, updating their own understanding and their own capabilities.
[00:18:19] Models don't do that naturally. You train 'em, and then they have a stopping point, and then they don't really adapt until a new training happens or fine-tuning happens. Context windows: getting bigger, more reliable. So imagine being able to hold the entire corpus of knowledge of your company within a single context window, and the AI assistants, with high degrees of accuracy, being able to interact with that knowledge base and accurately output things,
[00:18:45] so there's very few hallucinations. The context windows become critical. And then, in a sort of related area, memory: the ability to remember every interaction, all these things, and that enables, like, personalization. A few other things: as the [00:19:00] multimodality becomes increasingly better,
we've seen this with Veo and Imagen from Google, we're seeing it with ChatGPT's capabilities, Midjourney, Runway: image, video, voice, all becoming indistinguishable from reality. I think we're basically on the cusp of that with all of those modalities, and I think next year we have to deal with this as a society, of, like, how do we go from here?
[00:19:26] Robotics: tons of talk on robotics. I was actually just talking with Jeremiah Owyang yesterday, who was a speaker at MAICON, and he was sharing some of the things he's seeing out in Silicon Valley with robotics. I don't think most people have any clue how fast robotics is moving, and how many companies are working in that space, and what's going on.
[00:19:49] So I do think robotics is gonna be huge. And then consumer hardware is another: the infusion of this stuff into not only your phones, but into glasses [00:20:00] and other ways we're gonna interact. So you have Google and Apple; OpenAI is obviously making a big push into hardware. So that's gonna be key.
[00:20:08] And then two other ones, just on the technology front, Mike. Frontier models on device, as the models become more efficient: we even saw, we'll talk about Gemini 3 Flash, that came out just yesterday. And I think I said this on this podcast, but it might have been an AI Answers thing,
[00:20:26] I don't remember, but I think what Apple is doing, and I'll just reiterate this for a minute: Apple, I think, is making a bet that the frontier models of today, one year from now, will be able to be served up on device without having to go to the cloud. So for all of Apple's missteps and sort of embarrassments related to being unable to figure out this AI thing for the last couple years, and Siri still not being useful and things like that, I think there's a reasonable chance that within the next 12 [00:21:00] months, they find a way to serve up very, very powerful models on the device itself,
[00:21:05] which means you can have high degrees of privacy and control without having to go out to the cloud, out to ChatGPT for inference or, you know, Google Gemini. And so, you know, imagine, like, a very efficient version of Google Gemini living within your phone, and being able to, now all of a sudden, achieve what we envisioned for Siri, basically.
[00:21:27] And so I think that's gonna become a key thing. And then just continued progress toward AGI and beyond, into superintelligence. So I'll stop there on the technology front, Mike, and see if you've got anything else to throw in on the tech side.
[00:21:41] Mike Kaput: No, I think that's extremely comprehensive. I love it.
[00:21:44] Okay,
[00:21:44] Paul Roetzer: So then, from a business standpoint, what does this all mean? You know, we talked last week on episode 186 about this idea that everyone's sort of, like, figuring this out and every organization is scaling this, and how untrue that is. Like, in certain bubbles, maybe, [00:22:00] depending on the research report and who they interviewed and which companies they looked at, it could appear as though everybody's solved this.
[00:22:07] But then we had the Gallup poll we talked about. It was like, was it like 10% of knowledge workers are actually using it daily?
[00:22:13] Mike Kaput: Yeah,
[00:22:13] Paul Roetzer: So we're still so early in the adoption curve. But I do think that next year, 2026, we do start to see a lot more organizations moving from piloting to scaling, you know, truly integrating this in every aspect,
[00:22:26] thinking about it from a change management perspective. One of the ways you do that, and kind of the next trend I would look for, is personalization of AI use cases across organizations. So when you give out the thousands of Copilot licenses or Gemini licenses, you don't just hand them over and say, go figure it out.
[00:22:45] You hand them over and say, okay, sales, here's seven core use cases for this technology, and we've prebuilt Copilots or GPTs for you to do these seven things. So this idea that we actually think about adoption and [00:23:00] engagement with these tools through a personal lens, and we don't just hand out these licenses and think people will figure it out. Greater adoption of reasoning capabilities:
[00:23:10] we've talked about this numerous times throughout this year, where people are getting these licenses for these different AI assistants, but they're just using the most basic functionality for, you know, writing emails or summarization of meetings, which is fine. Like, those are nice productivity boosts.
[00:23:27] But using the models that can do actual chain of thought and can apply to these longer-horizon tasks, these higher cognitive-level tasks: my interpretation has been, through many, many conversations, that most organizations don't even know that's a feature within these models. Yeah.
[00:23:45] Like, they don't even understand this process. And I often would figure this out by asking: who's used deep research within one of these tools? I would stand in front of hundreds of people and ask this question, and you'd get, like, one or two hands every time. So we knew that wasn't happening.
[00:24:00] Another would be investment in AI literacy. We've talked about this a lot throughout the year. This is why we're building AI Academy and investing so much of our time and energy. It's why we have the AI Literacy Project. Everyone has come around to the fact that we need an AI-literate workforce,
[00:24:15] that we need students in schools to be AI literate. So businesses, schools, governments, everyone is prioritizing this. It's the number one skill on LinkedIn right now. It's being increasingly asked for within job descriptions. So I think by this time next year, it'll be assumed that if you want to have a job, you better be AI literate, and you better be investing in that.
[00:24:37] Something we've talked quite a bit about recently is this idea of shifting our mindset from optimization through AI tools to innovation. The phrase I used, on the slide at my MAICON keynote this year, was: optimization is 10% thinking; innovation is 10x thinking. And so we have to think about these AI tools as problem solvers and as innovators, as [00:25:00] assistants to those things,
[00:25:01] and not just how do we optimize an existing workflow by 10%. One other one I'll throw out here, something we've also talked a lot about, is no longer thinking about IQ tests as evals: to say, oh, okay, GPT-5.2 came out and it increased, like, 10 percentage points on this IQ test. That has no relevance to me as a business leader.
What we're gonna need to have, and what I hope we'll see more of within organizations, is custom evals tied to that organization, that industry, and then the individual, like people's own workflows and projects. So if a new model comes out, we can run our own internal eval and say, wow, 5.2 is dramatically better than 5 or 5.1 for this specific thing we do as an organization, or this thing I do as a person.
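As a hypothetical sketch of what such a custom eval could look like in code (nothing here is a SmarterX tool; `run_model` and the example cases are stand-ins you would replace with your own provider API and workflows):

```python
# Hypothetical org-specific eval harness. run_model() is a placeholder
# for your provider's API call; the cases reflect YOUR workflows.
from typing import Callable

def run_model(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this to your provider's API")

# Each case: a prompt from a real workflow plus a cheap pass/fail check.
EVAL_CASES: list[tuple[str, Callable[[str], bool]]] = [
    ("Draft a product launch email in our brand voice for [product].",
     lambda out: len(out.split()) > 100),
    ("Summarize this week's pipeline report in exactly 5 bullets.",
     lambda out: sum(l.lstrip().startswith(("-", "•")) for l in out.splitlines()) == 5),
]

def score(model: str) -> float:
    """Fraction of the org's eval cases the model passes."""
    results = [check(run_model(model, prompt)) for prompt, check in EVAL_CASES]
    return sum(results) / len(results)

# Compare releases on your own work, not a generic IQ-style benchmark:
# print(score("model-5.1"), score("model-5.2"))
```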
[00:25:50] And so I've mentioned this a couple times, but, like, we're working on plans to try and help people develop those evals. I've got some ideas of how to do that, so hopefully in Q1 of next year [00:26:00] we'll make some progress there. So then, a couple other quick notes just on the business side. The future of work hypothesis, which I've shared a few times publicly, but I'll repeat here again:
[00:26:09] This is my current hypothesis of what happens within one to two years: AI model advancements and agent capabilities will force a radical transformation of talent, teams, and organizational structures. Leaders face conflicting pressures to take a responsible, human-centered approach to AI adoption while leveraging AI for near-term gains in efficiency, productivity, creativity, and profitability.
[00:26:32] So that's basically the premise. And as part of that, businesses have to deal with all these unknowns. Shifts in consumer behavior. There's gonna be an explosion of entrepreneurship, which is gonna create competition coming from everywhere. There's gonna be fewer people needed to do the same work; so again, if we achieve 20, 30% efficiency gains, you don't need as many people doing the same work.
[00:26:53] Automation of tasks everywhere. Everyone becomes a creator and a developer; there's, like, no code needed, [00:27:00] no technical abilities needed to build things. A premium on data and distribution as differentiators in organizations and in industries. Increase in capacity to produce more goods and services, increase in competition, increase in productivity and efficiency, creativity and innovation, profitability, hopefully.
[00:27:18] But it comes with job loss. And so these are all things that are gonna happen. And so I'll stop there from a business perspective, Mike, and then I've got a few notes on the societal side. See if there's anything else you wanna add on the business front.
[00:27:30] Mike Kaput: Yeah, I couldn't agree more. I would just say, overall on the business front, and kind of some of the things we talked about on the enterprise, you know, with your own evals: to evaluate your workflows, to evaluate AI's impact on your job, you also have to have workflows.
[00:27:44] You have to actually know what you do in your job. And I realize that sounds obvious, but I think you would benefit greatly in 2026 by starting off really documenting the steps you take to do stuff, because having that ready to go makes it so much [00:28:00] easier to really hit the gas on AI adoption and integration.
[00:28:05] Yep.
[00:28:06] Paul Roetzer: And we've got a series of tools, we'll drop a link in there, like JobsGPT is one that can be very helpful to understand what you do, and you can even use that one to help you kind of visualize workflows, potentially. So yeah, we've got some tools that are meant to try and help assess these impacts and look out ahead.
[00:28:22] All right. And then the final section is just society overall. We've talked a lot recently about these issues related to regulations: the 1,200-plus, in the US at least, pieces of legislation at different stages across all the states; the Trump administration's efforts to preempt those state laws with a federal policy,
[00:28:45] which again was the Pulse question we asked last week. All of this is building into this massive political friction point, and we'll touch more on some of these items today. It's like we can't go a week now without [00:29:00] multiple government leaders from both sides of the aisle taking some position on potentially controversial AI topics.
[00:29:09] So it's just gonna become more and more next year. AI's impact on jobs: we're seeing it, we'll talk about this Brookings study that's maybe saying, okay, it's not yet, we're not there, but that doesn't mean it's not gonna happen. So we're gonna see a lot more of that. Again, this is one of those ones I would really love to be wrong about, but I have a very, very high degree of confidence that we are gonna see some pretty significant job disruption in early 2026 as a result of AI.
[00:29:41] And then I think it's gonna be a very challenging period for entry-level workers that are graduating in the spring of '26. That might be when we really start to hear a lot of firsthand data about how challenging it's becoming to find jobs. And then, AI's outsized impact on the [00:30:00] economy: meaning, we touched recently on this idea that if it wasn't for the AI infrastructure spend right now and everything that's going into this, we would likely be in a recession.
[00:30:09] I mean, it is accounting for a disproportionate amount of the growth in the economy right now. And then the other one I'll kind of end with, on just big-picture society and economy, is IPOs next year. I mean, we have potential for Anthropic, SpaceX, OpenAI. We could be seeing three to five massive IPOs next year in the AI and AI-related space.
[00:30:33] And then I guess the other one I would throw out there is just society kind of trying to come to grips with where we are: parents having to try and figure this out, schools having to try and solve for it, businesses having to deal with it. It's just gonna start to become very real in 2026. And a lot of these issues we've either been ignoring or just touched on, I think they're gonna come to the forefront, and it's gonna be very important that we have more conversations and we try and solve some of these things.
[00:30:59] Mike Kaput: Yeah. That, [00:31:00] yeah, I agree. I tend to think, I just have this nagging feeling 2026 is the year where the societal backlash against AI becomes real. Yeah. I think we're gonna 12 months from now be having conversations about people that looked at us a weird way because of what we, what we end up doing, you know?
[00:31:18] So that'll be interesting to see how that plays out.
[00:31:21] Paul Roetzer: Yep. And I would say, just at a very high level, I'm still super optimistic. I think we're also gonna see incredible innovation in science and, you know, advancements in medicine, and, like, there's gonna be all these amazing byproducts of this.
[00:31:35] And a lot of times on the show we deal with, like, the challenges and things like that, and maybe we don't talk enough about the opportunities and the near-term positive outcomes. That's something we'll see too: there's gonna be a lot of advancements made. I think sometimes it just might be overshadowed by the negatives, 'cause that's what, you know, draws more eyeballs and clicks and impressions, I guess, for sure.
[00:31:59] Demis Hassabis on the Future of Intelligence
[00:31:59] Mike Kaput: All right, so our [00:32:00] next big topic this week: Google DeepMind CEO Demis Hassabis did an interview on the Google DeepMind podcast, and he basically outlined a philosophical and scientific roadmap for the future of AI. So, Hassabis touched on a lot of stuff in this episode, but basically was arguing that the path to AGI requires moving beyond language models to world models,
[00:32:22] And those are systems capable of simulating physical cause and effect. He posits also that if the human mind is in fact, computable AGI could serve as a comparative simulation to isolate unique human traits like creativity and emotion. Also looking forward, he warns that the transition to AGI will likely be, in his words, 10 times bigger and happen 10 times faster than the industrial revolution.
[00:32:49] He also said DeepMind is currently allegating half of its resources to scaling and half to the pure innovation needed to bridge the gap to something like AGI. So Paul, [00:33:00] I know longtime Demi watchers here on the pod, but what did you take away from this conversation? I think it was their last episode in their series for the year.
[00:33:09] Paul Roetzer: Yeah, so Hannah Fry is amazing. Professor Hannah Fry: brilliant mathematician, author of, you know, wonderful books. And she does a great job with this series. I think this was their fifth season, maybe. Yeah. So I've always been a huge fan of the DeepMind series. Great to go back and listen to these episodes.
[00:33:25] A few things that jumped out at me. I actually listened to this twice, once just listening, and the second time taking notes while I was listening. So he touched on this idea of jagged intelligence, which we hear about a lot; Ethan Mollick may have been the first one, I think, to coin this idea of jagged intelligence.
[00:33:42] And the basic premise there is that we have this, like, PhD-level AI at times. There's things you could use Gemini for where you're like, okay, it is at PhD level or beyond in its capabilities. And then there's other things where it's not even at a high school level. Like, it'll just make stupid mistakes.
[00:33:58] And that jagged [00:34:00] intelligence is really what prevents them from having confidence that, okay, we are at or very, very close to true AGI, until we solve this jaggedness, I guess. He talked about different dimensions of progress. One was consistency, meaning its error rate drops dramatically. Reasoning and thinking, you know, which we touched on already.
[00:34:21] And then he got into large language models. He said, like, they basically start with all of human knowledge and then you try and compress it down. And so I'll read an excerpt here, because this relates to one of the dimensions we talked about. He said: I think the main issue at the moment is we don't know how to use those systems in a reliable way fully yet, in the way we did with AlphaGo.
[00:34:40] So they talked about AlphaGo; we talked about that in a recent episode, the AlphaGo documentary. But of course that was a lot easier because it was a game. So in AlphaGo there were these known outcomes; it was known what you're trying to achieve, whereas language models are sort of meant to be general, and solving many things at once.
[00:34:54] He said: I think once you have AlphaGo, you could go back, just like we did in the Alpha series, and do an [00:35:00] AlphaZero, where it starts discovering knowledge for itself. So he's saying these language models are programmed through training processes and they don't really discover anything new on their own,
[00:35:09] but we could probably get to that point where we can have an AlphaZero-like model that starts to learn on its own. He said: I think that would be the next step, but that's obviously harder. And so I think it's good to try and create the first step with some kind of AlphaGo-like system, and then we can think about an AlphaZero-like system, meaning it starts and learns on its own.
[00:35:29] But that is also one of the things missing from today's systems: the ability to learn online and continually learn. So we train these systems, we balance them, we post-train them, and then they're out in the world, but they don't continue to learn out in the world like we would.
[00:35:44] So this is that continual learning idea I shared earlier. And then he finished: I think that's another critical missing piece for these systems that will be needed for AGI. He touched briefly on the scaling laws; throughout the year there were different people saying, oh, they're slowing down, or we hit a wall.
[00:36:00] He said: I think we've never really seen any wall ourselves. He said there might be some diminishing returns, but you're not gonna, like, double every year, you know, continually. But that doesn't mean progress is necessarily slowing. You mentioned the research focus, and this is one of the areas where he's very confident in DeepMind's
[00:36:17] ability over others: that they can continue to invest heavily in research while productizing these models and introducing them into the world, where someone like an OpenAI is having to divert a lot of what would be, you know, kind of mid- and long-term research efforts to focus on trying to figure out how to make money so that they can keep raising more money.
[00:36:40] And again, Demis talks with such humility. Yeah. But it's just, like, stating facts. It's like, hey, we have an advantage here. Like, we're not sacrificing the long-term innovation for productization; we're doing both well. One interesting thing Hannah Fry asked about is this idea that in [00:37:00] AlphaGo, when, you know, it's winning at the game of Go, it had this ability to have a confidence score in its decision making.
[00:37:05] And this actually goes back to even IBM Watson, back in 2011 when I first started studying AI. It was a similar concept, where the AI had a confidence level, a feeling for the probability that what it was gonna do was correct. And so she asked about the idea: could language models have that?
Like, could they actually have a confidence score that would actually enable them to reduce hallucinations? And he seemed to think that was a viable thing. World models and simulations: he spent a ton of time talking about that, and that was, you know, fundamental to eventually having these universal agents he would envision, that understand cause and effect and the mechanics of the world, and can intuit about physics and things like that.
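Circling back to the confidence-score idea for a second: as our own illustration (not anything Hassabis described in detail), one common proxy for a language model's confidence is the average log-probability it assigned to its own output tokens, which many LLM APIs can return alongside a completion:

```python
# Illustrative confidence proxy from token log-probabilities.
import math

def confidence_from_logprobs(token_logprobs: list[float]) -> float:
    """Geometric-mean token probability: near 1.0 = confident, near 0 = guessing."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

sample = [-0.05, -0.10, -0.02, -1.90, -0.07]  # one low-confidence token
print(f"confidence ~ {confidence_from_logprobs(sample):.2f}")  # ~0.65
# A low score could route an answer to a verify-before-trusting step,
# one plausible way a confidence signal might help curb hallucinations.
```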
[00:37:45] It can intuit about physics and things like that. it got into AI bubble. He said it's over hyped in the short term possibly, but it's still under hyped in the medium and long term, which I agree a hundred percent with. I think that's the thing a lot of economists miss and investors miss [00:38:00] startup funding.
[00:38:00] Again, with all humility, sort of taking some, some veiled shots at like these labs. He's like, well. It's possible there's some bubbles where they're raising tens of millions of dollars in valuations for basically nothing. They have no sustainable business model. I don't think he was specifically talking about openAI's at this point, but just that, yes, there's definitely a bubble when it comes to that kind of stuff.
[00:38:21] But then he got into, Google's strength, which we've talked a lot about throughout the year. So he said if there's a retrenchment, meaning like maybe there is a little bit of bubble and things pull back, he said, that's fine. I think we're in a great position because we have our own stack with TPUs.
[00:38:36] They're custom chips. We also have all these incredible Google products and the profits that all that, makes to plug in our AI into. And we're doing that with search. It's totally revolutionized by our overviews, AI mode with Gemini under the hood. We're looking at Google Workspace, at email, at YouTube, and so there's all these amazing things, including Chrome, where they're building this in and like they have a wildly successful, profitable business that [00:39:00] even if there's this, like, you know, again, retrenchment.
[00:39:03] Um, he feels pretty good about where they're at, touched on nano banana and the role that maybe plays in their path to AGI. And it is always interesting when they talk about sort of unexpected outcomes from these models. Yeah. that even they are surprised sometimes by what these things are capable of doing.
He got into the Industrial Revolution and how, you know, that stuff happened, and maybe the parallels to AI. And I think, you know, we'll talk a little bit about how Shane Legg had some similar concepts, which is basically, like, it's gonna be 10 times bigger, 10 times faster. So yes, we can learn from that, but this is gonna happen so much faster, and we have to, as a society, really start talking more about these things and trying to figure out how we can work through this,
because, while he's very optimistic, and you can't help but be optimistic as well when you hear him talk, he's also very realistic that we haven't done enough as a society yet to discuss the disruption [00:40:00] that's coming in many different areas, and we need to do that more across disciplines.
[00:40:04] Mike Kaput: Yeah. And on that last note, I just keep revisiting his quote about how this could be 10 times the impact and speed of the Industrial Revolution.
[00:40:14] And that sounds pretty grandiose, but I think it's really worth remembering, because I hear from a lot of people that make the argument like, oh, you know, we had the Industrial Revolution, we had the internet revolution, we'll figure it out, we'll change. Yeah. Like, we'll adapt. That's all true, I'm not saying you're wrong, but things can happen a lot faster and at a bigger scale really quickly, and that throws all the rules out the window.
[00:40:36] Yeah. So I think it's worth considering speed and scale, not just the nature of disruption, when you're thinking about how likely or unlikely it is that we're about to see some very strange times.
[00:40:46] Paul Roetzer: Yeah. So I'll grab one more excerpt, because when she asked about, like, what it's like to be a part of this and leading this, is it everything he ever dreamed of, he said: we're at the absolute frontier of science in so many ways,
[00:40:57] applied science as well as machine learning. [00:41:00] And that's exhilarating. As scientists know, that feeling of being at the frontier and discovering something new for the first time, and that's happening almost on a monthly basis for us, which is amazing. But then of course, we, meaning Shane and he and others who have been doing this for a long time, understand better than anybody the enormity of what's coming.
[00:41:19] And this thing is still actually underappreciated, as we referenced: what's going to happen on more of a 10-year timescale, including things like the philosophical, what it means to be human, what's important about that. All these questions are going to come up, and it's a big responsibility to be figuring these things out.
[00:41:37] But yeah, he's basically saying, like, in the next 10 years, we're gonna experience, in essence, a hundred years of change. And it's hard to prepare mentally for that, and prepare as a society, and prepare your business. Again, that's why I say, like, five-year plans in businesses, other than goals and a vision of what you think is possible,
[00:41:55] the how you get there, I don't really comprehend it. Like, I laid out [00:42:00] a five-year plan for our team recently that's literally, like, one page of, here's where I think we're gonna go, and here's, like, the steps we'll take to do it. But the reality is we're gonna be reinventing this company every, like, six to 12 months, right?
[00:42:12] Sometimes faster cycles. New things become possible that we couldn't have even done before, like deep research, which, as I just said, we just hired a role for, and it didn't exist when the year started. That's so crazy. And we're reimagining an entire research firm around it.
Mike Kaput: Unreal. I didn't even think of that. You're right. Wow. It feels like this year has been five years long. Yeah. Crazy.
DeepMind Co-Founder on the Arrival of AGI
[00:42:35] Mike Kaput: All right, so in our third big topic, we are talking about another episode of the Google DeepMind podcast, and the reason is it featured Shane Legg, who is co-founder and chief AGI scientist at DeepMind. In it, he reaffirmed his longstanding prediction regarding the arrival of AGI.
[00:42:52] So he said that he maintains his prediction of a 50-50 chance that AGI will arrive by [00:43:00] 2028. He first made this forecast way back in 2009, and he defines this as what he calls minimal AGI: an artificial agent capable of performing the cognitive tasks a typical human can do. He also talked about some of the uneven performance they're seeing in models, that kind of jagged intelligence. And looking ahead, he predicts full AGI could follow within a decade after that 2028 liftoff.
So Paul, what jumped out to you here? I think it's very interesting that they're ending the year on these AGI predictions from two of the biggest and best in AI.
[00:43:35] Paul Roetzer: Yeah. And I think it's because they know it's near. So, again, Shane's amazing to listen to. He's been at the absolute frontier of this. You know, he coined the phrase, or at least, he told the story, they later found a research paper from, like, '97 that talked about AGI,
[00:43:54] but, you know, they were unaware of that paper when he sort of coined the phrase AGI in relation to [00:44:00] what they were doing. So I would say a lot of this episode is really digging into that: like, well, what do you consider AGI? What do you consider superintelligence? How do we define these things?
[00:44:09] And so it's really fascinating to listen to. But for him, it centers heavily around generality, the ability for these things to do many tasks at or above human level. And he talked about, at one point, when they got into definitions, you could have this adversarial test. And this is more hypothetical,
it wasn't something they built. But he said, you get a team of people, give them a month or two or whatever. They're allowed to look inside at the AI, do whatever they like with it. Their job is to find something that we believe people can typically do that's cognitive, where the AI fails at it.
[00:44:42] And if they can find that, then it by definition would fail the AGI definition. But if they can't after some given period of time, like, every task they give it that a human could do, it's capable of doing, then, in essence, you're there. And so he basically said, [00:45:00] since 2009, he has predicted 2028 is when we would get to this, like, minimal AGI, as you referenced.
[00:45:07] And he hasn't changed that. He's, like, 50-50. And so he said, let's see, yeah, still 50-50 by 2028. And then, here's the one thing where he used the exact tangible example we give. He said: I think what we'll see in the next few years is AI systems going from being very useful tools to actually taking on more of the load in terms of doing really economically valuable work.
[00:45:38] Paul Roetzer: He said: I think it'll be quite uneven. It'll happen in certain domains faster than others. So for example, in software engineering, in a few years the fraction of software being written by AI is going to go up. And so in a few years, where prior you needed 100 software engineers, maybe you need 20 using advanced AI tools.
[00:45:56] Yeah. And now, I think that's actually already happening. I [00:46:00] think that's a realistic thing in 2026, probably, but that's the exact example we've talked about: that over time, you just need fewer people doing the work. And that is gonna happen across every knowledge work profession. This is not limited to coding and AI research.
It's gonna be every component. So, yeah. Again, just great interviews; they're both fascinating to listen to if you haven't had a chance. Shane does fewer interviews than Demis. Yeah. So it's possible, like, people out there maybe haven't heard an interview with Shane. And again, I go back to this idea: I'm not trying to pick winners and losers here, but when you listen to Shane and Demis talk, you can't help but hope that they are at the front edge of this, for society and humanity's sake.
[00:46:42] Like, yeah, they are researchers through and through who have given their adult lives, in Demis's case even part of his childhood, to pursuing this idea of AGI, to truly solve intelligence and science and all the biggest problems in [00:47:00] humanity. And yes, they're part of Google, and they got in there, and they're working on products too, but, like, you just can't help but feel like they are truly in this for the right reasons,
[00:47:10] and you want them to be at the front edge, because I think we have the best chance of this going well if they and their team, and clearly, like, Jeff Dean and some of these other amazing people at Google, are a part of solving this. And Demis, I think, as much as anybody, truly wants to be collaborative, and he, I think, is the first person that would send the message out,
[00:47:34] like, guys, we got there. Yeah. We need to work together. He's the one I would want to see it first, who then brings everybody together. 'Cause I think there's a chance that if some of these other labs get there first, they're not making the call to the other labs telling 'em, we got there and we should work together now.
[00:47:51] Hmm.
[00:47:53] Are AI Job Fears Overblown?
[00:47:53] Mike Kaput: Alright, so let's dive into our rapid fire this week. So first up, back in October we had talked, [00:48:00] Paul, about this research out of the Budget Lab at Yale that basically was saying widespread adoption of generative AI has not yet caused significant upheaval in employment figures. Now, there's some nuance to this report we'll get into, but basically they were saying that in the first 33 months following the release of ChatGPT, there had been no discernible disruption in the labor market.
[00:48:21] Now, this has become a bit of a political talking point, because venture capitalist and White House AI czar David Sacks this past week highlighted these findings during a recent discussion on the mega popular All-In podcast. He basically was pointing to the fact that AI is not leading to job cuts and all this is overblown.
[00:48:41] He also pointed to data we covered on a previous episode from Challenger, Gray & Christmas that showed a spike in AI-attributed job cuts in October. But then he pointed out those spikes in job cuts went away in November, and that AI is responsible for only a very small portion of [00:49:00] layoffs. So Sacks comes out with this, uses this study as basically a political talking point.
[00:49:06] Um, one of the researchers, Brookings Senior Fellow Molly Kinder, then writes on LinkedIn that, you know, it's a good debate to have. She agrees with a lot of his perspective; she actually thinks he is right that AI job fears are overblown in the short run.
[00:49:26] But she said, I think he underestimates AI's impacts in the medium to long run. So Paul, maybe walk me through what's going on here. Sacks is just, you know, understandably... the administration doesn't want to own any type of job loss from AI. He makes a good point, I guess, that we're not seeing it yet in the data.
[00:49:45] Where do you fall on this?
[00:49:48] Paul Roetzer: Yeah. So we've talked many times about how you can make data say whatever you want. So, Molly's a great researcher. We've referenced her work a couple of times. Yeah. Always follow her stuff. I think she did a very [00:50:00] good job posting on LinkedIn, kind of addressing this situation.
[00:50:04] And we'll include that in the show notes. I'll read an excerpt here from what she said, 'cause I think this is probably the best way to handle this. So she said, David is right. AI job fears are overblown in the short run. Our data shows that AI has not, in parentheses, yet, caused widespread upheaval in the labor market, and it is unlikely to do so [00:50:25] right away. I share David's skepticism of the most aggressive predictions of the AGI-by-2027 crowd. Then she went on to write: But we shouldn't expect these workforce changes to happen overnight. It's still very early days. AI adoption is slow and uneven. Even transformative technologies take time to reshape labor markets.
[00:50:47] Here's where I disagree with David. I think he underestimates AI's impact in the medium to long run. She said, consider the AI trajectories described this week by Demis and Shane Legg, which we just talked about. [00:51:00] She said, full AGI within a decade, with potentially seismic economic and social consequences.
[00:51:04] Demis suggested AI's disruptive impact could be 10 times bigger and faster than the Industrial Revolution. Both argued we need to take these futures seriously despite the uncertainty, and do far more to prepare. I wholeheartedly agree, and I bet the public does too. So again, this is still Molly's post.
[00:51:21] Americans are not just fretting about dramatic headlines today. They're nervous about what lies ahead. To Tucker's point, she means Tucker Carlson, who was on the podcast, about the meaning of work: Americans want leaders to grasp just how much their livelihoods and their kids' future opportunities mean to them.
[00:51:38] They want political leaders to have their backs and to have a plan, not just for ensuring America reaches the AGI or AI frontiers first, but for helping Americans navigate what could be unprecedented changes along the way. This won't happen if the message from the White House is, quote, there is nothing to see here.
[00:51:57] The task ahead isn't to sell AI to a [00:52:00] skeptical public. It is to shape AI's design and deployment, and our policy infrastructure, so it actually serves them. Credit to Jason Calacanis for suggesting some novel ideas. Again, referring to the All-In podcast. This isn't a right-left issue. Americans care deeply regardless of party.
[00:52:17] Mm. So I couldn't have said it any better, so I just figured I'll just read her thing. Again, I don't know David Sacks personally. I know he's doing a lot of positive things in the administration to accelerate AI development. I honestly take most of his tweets with a grain of salt, knowing he has to sort of carry this line and has to put things out publicly that the administration wants to see.
[00:52:44] I cannot believe someone like David truly believes that this isn't going to disrupt society. And I don't wanna say it's, like, gaslighting, 'cause I don't know that it's necessarily meant to be, to [00:53:00] that degree. I do think it's selective use of data to, in the near term, have a talking point that aligns with Trump. And like you said, the administration cannot own millions of jobs being lost in the next 12 months because of the midterms.
[00:53:19] Yeah. So they cannot admit that in two years' time everything could change. They cannot, and they will not. Like, you will not see that talking point from the administration, no matter how much the data says it's happening. And so I think that's just where we have to arrive at: we have to know why people would have the talking points they have.
[00:53:41] And we have to try and listen as best we can to the neutral parties, like Molly and the Brookings Institution, which may have data that shows one thing right now, but they're very open to the fact that it might show something very different in 12 to 18 months. Whereas from a political standpoint, you gotta [00:54:00] toe the line of whatever it is that's gonna align with, you know, what you're trying to get out into the industry.
[00:54:04] So again, I think David probably knows. His friends like Chamath and, yeah, Jason aren't shy about telling him, man, you're probably off the rails on this one. Right, right. And so I think he's hearing it, and I would imagine he's aware of the data that probably says it's not gonna end the way he's currently tweeting it.
[00:54:26] It is.
[00:54:27] Mike Kaput: Yeah. And we've said this till we're blue in the face, but it just bears repeating that you really just have to do your due diligence on research. Yeah. Kudos to Molly for breaking it down like that. Like, yeah, you could take her research and create that headline. She admits that fully and says, look, the research is way more nuanced than just this headline.
[00:54:45] So you gotta be really careful with this stuff.
[00:54:47] Paul Roetzer: Yep. And that's a good point, Mike. Like we always say on this podcast, never read the headline. And again, it could come from CNN, from Fox News. Yeah. From the Brookings Institution, from a tweet. Never read the [00:55:00] headline at face value. Yeah. There is almost always
[00:55:03] context around that headline that is relevant to your perspective on this and your point of view. We try real hard to not give our points of view, beyond trying to maintain that neutrality of, like, let's see this from as many sides as we can and allow you, the listener, yeah,
[00:55:23] to arrive at your point of view. And I think this is one of those instances where right-left doesn't matter. Like, we all have jobs, we all have kids who we need to worry about. And it doesn't matter what your political affiliation is when it comes to that. We're just trying to deal with the reality that all the people who are in the labs, who are seeing the future, looking at the models one generation ahead, long before you and I see them,
[00:55:48] Yeah. They sure as hell think disruption is coming. Yeah. And I'm gonna put my money on them. Like, I believe they know what's gonna happen, and I think we should be preparing as though it's not gonna [00:56:00] be all sunshine and rainbows.
[00:56:03] Mike Kaput: I could not agree more.
[00:56:05] Gemini 3 Flash
[00:56:05] Mike Kaput: All right, next up, Google has released Gemini 3 Flash, which is a new version of Gemini designed to deliver high-level reasoning capabilities at significantly faster speeds.
[00:56:16] The company is positioning this as an effort to bring the frontier intelligence of the recently launched Gemini 3 architecture to high-frequency tasks without the associated latency or costs. So this model is very smart, does very well on a number of benchmarks. And interestingly, Google reports that Gemini 3 Flash outperforms the previous Gemini 2.5 Pro model, which was very, very good, while running three times faster and using on average 30% fewer tokens to complete standard tasks.
[00:56:47] So, effective immediately, Gemini 3 Flash replaces the 2.5 version as the default model for all users in the free Gemini app globally. It is also actually rolling out as the underlying engine for [00:57:00] AI Mode in Google Search. Now Paul, this is an awesome release. I think it's important to remember that we're not just getting bigger and better models.
[00:57:10] That's not the only thing to be tracking here. We're getting much faster, smaller, and cheaper models, which kind of plays into what you've said many times, which is we're gonna see intelligence everywhere, because it is going to be small, fast, cheap, on device, maybe even basically free.
[00:57:26] Paul Roetzer: Yeah. And again, without getting into all the technical details, you kind of highlighted it.
[00:57:30] I think the main takeaway for people, without getting overwhelmed by the technical side, is the models keep getting more efficient every six to 12 months. Basically, the thing that was the frontier today, so take Gemini 3 Pro, or 5.2 Pro from ChatGPT, assume in like six to 12 months that model is gonna cost 10 times less, and it's gonna be able to be served up 10 [00:58:00] times more efficiently.
[00:58:01] Like, that's basically the trajectory: they're able to keep taking the thing that is state-of-the-art today and in six to 12 months making that available way cheaper, way faster, and eventually on devices. Yeah. And so I think that's what we're gonna keep seeing. And what they've realized is that most people just don't need the state-of-the-art model.
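(Editor's aside: to put rough numbers on that trajectory, here's a minimal sketch. The starting price and the one-step-per-year cadence are hypothetical illustrations of the trend Paul describes, not actual model pricing.)

```python
# Hypothetical illustration of the efficiency trend described above:
# if frontier-level capability gets roughly 10x cheaper every 6-12 months,
# today's state-of-the-art pricing compounds down very quickly.
cost_per_million_tokens = 10.00  # hypothetical starting price, in dollars

for year in range(1, 4):
    # assume one 10x efficiency step per year (the slow end of 6-12 months)
    cost_per_million_tokens /= 10
    print(f"Year {year}: ~${cost_per_million_tokens:.4f} per million tokens")
# Year 1: ~$1.0000, Year 2: ~$0.1000, Year 3: ~$0.0100
```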
[00:58:24] So, like, you don't need it. Even if you go to Gemini right now, I'm looking at the model choice. You have Gemini 3; I'm in the Gemini app.
[00:58:32] Mike Kaput: Yeah.
[00:58:32] Paul Roetzer: It says Fast, which answers quickly. Yeah. Thinking, which solves complex problems. And Pro, which they specifically say is for advanced math and code. So they're basically like, hey, if you're not doing advanced STEM stuff,
[00:58:44] you don't even need this model. Like, you're probably not even gonna find value in it. It's not gonna be that different. And then when you go to ChatGPT, they have Auto, which decides how long to think, which is almost always gonna give you the fastest one. Yeah. They have Instant, which answers right away, Thinking, which thinks [00:59:00] longer for better answers,
[00:59:01] and then Pro, which they define as research-grade intelligence. So they're definitely just trying to, like, get you to these lower models, 'cause they know, like, 99% of use cases don't need the Pro. Which, I actually do start to wonder, like, what am I paying the 200 bucks more a month for? Because I don't really have a use case for the Pro right now.
[00:59:19] So yeah, just good stuff. And I think, again, this is why I was saying you could have a Siri in three to six months that actually does what it's supposed to do on your iPhone. Yeah. Because they could just embed Gemini 3 Flash into it, and all of a sudden it's like, boom, you just fixed it.
[00:59:36] That would be amazing. Yeah.
[00:59:38] OpenAI Eyes Billions in Fresh Funding
[00:59:38] Mike Kaput: All right. Next up, OpenAI has initiated preliminary talks with investors regarding some massive new fundraising efforts. According to Reuters, the company is discussing raising funds at a valuation of approximately $750 billion. I think this was also perhaps originally broken in The Information as well, and if finalized, this would mark a roughly [01:00:00] 50% jump from the company's reported valuation just a few months ago.
[01:00:04] Their report indicates OpenAI could raise as much as a hundred billion in this round to support its unrelenting demand for computing power. Now, interesting wrinkle here: a key player in these discussions is Amazon, and The Information reports that Amazon is in talks to invest at least 10 billion in the company.
[01:00:20] And under these terms, OpenAI would use Amazon's proprietary training chips for AI workloads, and Amazon might potentially help sell enterprise versions of ChatGPT. So Paul, like, another week, another huge amount of money being raised by OpenAI. Are they just gonna be raising money forever? I say that almost half jokingly. Like, where does that end? What is the price tag on all this?
[01:00:45] Paul Roetzer: Yeah, I mean, I guess they're just trying to get to the IPO. But yeah, the number you cited, the 750, I think that's pre-money. Yeah, maybe, because the Wall Street Journal had it at like 830 billion.
[01:00:57] Mike Kaput: Oh, I see. Yeah. Yeah. So
Paul Roetzer: It's possible it's [01:01:00] like a pre-money valuation. So again, if you don't follow, like, the investing side of this: if the company's valued at $750 billion, or really $730 billion, today, and then they raise a hundred billion, now it's at $830 billion.
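(Editor's aside: for anyone who wants that pre-money versus post-money math spelled out, here's a minimal sketch. The figures mirror the rough hypothetical numbers Paul uses above, not confirmed deal terms.)

```python
# Hypothetical round, using the rough numbers from the discussion above.
pre_money = 730e9       # company's agreed value before new money comes in ($730B)
new_investment = 100e9  # fresh capital raised in the round ($100B)

# Post-money valuation is simply the pre-money value plus the new cash.
post_money = pre_money + new_investment
print(f"Post-money valuation: ${post_money / 1e9:.0f}B")  # -> $830B

# The new investors' ownership is their investment over the post-money value.
stake = new_investment / post_money
print(f"New investors own roughly {stake:.1%}")  # -> roughly 12.0%
```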
[01:01:12] Um, so, you know, it's funny, 'cause this might not be what's happening, but I'm almost convinced it is. Mm. We just talked about SpaceX's $850 billion, and they're gonna IPO probably in 2026. There is no way Sam Altman and Elon Musk aren't fully aware of the valuation of each other's companies.
[01:01:33] And, like, you gotta think that if SpaceX raises that, or does the internal secondary at 850, somehow, magically, the valuation of OpenAI ends up being $860 billion when all is said and done. And then, like, if OpenAI is gonna IPO for, you know, $1.3 trillion... I guess when you're worth, you know, half a trillion dollars, you gotta [01:02:00] find your amusement somewhere, right?
[01:02:02] So yeah, again, just crazy numbers. And like I referenced earlier in terms of the trends for next year, these are things to watch. We're talking about just insane values for privately held companies that are all gonna go public in the next, like, 18 months.
[01:02:19] OpenAI Releases New ChatGPT Images
[01:02:19] Mike Kaput: So some other OpenAI news as well.
[01:02:21] This past week, they have also released a major update to the visual generation capabilities in ChatGPT. There's a new version of ChatGPT Images, powered by a flagship model called GPT Image 1.5. This release prioritizes granular control over your image editing. So according to OpenAI, the model can now execute precise instructions.
[01:02:41] It can add, subtract, or blend specific elements. It'll preserve the original image's lighting, composition, and subject likeness. The company notes that the system adheres more reliably to user intent and offers improved rendering for small and dense text. So Paul, it sounds [01:03:00] like OpenAI is just making their bid to catch up with Nano Banana Pro, which kind of sucked all the oxygen out of the ecosystem with how much people were raving, in a great way, about it.
[01:03:10] Paul Roetzer: I haven't tried it, but the stuff I was just monitoring on X is, like, OpenAI got cooked by Nano Banana Pro, and they have not caught up, basically. So it's not that it's not a good, usable model, but OpenAI knew. Like, the reason it's a 1.5 is that they know this is not on Nano Banana Pro's level.
[01:03:30] Yeah. But they're obviously trying to catch up, and I saw a couple articles talking about how they really had to ramp up the investment in the image side because of Nano Banana Pro. So yeah. Again, you know, to end this year, the momentum has certainly swung to Google on basically every front.
[01:03:48] Yeah. You know, the language model, the image model, the video model, I would imagine world models, like, yeah. If you had to force rank who's ahead in the race, it seems to be [01:04:00] Google, by quite a distance at the moment. Not to say OpenAI's not gonna figure this out and, you know, get back in the game early in 2026.
[01:04:10] But Google's looking pretty good right now.
[01:04:12] Mike Kaput: Yeah. My gosh. If you had looked at the headlines 12 months ago, that's not the story they would've been telling. Yeah. They were gone.
[01:04:18] Karen Hao Issues AI Book Correction
[01:04:18] Mike Kaput: All right, next up, there's been kind of an ongoing item we've been covering on this podcast about author Karen Hao and her book on AI.
[01:04:25] She has actually now issued a formal correction in her recently published book, Empire of AI. This is, if you'll recall from past segments, regarding the water footprint of AI data centers. So a researcher, Andy Masley, published an analysis of Hao's book in the last month or two, identifying basically a statistical error regarding a proposed Google data center in Chile.
[01:04:50] And the text originally claimed the data center would consume 1,000 times the water volume of a nearby city, which is, like, an eye-catching fact in the book. Masley [01:05:00] realized that the actual usage would be roughly one quarter of what the city used. That's a huge discrepancy, roughly 4,000 times. And it was caused by basically just a unit mix-up between liters and cubic meters.
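(Editor's aside: here's a minimal sketch of the arithmetic behind that correction, using only the two figures cited above; the liters-per-cubic-meter note is just the standard unit conversion, not a figure from the book or Masley's analysis.)

```python
# Arithmetic behind the correction, using the figures cited in the episode.
claimed_ratio = 1_000  # book's claim: data center would use 1,000x the city's water
actual_ratio = 0.25    # corrected figure: roughly a quarter of the city's use

# The root cause was a liters vs. cubic meters mix-up in the source document;
# 1 cubic meter = 1,000 liters, so misreading units shifts a figure by 1,000x.
overstatement = claimed_ratio / actual_ratio
print(f"The original claim overstated usage by about {overstatement:,.0f}x")  # ~4,000x
```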
[01:05:12] So just this idea that basically there was a mix-up, and it resulted in saying that data centers use way, way, way more water in Chile than they actually do. Now here's where this gets amplified and politicized, because White House staffers David Sacks and Sriram Krishnan characterized the original claim as basically a hoax, saying that concerns about AI water usage are a hoax and not real, and the book's errors prove that.
[01:05:45] Now, Hao has since confirmed the unit error in the source document. She's issued a correction, and she detailed the updates on her website. Masley updated his analysis to commend Hao for her direct engagement and correction of the text. So Paul, [01:06:00] it's good to see Karen issue the correction. Seems like she handled this really well.
[01:06:05] So kudos to her. I can't shake the feeling, though, that this debate not only raises really important issues, like we should maybe be questioning some of the narratives around data center water usage, it sounds like, but also, at the same time, this just went off the rails and showed off, like, the worst of the internet.
[01:06:22] Paul Roetzer: Yeah, I mean, people get ugly with this stuff. Yeah. Again, anytime it's political, you just have to understand why people would be doing what they're doing and saying what they're saying. So I try to avoid getting, like, frustrated when it comes to politics, 'cause I understand enough how the game is played to just not get into this stuff.
[01:06:45] I mean, I think Karen handled it with as much humility and journalistic integrity as you can. No one wants to make a mistake. Mike, you and I have written books. Like, you don't wanna make a mistake, especially when you did the hard work and you [01:07:00] went to reliable sources and the data just wasn't what you thought it was.
[01:07:04] And, you know, nobody wants to screw up that way. And this is a bestselling book. I would imagine it's probably sold over a hundred thousand copies. I don't know the exact number, right? But it's an embarrassing thing to go through. But she owned it. She did the research, she looked it up, she made the fix.
[01:07:20] Like, the hundred thousand books that are out there are gonna have the wrong data point, and as an author, that sucks. Again, you don't want to put something out there. Now, if it's in the digital world, you go and make the change, the edit; now it's in the Wayback Machine, but, like, the site's different,
[01:07:34] and whoever reads it is gonna see something different. So, yeah, I just have empathy for Karen as a good person. I know her. We had her speak at MAICON. She's a good person, she's an amazing journalist, and she works hard, and I know she didn't make the error intentionally. Like, right, right. So, you know, I just hate when the bad side of people comes out and they feel like they need to pile [01:08:00] on a good person who made an honest mistake and obviously had no intention of falsifying information to make a political point.
[01:08:08] Right. So yeah, again, I just hope that people can be understanding. The internet can be an ugly place sometimes.
[01:08:18] AI Keeps Getting Political (Roundup)
[01:08:18] Mike Kaput: Well, unfortunately, our next item is also about politics. We basically have an increasingly regular bit here on the podcast to round up the latest political stories in AI happening right now.
[01:08:31] So this past week, this issue has kind of blown up a bit. AI infrastructure and governance are moving to the center of the debate. So first, there's a new report from Politico detailing how the tech industry is launching multimillion-dollar lobbying campaigns to counter growing voter opposition to data centers, which critics argue strain local resources.
[01:08:52] This follows a statement from Senator Bernie Sanders calling for a moratorium on new data center construction to [01:09:00] ensure democracy can catch up to the technology's growth. State and industry leaders are also advancing competing regulatory visions. Florida Governor Ron DeSantis recently proposed an AI bill of rights to protect residents.
[01:09:14] The framework would prohibit government agencies from using Chinese-created AI tools. It would criminalize deepfakes depicting minors and ensure AI is not the sole determinant in insurance claim denials. Then finally, venture capital firm Andreessen Horowitz released a roadmap for federal AI legislation, arguing that Congress must establish national standards to prevent a patchwork of state laws.
[01:09:39] So Paul, one big thing stands out here: that data center topic. It really seems like both sides of the aisle have drawn battle lines around that.
[01:09:48] Paul Roetzer: Yeah. Again, politics is, like, one of my least favorite things to talk about. So, I mean, one thing to know in politics is you have to propose extremes, and then you try and, like, meet at a middle ground.
[01:10:01] So usually when a proposal is made, from whomever it is, you're trying to get people thinking in an extreme, and then you eventually get them to where you actually want them. Obviously we are not gonna stop building data centers. That's a ridiculous concept. But the point is to try and raise awareness about what's going wrong, and then
[01:10:20] you can make some progress on, you know, getting your constituents to start seeing the side that you want them to see. And so whatever, that is not gonna happen, but it's a fine idea. The stuff DeSantis is doing, I don't know. There's elements of it that seem viable. Yeah. Seem worth discussion.
[01:10:40] The a16z roadmap for federal AI legislation, all I will say there, to reiterate what I said earlier, is you have to understand the context with which things are created, and the points of view and the financial interests of the [01:11:00] parties who are creating something. And if you wanna read the roadmap for federal AI legislation as an objective piece, I'll say I would first go reread the Techno-Optimist Manifesto from a16z and Marc Andreessen.
[01:11:21] And so at least have the context of that manifesto in your mind when you read the federal AI legislation recommendations. That's all I'm gonna say on this topic, so we can end the year in a positive mind frame.
[01:11:38] Mike Kaput: And perhaps, also to stay positive, maybe refresh your memory on the recent executive order on AI that we talked about a couple episodes ago.
[01:11:46] Yes,
[01:11:48] Paul Roetzer: Yes. And again, I am not taking a point of view here that the idea of the federal approach is bad. Right? It actually makes a ton of sense in my mind. [01:12:00] All I'm saying is I wanna read opinions that are as close to objective and neutral as possible on why the different approaches are good and bad. And sometimes it's really hard to find stuff that doesn't have financial interests and political interests tied to its points of view.
[01:12:20] And so the best we can often do is read the extreme stuff on both sides and try and say, what would the middle ground be between these opinions? Because the stuff that's neutral and just objective often doesn't get any attention, and thereby it's hard to find the reasonable, logic-based stuff; you just find the stuff that has very specific points of view for very specific personal reasons.
[01:12:51] AI World Models
[01:12:51] Mike Kaput: Alright, next up, investor and industry attention is surging around world models, which are AI systems designed to simulate physical reality rather than [01:13:00] just process text or images and the like. Now, according to a report from AI industry watcher Alex Heath, the startup General Intuition is in late-stage talks to raise several hundred million dollars at a valuation north of $2 billion, and they're focused on building a general-purpose AI agent and world model.
[01:13:15] Now, that coincides with a new strategic roadmap that Runway published, which basically unveiled its vision for a universal world simulator. Runway, the video generation company, argues that video models trained at scale essentially become physics simulators.
[01:13:37] They learn how objects move and forces propagate. The company predicts it will achieve interactive simulations indistinguishable from the real world within five years. And they claim this approach will eventually replace physical infrastructure for scientific discovery, allowing researchers to test things in fields like robotics and climate science without [01:14:00] needing traditional laboratories.
[01:14:02] Now, Paul, we've talked about this a couple times as a thread throughout this episode. Seems like world models are increasingly being talked about, or perceived, as maybe the next big piece of the future of AI.
[01:14:13] Paul Roetzer: Yeah, it's also fundamental to Yann LeCun, who, you know, recently left Meta. It's pretty fundamental to what they're all doing.
[01:14:18] But yeah, so, again, I'll just try and explain this at, like, the most simplistic level, which we've touched on a couple times. The basic premise is, for years they trained these models giving them all of human knowledge in a text-based form. And somehow those language models actually seemed to start to develop an understanding of the physics of the world just through reading about it.
[01:14:41] When you start to give computers vision, or train them on videos, the assumption is you could make more progress toward their understanding of the physical world and their ability to eventually interact with it. When you listen to people like Demis talk about this, the thing that's [01:15:00] somewhat shocking is they seem to actually gain this emergent capability faster than they thought would happen, by just showing them physics, basically.
[01:15:10] And so to get to these universal agents, and eventually have those embodied in robotics that can walk around, you know, facilities, or work in nursing care homes, or be available for purchase to have in your home, they have to understand the world like you and I do. They have to know, if I knock this cup over, the water's gonna pour out onto the floor, and what it's gonna look like and how it's gonna work, how the, you know, fluid dynamics works. Like, from a very early age as kids, I mean, as toddlers,
[01:15:39] yeah, you just learn: I'm gonna touch something and it's hot and I'm gonna get burned, or if I step here, I'm gonna slip, or if I step on ice, I'm gonna fall. That takes, like, experience, and developing this intuitive understanding of physics. And that's what they think needs to happen:
[01:15:57] for the AI models to truly reach human-level [01:16:00] intelligence and beyond, they actually have to be able to experience the world the same way you and I do, and perceive that, and actually, like, predict what will happen in advance. Yeah. And that requires that real-world experience. So that's the bet of world models.
[01:16:15] It's the bet of Fei-Fei Li, and Yann LeCun, and Demis, and, you know, Sam Altman and OpenAI. Like, everyone knows that this matters a lot, and there's gonna be a lot of effort to make breakthroughs in this area. A lot of money's gonna get poured into world models.
[01:16:32] Mike Kaput: Yeah. And, you know, who knows with the predictions and timelines, but that's pretty interesting.
[01:16:36] Runway is saying we're gonna basically have this within five years.
[01:16:41] Paul Roetzer: Yeah. Again, the timelines are hard. And what does it mean to have it? What are the outcomes of having it? Does it mean we can, right, play real-time video games that dynamically generate the world as we're turning to the right and to the left,
[01:16:55] in, like, that rendering speed and things like that? Yeah. Or does it mean, like, 10 seconds and 15 seconds at a time? That's what we just don't know. But I don't doubt that we are gonna get very far along, and by the end of the decade, I think it's very realistic that humanoid robots are a very real thing throughout society and they're interacting with the world the same way a human would.
[01:17:17] And so I don't doubt those timelines.
[01:17:20] Mike Kaput: I look forward to that future.
[01:17:22] Paul Roetzer: I think I do too. It could also be a terrible future, I guess. Yeah, there's ways this goes wrong. My vacuum robot, I don't know... versus, like, a, yeah, humanoid robot walking around my house.
[01:17:31] US Government Launches Tech Force
[01:17:31] Mike Kaput: Alright, our last topic before we wrap up here. The White House is launching a new initiative to embed private sector technical talent into the federal government.
[01:17:39] This is dubbed the US Tech Force, and this program aims to recruit approximately a thousand engineers and specialists to work on AI infrastructure, application development, and data modernization across federal agencies. Now, interestingly, this initiative centers on a collaboration with leading tech firms, including Amazon Web [01:18:00] Services, Microsoft, Nvidia, and OpenAI.
[01:18:03] Under the agreement, private sector partners can nominate employees for government service and have committed to considering program alumni for full-time roles once their federal terms conclude. Now, this of course comes after the executive order signed earlier this week establishing national AI policy frameworks.
[01:18:21] So Paul, seems interesting. Probably something that's very much needed within government. Kind of interesting, too, that the AI labs are embedding even deeper into the federal government.
[01:18:31] Paul Roetzer: Yeah. You know who's not on that list? Anthropic. Yep. Jumped out at me right away. Very conspicuous absence.
[01:18:37] Paul Roetzer: Yeah. Again, at a high level, conceptually, cool idea.
[01:18:42] The government needs the best scientists. I don't know how many of them are leaving the private sector for 150,000 a year. That's, like, not even their signing bonuses most of the time. So I don't know how that works, unless the labs are offsetting the salaries. But, like, I don't know an AI researcher [01:19:00] that's leaving Anthropic or OpenAI or, right,
[01:19:03] xAI to go make a couple hundred grand working for a government they maybe don't agree with. Yeah, I don't know. Again, cool concept. I will be interested to see how well this actually works, but I have not dug into all the details. I'm sure they've thought through a lot of the challenges
[01:19:23] I can see straight off, just looking at this at the surface level.
[01:19:27] Mike Kaput: Yeah. All right, Paul, well, thanks again for breaking everything down for us this week. Just as a quick reminder to folks, if you could go ahead and leave us a review and follow us on your podcast platform of choice, that would be very, very helpful for us to improve the show and reach more folks.
[01:19:43] And go ahead and remember to take that AI pulse survey this week at SmarterX.AI slash pulse. Paul, thanks again.
[01:19:51] Paul Roetzer: And again, thanks for a great year, everyone. We will be back January 6th with the first edition of 2026. In the meantime, go check out some [01:20:00] of the past episodes. The one I would point you to, if you enjoyed the trends to watch, is episode 141.
[01:20:05] That's when I did the Road to AGI, and honestly, not much has changed. If you go listen to that Road to AGI episode, there really isn't anything on that list I would change. And I got asked that question, like, oh, what predictions have you had wrong? I was like, they're not predictions, but trend-wise, I'm still feeling pretty good about the timeline I laid out for the next few years.
[01:20:26] So I would say go check out that episode. We'll drop the link in the show notes. All right, Mike. Thanks. Great job this year. Looking forward to another wild year of AI next year. And again, everyone, happy holidays, happy New Year, and we'll talk to you in the new year. Thanks for listening to The Artificial Intelligence Show.
[01:20:44] Visit SmarterX.AI to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI [01:21:00] courses, earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community.
[01:21:05] Until next time, stay curious and explore ai.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.
