Microsoft's AI CEO says your job gets automated in 18 months. But most companies can't even get past step one of AI adoption. Paul Roetzer and Mike Kaput break down what's really happening, and why the gap between AI haves and have-nots is becoming even wider.
Also in this episode: Stanford economist Erik Brynjolfsson declares the AI productivity payoff has arrived in national economic data, Anthropic CEO Dario Amodei warns the world isn't taking the AI exponential seriously enough, ByteDance's copyright firestorm, Claude Sonnet 4.6, the OpenClaw acquisition and more in our rapid fire section.
Listen or watch below, and scroll down for show notes and the transcript.
This Week's AI Pulse
Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI.
If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.
Click here to take this week's AI Pulse.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
- AI Academy
- AI for Departments Webinar Series
- 2026 State of AI for Business Survey
- Ep. 200 Live in AI Academy
00:05:38 — AI Pulse Survey Results
00:08:48 — Microsoft AI CEO Predicts White Collar Work Automated in 12-18 Months
- Mustafa Suleyman plots AI ‘self-sufficiency’ as Microsoft loosens OpenAI ties - The Financial Times
- X Post from Financial Times: "Mustafa Suleyman... fully automated by AI within the next 12 to 18 months"
- Microsoft AI chief gives it 18 months — for all white-collar work to be automated by AI - Fortune
- The End of the Office - Andrew Yang Blog
- Andrew Yang says AI will wipe out millions of white-collar jobs in the next 12 to 18 months - Business Insider
- The Worst-Case Future for White-Collar Workers - The Atlantic
- Fed governor Barr: 3 ways AI could shake up the labor market - Axios
- AI Threatens Staffing Industry as Companies Bring Recruitment In-House - Bloomberg
00:20:42 — AI Productivity Evidence
- LinkedIn Post from Paul Roetzer
- The AI productivity take-off is finally visible - The Financial Times
- X Post from Erik Bryn: "US productivity growth at 2.7% for 2025. Nearly double the average of the previous 10 years."
- X Post from Erik Bryn: Follow-up thread on whether AI is boosting productivity
- AI Coding and Product Development - Axios
- Accenture Combats AI refuseniks by linking promotions to log-ins - The Financial Times
00:33:23 — Dario Amodei on Dwarkesh
- Dario Amodei — "We are near the end of the exponential" - Dwarkesh Podcast
- X Post from Bloomberg: Sam Altman and Dario Amodei refused to hold hands at AI summit in India
00:47:55 — Dor Brothers AI Movie and the Rise of Seedance
- X Post from The Dor Brothers: "We just made a $200,000,000 AI movie in just one day. 100% AI."
- After AI Video of 'Tom Cruise' Fighting 'Brad Pitt' Goes Viral, MPA Denounces 'Massive' Infringement - Variety
- Why an A.I. Video of Tom Cruise Battling Brad Pitt Spooked Hollywood - The New York Times
- ByteDance responds to copyright infringement concerns with Seedance 2.0 - NBC News
- MPA Sends Cease and Desist Letter to ByteDance Over Seedance 2.0 Videos - Hollywood Reporter
00:55:07 — Claude Sonnet 4.6
- Introducing Sonnet 4.6 - Anthropic News
- X Post from Claude AI: "This is Claude Sonnet 4.6: our most capable Sonnet model yet."
- X Post from Alex Albert: "Sonnet 4.6 is here. Approaching Opus-class capabilities in many areas."
- X Post from Artificial Analysis: "Claude Sonnet 4.6 is the new leader in GDPval-AA."
- Anthropic Says New AI Model Is Better at Using Computers - Bloomberg
- Task-Completion Time Horizons of Frontier AI Models - Metr
- X Post from METR
01:00:51 — OpenClaw Creator Goes to OpenAI
- X Post from Sam Altman: "Peter Steinberger is joining OpenAI to drive the next generation of personal agents."
- OpenClaw, OpenAI and the future - Peter Steinberger Blog
- OpenAI's acquisition of OpenClaw signals the beginning of the end of the ChatGPT era - Venture Beat
- X Post from Summer Yue
01:05:00 — OpenAI Devices and AI Devices
- Inside OpenAI Team Developing AI Devices - The Information
- Jony Ive's First OpenAI Device Will Be Smart Speaker With Camera, 2027 Launch Planned - Mac Rumors
- Meta Plans to Add Facial Recognition Technology to Its Smart Glasses - The New York Times
- Apple Ramps Up Work on Glasses, Pendant, and Camera AirPods for AI Era - Bloomberg
01:14:51 — AI in Journalism Controversy
- Journalism schools are teaching fear of the future, Letter from the editor - Cleveland.com
- X Post from Sam Allard: "Editor puts you on blast for wanting to be a journalist instead of an AI content farmer."
- Editor's Note: Retraction of article containing fabricated quotations - Ars Technica
- X Post from Molly Taft: "Writing things down helps you synthesize information MUCH better than feeding it to a machine."
- X Post from Max Spero: "It's easier than ever to let AI automate your job until you become nothing more than an AI editor."
01:25:05 — Meta Patents AI for the Dead
01:26:56 — AI Product and Funding Updates
- Anthropic $30B Funding & Business
- Claude in PowerPoint
- Gemini 3 Deep Think
- Gemini 3 Deep Think: Advancing science, research and engineering - Google Blog
- X Post from Noam Shazeer: "An updated Gemini 3 Deep Think is out today. SOTA on ARC-AGI-2, MMMU-Pro, and HLE."
- X Post from Noam Brown: "Criticisms of Google DeepMind's release are missing the point."
- Gemini 3.1 Pro
- Gemini 3.1 Pro: A smarter model for your most complex tasks - Google Blog
- X Post from Artificial Analysis: "Google is once again the leader in AI: Gemini 3.1 Pro leads the Intelligence Index."
- Google Lyria 3: Music Generation
- X Post from Gemini App: "Introducing Lyria 3, our new music generation model in Gemini."
- X Post from Google AI: "Today we introduced Lyria 3 — turn ideas into musical tracks."
- X Post from Ed Newton-Rex: "Google added Lyria to Gemini. What was it trained on? They haven't said."
- X Post from Josh Woodward: SynthID now covers images, audio, text, and video
- X Post from Gemini App: Prompting tips for Lyria 3
- Manus Agents
- X Post from Manus AI: "Introducing Manus Agents — your personal Manus, now inside your chats."
- Introducing Manus in Your Chat: Your Personal Agent, Everywhere You Are - Manus Blog
- X Post from Aakash Gupta: "Meta paid $2B for Manus. Eight weeks later, launches on Telegram."
- Grok 4.2
- X Post from Elon Musk: "Grok 4.2 release candidate (public beta) is now available."
- X Post from Elon Musk: "Grok 4.2 will be about an order of magnitude smarter and faster than Grok 4."
- Cloudflare Markdown for Agents
- PolyAI $200M Raise
- X Post from Poly AI: "PolyAI has raised $200M from Nvidia, Khosla Ventures."
- X Post from Nikola Mrkšić: "I quit Apple because I knew Siri would never make an actual dent in the world."
- ElevenLabs Agents
- ElevenLabs — AI Agents for Customer Support - ElevenLabs
- ElevenLabs secures first-of-its-kind AI Agent insurance - ElevenLabs
- Perplexity Advertising
This episode is brought to you by AI Academy by SmarterX.
AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. Learn more here.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: The more time I spend with executives at major companies, the more convinced I am that largely as a group, they don't really comprehend what's happening. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer.
[00:00:19] I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX chief content officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.
[00:00:39] Join us as we accelerate AI literacy for all.
[00:00:46] Welcome to episode 198 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, on with my co-host Mike Kaput. We have a very busy week. If you listen regularly, you know last week's was kind of, we did it on like a [00:01:00] three-day sprint because I was traveling for like the last 10 days. So there's like nine or 10 days' worth of news maybe that we have to cover for this one.
[00:01:09] And I'm not exaggerating, there were over 60 topics in the thread. When I say topics, the way I do this is, if I read about, like, Claude Sonnet 4.6 once, and then throughout the week as other things happen, I'll put them into the same topic. So Mike heroically went through, I mean, it had to have been over a hundred links
[00:01:29] Mike Kaput: easily.
[00:01:29] It was easily over a hundred.
[00:01:31] Paul Roetzer: Yeah. Yeah. So to curate this week's episode, I was actually thinking Friday morning, I was like, I don't know how Mike's gonna do this. I'm glad I don't have to do his job this weekend. So yes, there was an immense amount to go through. We're gonna do our best to consolidate it, because a lot of times there are threads that are related if you're kind of paying attention and seeing how this all connects.
[00:01:54] So we're gonna do our best to sort of thread this together for you all. There are some very important [00:02:00] storylines emerging. We had no less than four new models last week, I think. If you count Lyria, the songwriting model from Gemini, we had at least four major releases last week.
[00:02:13] There's some word that we might get a new OpenAI model this week. So it's just, we're in it. Like, we are entering the busy season of AI, which I feel like is just every week now. All right. And I'm still trying to re-acclimate; I got back at like midnight Thursday night after being away for whatever that was.
[00:02:32] And I'm just still trying to regroup, and I gotta run a workshop today. So I'm like, do this and then run a four-hour executive AI workshop. Okay. So today's episode is brought to us by AI Academy by SmarterX. If you're a regular listener, you hear us talk about Academy a lot. AI Academy helps individuals and businesses accelerate their AI literacy and transformation through personalized learning journeys
[00:02:56] and an AI-powered learning platform. We currently have [00:03:00] 12 professional certificate course series available. So we had our two base ones, Piloting AI and Scaling AI, and since August of last year we have launched 10 new ones, and every month we're coming out with a new certificate course series.
[00:03:14] These are available on demand right now for individual purchase or as part of an AI Mastery membership. We just released our newest course series, AI for Financial Services, taught by our director of research, Taylor Rady. It covers real-world applications of AI across banking, insurance, wealth management, and more.
[00:03:33] So you can start applying AI strategically in your organization today. If that is an industry you are in, you can learn more about AI Academy and our AI Mastery membership program at academy.smarterx.ai. And then a special note: as part of AI Mastery membership, there are a bunch of benefits beyond just the courses and certificates themselves.
[00:03:53] One of the value adds we're gonna do is episode 200, which is actually happening next week, believe it or not, Mike. [00:04:00] We are inviting Mastery members only, so it's an exclusive thing for our Mastery members to attend the live recording. We have been doing this podcast for four-plus years now, Mike, every week, I think it is.
[00:04:12] Yeah. Does that sound right? And this is the first time we're going to have a live audience for a recording. So that recording is gonna happen on Monday, March 2nd. The same way Mike and I always do this, we show up and we record the session on Monday mornings. Mastery members are gonna get a link to log in and join us.
[00:04:32] So you'll basically be here for what we do. We're just gonna kind of have everybody hanging out, and Mike and I will do our thing. And then once the podcast recording ends, Mike and I are gonna stick around for 30, 45 minutes and just have a conversation about whatever we talked about and answer questions from people.
[00:04:50] So we thought a cool way to celebrate the 200th episode would be to involve our community and our Mastery members in that. So I'm looking forward to it. I'm sure it's gonna be kind of crazy, [00:05:00] but what the hell, you know, make it fun. So again, links will be there for that in the show notes.
[00:05:06] anything I'm missing there, Mike? Is that it?
[00:05:07] Mike Kaput: No, no. That should, it should be fun to show people behind the scenes.
[00:05:11] Paul Roetzer: Yeah, that will be great.
[00:05:12] And you'll see, like, I'm gonna jinx us, but this is one take, right? We do not edit. When we say we're gonna go, you know, it'll usually end up being an hour and 20 minutes.
[00:05:21] That's it. Like, we get on and we go, we talk for an hour and 20 minutes, we hit stop, and then Claire does her magic on the back end and has it ready to go the next morning. So we're gonna record it on March 2nd, and it'll drop on Tuesday, March 3rd, just like our regular weekly episode. Okay. AI Pulse. I haven't looked at this yet, Mike.
[00:05:38] Okay.
[00:05:38] AI Pulse Survey Results
[00:05:50] Paul Roetzer: So every week we do an AI Pulse survey. You can go to smarterx.ai/pulse and participate in these. These are informal polls of our listening audience. Last week we asked two questions; it's always two questions. I'll give you the recap of last week's, and then at the end of today's podcast we'll give you the two questions for next week.
[00:05:58] So we had 105 people [00:06:00] participate in this poll. Based on your own experience, how would you describe the current pace of AI improvement? 70% said it's accelerating faster than I can keep up with. That's not surprising. I hear that. I feel that, don't you?
[00:06:15] Mike Kaput: For sure. Yeah.
[00:06:16] Paul Roetzer: Especially on weeks. Like last week,
[00:06:20] Mike Kaput: I wanna know who the people are who gave the next answer. I need to talk to these people.
[00:06:20] Yeah. Because I don't know how people are keeping up.
[00:06:23] Paul Roetzer: So 28% said it's moving fast, but I'm keeping up. Yeah, more power to you. It's like, I mean, if you're keeping up by listening to us, I'm very happy to hear that, but I'm gonna tell you right now, Mike and I don't feel like we're keeping up. And if we're the gatekeepers, it's probably moving faster than you think it is,
[00:06:41] Mike Kaput: right?
[00:06:41] Paul Roetzer: So, yeah, those are the dominant two answers: it's accelerating faster than I can keep up, 70%, and then 28%, it's moving fast but I'm keeping up. All right. Then the next one was, has using AI tools changed the total amount of work you do? This has to do with productivity, which is a continuing topic today.
[00:06:58] 58% [00:07:00] said, yes, I'm getting more done in less time. 35% said yes, but I'm doing more work overall, not less. I'm hearing a lot of that. Like, I'm definitely producing more, but I'm still working the same hours or more hours. I struggle with this one; I kind of fit in that bucket.
[00:07:17] Like I definitely achieve way more in my daily, weekly routine than I did 12 months ago, the level of output I'm able to produce. I think a lot of people assume I don't sleep and that I just nonstop work, and I don't think that's the case. But I don't know. Someone was asking me this when I was traveling last week, at an event I was at, and I said, you know, I don't know, like I get seven hours of sleep every night.
[00:07:43] I take my kids to school. I usually stop working by like 4:00 PM to hang out with my kids, and I don't generally work evenings. That being said, I think I'm always thinking about it. I'm always working, I'm always grabbing content, I'm always reading things, so I don't know what I consider work. I guess I'm [00:08:00] almost separating it out; it's just become infused into my life, where I'm constantly absorbing what's happening, thinking about what's happening, making notes for the podcast.
[00:08:07] So I think I'm like subconsciously working, yeah, all the time. But I don't know that I could say the number of hours. So anyway, that's an interesting one. I could see that being balanced moving forward, where people feel like, yeah, I'm doing more, but it makes me work more hours, so.
[00:08:25] Mike Kaput: Right.
[00:08:25] Paul Roetzer: We'll see.
[00:08:26] Mike Kaput: Right.
[00:08:27] Paul Roetzer: Okay. So speaking of doing more or not, I don't know, I guess we'll see. The first couple topics, actually probably the first three main topics, all have to do with this concept. So the first one, Mike, is the CEO of Microsoft AI making a comment, he seemed to make it almost in passing, but it definitely captured a lot of attention.
[00:08:46] So let's get started there.
[00:08:48] Microsoft AI CEO Predicts White Collar Work Automated in 12-18 Months
[00:08:48] Mike Kaput: Alright, Paul, so like you mentioned, the CEO of AI at Microsoft, Mustafa Suleyman, just put a specific expiration date on most white collar work. Suleyman did a wide-ranging [00:09:00] interview with the Financial Times this past week, and like you alluded to, he made, almost as a throwaway comment, this claim that AI would reach human-level performance on most professional tasks
[00:09:13] within 12 to 18 months, and that that would result in all sorts of knowledge work jobs, including accounting, legal, marketing, project management, everything involving sitting down at a computer, being automated by AI in that time. He basically describes this future where AI handles most knowledge work and humans shift into supervisory and creative roles, driven by exponential growth in computing power.
[00:09:39] So Paul, this is what grabbed the headline; he said a bunch of other stuff that was interesting. It seems like a really weird PR move from Microsoft to say that all your customers are going to be automated in 12 to 18 months. I don't know. What did you think? That timeline seems crazy fast.
[00:09:57] I guess if you said to me, is AI capable of [00:10:00] doing this technologically in 12 to 18 months? That's one thing. If you say it's actually going to diffuse in organizations in 12 to 18 months? I don't know, color me skeptical, because we've had companies take 18 months to get ChatGPT approved, so I dunno.
[00:10:14] Or for five people, right? Yeah. What did you think here?
[00:10:17] Paul Roetzer: Yeah, so I think you nailed it, Mike. You hear this over and over again from the AI labs themselves, these very short timelines of massive disruption to the economy and to knowledge work. And I think the key, as you alluded to, is, will the technology be capable of doing that?
[00:10:40] I might not argue with that; in 18 to 24 months, basically anything a human does that's digital-work related, in theory, if the AI was properly trained, it could do a lot of it. So I'm not gonna dispute that. But I think the reality of the adoption and the diffusion of that technology is [00:11:00] key.
[00:11:00] So I'll throw a couple of thoughts out there, Mike. There were a lot of articles. As I alluded to at the start, the way we often form what we're gonna talk about in these episodes is around a main topic. So we, you know, threw the article from Mustafa in there, and this 12 to 18 months is like the sexy headline that everybody ran with.
[00:11:18] But underlying that, there was a bunch of other stuff happening and related articles. And often what we try and do with this podcast is surface the trend data for you and connect the dots. So I'll walk through a couple of other related things as well, Mike, and hit a little bit more on some of the other things he said.
[00:11:34] So as a recap for people who don't remember who Suleyman is: Google DeepMind co-founder; when Google acquired DeepMind, he stayed there for a while. He was VP of AI products and AI policy at Google. Some of you may know him as the author of The Coming Wave; I know a lot of people in our audience read that book when it first came out.
[00:11:55] He was also the co-founder of Inflection AI with Reid Hoffman, [00:12:00] and then Microsoft acqui-hired Inflection AI, if I remember correctly. Yeah, that was like March of '24. So Microsoft hired most of Inflection AI's staff, including Mustafa, and then he took over as the CEO of Microsoft AI. So that's kind of how he got to this position.
[00:12:14] So when he talks, people listen. I mean, he is in an authoritative role, certainly. I've always felt like there's just a very weird friction at Microsoft, and again, this is nothing from personal experience, but from observing how Mustafa talks and what his talking points are, versus how Satya talks and what Microsoft actually does for a living and how they make money.
[00:12:37] It just doesn't seem like they sit in the room very often and get on the same page with their talking points, I would say. So his quote was: white collar work, where you're sitting down at a computer, either being a lawyer or an accountant, or a project manager or a marketing person, most of those tasks will be fully automated by an AI within the next 12 to 18 months.
[00:12:57] In some ways it's [00:13:00] just an absurd statement. Yeah. Right, right. So I would just say straight up, that is not true. If you are an attorney, a project manager, a marketing person, would the AI, if properly trained, and if you and your team were fully trained on how to use that AI, and agents were reliable and autonomous, would it automate things to where it starts taking everyone's jobs in 12 to 18 months?
[00:13:20] Sure. But that's a whole lot of what-ifs. So the tech is gonna advance really fast; we're not gonna dispute that. He said these AI agents will be able to coordinate better within the workflows of large institutions in the next two to three years. The AI tools will also be able to learn and improve over time.
[00:13:37] That's the continuous learning thing we'll talk more about: taking more autonomous creations or actions. Creating a new model is going to be like creating a podcast or writing a blog. It is going to be possible to design an AI that suits your requirements for every institution, organization, and person.
[00:13:52] So that is basically like saying, I'm a marketing person; on demand, I'm gonna be able to spin up [00:14:00] basically my own custom model to do a thing. As we'll talk about in one of the next topics, people have had access to GPTs for like three years, and the amount of professionals, knowledge workers, who have built a GPT is probably less than like 0.5%.
[00:14:14] Right. So like, again, just 'cause the tech is there doesn't mean anything. They also touched on, I thought, expanding on this idea that Microsoft is going to build their own models. And, you know, they obviously have been very reliant on OpenAI. They made a massive $13 billion bet on OpenAI. They own what, 27%, I think?
[00:14:31] Yeah. Of the for-profit. But they're now very directly saying, hey, we're building our own foundation models; we're not gonna rely on them. So then a couple other things, again, connecting trends. Andrew Yang, who ran as a Democratic candidate, I believe, Mike, in 2020. Yeah. And he ran on universal basic income, like give everybody a thousand dollars a month.
[00:14:51] So he was foreseeing automation and its impact. He has a book to sell, and he may or may not be running again. I have no idea; I don't follow his [00:15:00] political career, but he definitely came out with a book. So he's making some headlines again. And the only reason I'm highlighting what Andrew Yang is saying, and we'll link to the post, is because we're seeing other politicians saying the same stuff.
[00:15:13] And I think this is a sign of what we're starting to see, which is way more public commentary around the impact AI is gonna have. So he published a post, and I'm highlighting it not because these predictions are necessarily true, but because they're exactly the sort of commentary I expect to gain ground.
[00:15:32] So I'm expecting what he's saying to gain ground. What he said is: this automation wave will kick millions of white collar workers to the curb in the next 12 to 18 months. Same timeline, I don't know if you heard the Mustafa interview. As one company starts to streamline, all of their competitors will follow suit.
[00:15:47] It will become a competition; the stock market will reward you for cutting headcount and punish you if you don't. And then, I hope we don't get, you know, bleeped or hit for saying [00:16:00] this. Claire can choose whether to edit this or not; I don't think we will. He said: I've started to call this displacement wave "the fuckening", because that feels more visceral.
[00:16:08] I actually thought that was a pretty funny term. So he said, I want to forecast a few of the immediate social impacts of "the fuckening." First, mid-career office workers will be fired in droves. Second, personal bankruptcies will surge. Third, college grads won't be able to find jobs. Fourth, downtowns and office parks will empty out. And fifth, pessimism and anger will rise up.
[00:16:28] So I'm not gonna go into great detail on this, but if you're curious about it, go read it. We are seeing a lot more chatter in these directions. And while some of what he's saying is likely extreme and not likely to occur in that 12-to-18-month timeframe, I think that if you take each of those statements on a spectrum, it's probably further along that spectrum than most people would like to admit.
[00:16:54] Right. Or acknowledge. Then The Atlantic had an article called The Worst-Case Future for White Collar [00:17:00] Workers, talking about the impact on people with college degrees and unemployment rates. And then we actually had Fed Governor Barr do a talk, and in that talk he touched on three ways AI could shake up the labor market.
[00:17:12] So Axios covered this, and I actually went and read the full transcription. In essence, the Fed is now finally acknowledging what's going on. Again, if you've been an early listener to this podcast over the last couple years, you heard me basically begging economists and the Federal Reserve to do something, to acknowledge that this was coming, and they finally are.
[00:17:31] But it's weird because they're largely relying on the studies that we talk about on this podcast, which mostly have flaws, right? But in essence, what Barr said was there are three basic scenarios. There's a gradual scenario, which is the least economically painful: the AI uptake is widespread, but gradual enough that the joblessness doesn't really make a major difference.
[00:17:54] Then there's the rapid takeoff, where the AI capabilities swarm the economy more quickly than the labor market can [00:18:00] adjust, leaving a large share of the population essentially unemployable. I hate to bias this, but I'm kind of in that camp right now; based on everything I've seen the last 90 days, I'm moving more and more towards the rapid takeoff scenario.
[00:18:13] And then, in the final scenario, shortages of electricity supply, financial capital, et cetera, actually slow things down. So again, there are a few other articles that help frame the context: at minimum, we're just starting to hear more conversation, which I think is good. And what I would encourage people to do is this: you're gonna see lots of headlines.
[00:18:35] You're gonna see lots of very bold claims, made with high levels of confidence, about what the next 12 to 18 months are gonna look like. I would just take all of it in to form your own point of view and perspective, and realize none of them are probably fully true on their own. You have to absorb the information and be critical in your own thinking about where we really are.[00:19:00]
[00:19:00] And the thing that really matters, which we're gonna probably get into in the next topic, is where are you actually at? Where is your company actually at? Where is your industry actually at? Because I've yet to see a study that truly captures how everyone is actually feeling. Right. And we'll kind of touch on this coming up, but I think that's the most important thing: observe what's going on, and then think about the reality of what you're actually living through and what your friends and coworkers are seeing.
[00:19:28] It's probably not that aligned, especially if you're looking to economists and the Federal Reserve to guide you. They don't seem to be the right people at this moment to tell you what's coming. They're pretty good at telling you what's happened in the past, but they generally have dropped the ball dramatically over the last 10 years on what was coming.
[00:19:47] And I feel like they're probably largely still doing it, as are most politicians.
[00:19:52] Mike Kaput: Yeah. I actually really liked the Fed Governor's framing of this, just because I found it helpful as a thought experiment. But my gosh, [00:20:00] I get really nervous when policymakers are saying there's a scenario where things gradually happen in this space.
[00:20:07] I just don't see it, and I have a hard time believing it. Okay. You're right, right. Ugh.
[00:20:12] Paul Roetzer: But again, well, I don't wanna preempt what we're gonna talk about. I think that is probably where most of society is at the moment, and I'm really starting to wonder how the AI-taking-jobs messaging and the impact-on-the-economy messaging is gonna play out in the midterms.
[00:20:31] Because the more time I spend with executives at major companies, the more convinced I am that largely as a group, they don't really comprehend what's happening.
[00:20:42] AI Productivity Evidence
[00:20:42] Mike Kaput: Hmm. So let's talk about this next topic, because it's intimately related to this, because for the first time it seems like the productivity payoff from AI might be showing up in national economic data.
[00:20:58] So, Stanford economist [00:21:00] Erik Brynjolfsson published an essay in the Financial Times as well this week, declaring that this kind of surge in productivity we've started to predict over time has actually arrived. He talked about how US productivity growth hit 2.7% in 2025, which was nearly double the average over the past decade.
[00:21:20] And at the same time, payroll growth was revised downward by 403,000 jobs, while real GDP held strong at 3.7% in the fourth quarter. Basically, this means the economy is producing more with fewer workers. So Brynjolfsson points to this as a sign that AI is moving from a phase of heavy investment in building out the technology to what he calls its harvest phase.
[00:21:45] He cites his research on how transformative technologies can require years of invisible reorganization before measurable gains start to appear in the economic data. And he also says it's still very, very early, because despite [00:22:00] AI productivity apparently starting to show up in the economic data, he also just comes out and says most companies are still using AI as what he calls a glorified dictionary.

[00:22:10] And there's still a small cohort of power users who are really the only ones at this stage compressing weeks of work into hours, which we have seen firsthand. But Paul, I'm curious. I mean, Brynjolfsson seems like a pretty credible actor here. He's been doing work on this for years, and this is something we've talked about:

[00:22:29] when is AI going to show up in the economic data? Why is it taking so long?
[00:22:37] Paul Roetzer: So my general take is: because most companies are still at the starting point. I've been thinking about this a lot. I've done a series of talks to start the year, you know, outside of our daily interaction with executives and government leaders and people like that.

[00:22:56] I've actually spent real time [00:23:00] with heads of major software companies, a group of them, health system executives, banking executives, where you start to really get a pulse of what reality is in the market. And so I think that the biggest problem, and I ended up posting about this on Friday on LinkedIn,

[00:23:20] I'll go through that post in a second, 'cause I think this kind of captures where I'm currently at with all of this. I think the problem is everyone keeps waiting for the data to tell us this. Like, the government officials are waiting to see it in GDP or in productivity or these leading indicators.
[00:23:35] Like they're just, they're waiting for proof that it's gonna happen.
[00:23:39]
[00:23:39] Paul Roetzer: And so what I've come to believe is we, we just can't see what's coming by looking at what's happened so far. So no matter how many studies we go find, no matter what the Fed does, what these economists research, how many times we look at past general purpose technologies, like none of it is gonna show us the reality of what is going to [00:24:00] be happening in the next one to three years.
[00:24:02] So, like, I got back from my trip last week, where I did spend a lot of time with, you know, health system executives in particular. And again, building on these other talks I've been giving and people I've spent time with, I just started thinking about, well, what is actually true about the future?

[00:24:23] And so, you know, particularly with these health systems, we were talking about what 2028 needs to look like in an organization and how we get there. And so for me, I started doing this real time in these think tanks, right? Like, what is true about the near future? And if we assume these truths, then what is really gonna happen to business?
[00:24:42]
[00:24:42] Paul Roetzer: And so on my flight back Thursday night, I was just jotting stuff down and trying to expand on this. So I'll just read this. If you follow me on LinkedIn, you saw it, or if you read my newsletter, you've seen it. But I'll add some context as we go through. And then, Mike, I'd be really interested to get your thoughts.
[00:24:58] So what I said was: let's conduct a thought [00:25:00] experiment for a moment about the future of work. Assume that these statements are true in your business in the next one to three years. One, everyone has access to a gen AI platform, ChatGPT, Claude, Copilot, Gemini, that can think, reason, understand, create.

[00:25:12] Two, everyone understands what AI is and what it's capable of doing. Not just the basic answer engine and chat use cases, but the deeper reasoning and multimodal capabilities that open up a world of business applications and innovations. Three, everyone has received personalized training to help them prioritize use cases and maximize the impact of AI on their jobs:

[00:25:32] efficiency, productivity, creativity, innovation. Four, everyone's personal AI assistants have on-demand access to a comprehensive company knowledge base and clean real-time data. They're able to turn data into intelligence, intelligence into insights, and insights into actions with simple prompts or automated agents that proactively produce the outputs.

[00:25:51] Five, everyone has on-demand access to AI subject matter experts across every topic inside and outside the organization. Six, everyone has on-demand access to [00:26:00] high-level strategic support and consultation from their AI for any business problem or goal. And seven, everyone has access to AI agents that can complete digital tasks with a high level of reliability and autonomy.
[00:26:12] If these are true, what changes across talent, technology, processes, products and services, and business models? The answer is everything. So, Mike, I'll just pause there for a second. I was framing this with what we've been talking about recently, these parallel universes. Like, most of that is all true already.

[00:26:30] Outside of the connection to the knowledge base stuff and the reliability of the AI agents, we have all this. And the people who know how to use gen AI tools, who know they have reasoning capabilities and pay for those reasoning capabilities and know they can do images and video and all these things,

[00:26:47] you live in a world where everything I just said is like, well, yeah, of course, that's all obvious. What I'm telling you from personal experience with leaders of major organizations is that is not a [00:27:00] given. What I just said is kind of mind-blowing to most of those people. So I'll continue. I said, and yet most companies I talk to, especially large enterprises, are still stuck on number one, access to gen AI.

[00:27:13] The ones who have actually provided gen AI access to their teams have rarely moved to two and three. It is shocking how many enterprises are stuck on the most basic and accessible parts of AI adoption and transformation: the tools and the training. The only items above that are hard today are four, which is the data access to your knowledge base,

[00:27:33] and seven, which is reliable agents. And that'll change in the next one to two years as they become more reliable, right? So data really is the only one that has to be overcome, and that just takes the right approach and technical support. But there are thousands of use cases that don't even need the data to be in place.

[00:27:48] So really, you could just throw the data one out and you can still transform. So what I said is, if your organization just does number one through three, which is access to gen AI, everybody has [00:28:00] a license, everyone understands what it's capable of doing, like the full capability of that license, and everyone gets personalized training,

[00:28:07] if you just do those things, which are all doable with zero involvement from IT, like you don't even need technical support for this, you can completely transform a business. And yet time and time and time again, I talk to major enterprises who do not even provide gen AI licenses to their teams. And if they do, almost no one provides personalized training for the use cases.

[00:28:33] And I know you see it too, Mike. So when you look at these productivity studies, it just doesn't matter. None of it applies to what is going on in your business. Because if you just do one through three, you can throw all the research out the door and it'll change your company: like 10, 20, 50% productivity gains.
[00:28:53] If you, I've said it before, like if you can't realize those gains, you're doing something wrong.
[00:28:58] Mike Kaput: A hundred percent. I mean, this [00:29:00] resonates so much, because I wanna shake people and show them our lived experience of this and just be like, no, I'm not making this up. I'm living this every day.

[00:29:10] And I'm also seeing the flip side of it every day. And I just think, you know, we've talked about it a hundred times: we are in such a parallel universe over here. I just often forget and underrate how poorly people are using these tools. It's not a knock on their capabilities and competence. It just doesn't even occur to me,

[00:29:30] the failure modes that people are finding using these tools. It wouldn't even occur to me that people are using this only as a glorified chatbot still, three years later, or a glorified search engine rather, or dictionary, as Brynjolfsson said. And yet they are. And the data piece is so well put too, because I saw, understandably so, a fair amount of people in the comments on your post talking about, well, you know, data is a real challenge and a real barrier, and if you don't solve for that, the rest doesn't matter.

[00:29:59] That's not true. [00:30:00] It's important, it's critical for organization-wide transformation. But even if you told me three years from now your data was still a mess, there's plenty of data in the heads of every single one of your knowledge workers, if they can just talk to AI for a bit, which I know they can, and I know they know how to do their jobs.

[00:30:17] To your point, do one to three. Take only the data that's your domain expertise. Use that as your knowledge base until you get your house in order. You'd still crush it over the next couple years.
[00:30:28] Paul Roetzer: Again, I think listeners have to keep in mind, you know, we've been trying to share a lot more of what we're doing at SmarterX.
[00:30:33] We have no IT department. We have zero engineers, zero data scientists. We are purely strategy, creative, liberal arts backgrounds. That is our staff. We are not doing this with engineers who are doing all this grunt work behind the scenes to figure out how to apply all this stuff.

[00:30:55] It's just us. Like, I came outta journalism school, built [00:31:00] companies, storyteller by trade. Mike's the same way. Mike has a communications and journalism background. Most of our leaders, like, we have a number of people who've gone through MBAs, they have their MBAs, but this is not technical stuff.

[00:31:12] And the companies we see, 'cause we get access to these, we have hundreds of, you know, business accounts that we talk to every day, the people that are driving the change are often sitting in the marketing teams, the customer success teams, the HR teams. It's not even the IT people who are driving the business cases. They're just worried about the risk and the security of it all.

[00:31:31] It's so weird. Sometimes I think, this can't be reality. And then I go and spend two weeks with the leaders of these companies and you realize, oh my gosh, it may actually be further behind than I thought it was. And I was already kind of bearish on where the adoption was.
[00:31:49] Mike Kaput: And I would also say there is a flip side here where there is some hope too, where we've done some really great work with very conservative enterprise-level organizations that frankly are [00:32:00] still struggling with the data piece, the permissions piece, the IT piece. And yet we've worked around it by finding use cases that don't touch those areas or that are easier to get off the ground.

[00:32:11] And I can tell you, I have dozens of examples of very simple stuff that is not gonna, you know, make you go viral on Twitter, 'cause people will be like, hey, that's super obvious, and ignore me. But no, really valuable. Dozens of use cases are super simple that nobody is taking advantage of, that can accelerate and even transform how you work.
[00:32:31] Paul Roetzer: Yeah. The final note I'll make, based on what you're saying, Mike: if you're in an organization and you're not at the very top, like, you don't get to put this as a top-three priority for the organization starting tomorrow, and you just gotta do your piece within your department or your team.
[00:32:46] That's where we're seeing the vast majority of the success anyway. Within these groups that we talk to, as this homogenous group that is largely behind, there are always these leaders who are just racing ahead, who all [00:33:00] feel behind themselves. And yet when we look at them, we're like, you have no idea how far along you are, yeah,

[00:33:05] compared to your peers. So you can be that. It can be just a small team or a single department, you know, the leader of a marketing team, for example. And you can just choose to race ahead while your organization is figuring the rest of this stuff out, 'cause they're just not moving fast.
[00:33:21] Mike Kaput: A hundred percent. All right,
[00:33:23] Dario Amodei on Dwarkesh
[00:33:23] Mike Kaput: Our third big topic this week: Anthropic CEO Dario Amodei sat down for about a two-hour interview on the Dwarkesh Podcast this past week. And he argued the world is not treating what he calls the end of the AI exponential with the seriousness it deserves. He said the most surprising thing about the last three years in AI is not the pace of progress, which he said has actually tracked roughly to what he expected, but, quote, the lack of public recognition of how close we are to the end of the exponential.

[00:33:52] So he walked through what he actually means by that, which is that the scaling laws that, you know, he among other people first started [00:34:00] documenting in 2017 are holding. Reinforcement learning is now showing the same log-linear gains as pre-training. And he actually says his personal hunch is we're going to see what he calls a country of geniuses in a data center, i.e., AI tools and models with those capabilities, in just one to three years.

[00:34:19] And he actually talked a bit about what happens after that technology arrives. And, you know, to a theme we've been hitting on, he argued the real uncertainty here is not AI capabilities, but diffusion of AI. A couple other interesting things came outta here, Paul, and then I want to kind of get your take.

[00:34:35] What was most important to pay attention to? He said Anthropic's revenue went from zero to 10 billion in three years, as an example of, you know, diffusion takes time. Claude Code started as an internal tool and then kind of took off to become this category-leading product. But even at that pace, he said, enterprises move slower than startups.

[00:34:56] If you buy, you know, a trillion dollars of compute and your revenue [00:35:00] projections in this space as a lab are off by even a single year, that's the diffusion problem. He said, there's no force on earth that could stop me from going bankrupt. So he covered a ton of different things here. Paul, what jumped out at you?
[00:35:12] Paul Roetzer: I just think, again, any time you get a chance to listen to Dario or Demis in particular, those are the two I'm most interested in hearing from in these long-form interviews. I think there are just always gonna be elements that are important when you're trying to connect the bigger dots of where this all goes.

[00:35:28] So the exponential he refers to is this: he wrote a paper, I couldn't find it, I was trying to see if this was something published, but I dunno if it was internal. It's called the Big Blob of Compute Hypothesis. Hmm. And so it's basically back in 2017, the early days, where a group of researchers were projecting out the scaling laws, like, okay, if we just give it more compute, meaning more Nvidia chips, the quantity of data, the quality of that data, and the distribution across knowledge,

[00:35:54] and then how long you train for, in essence, these things are just gonna keep getting smarter. And so [00:36:00] this is, you know, the pre-training scaling law that people thought in 2024 was starting to run up against some walls. But most people were like, yeah, it's not going to, and it didn't.

[00:36:10] And then that's when the reasoning capabilities started emerging. Then the post-training, or the reinforcement learning, he talked a lot about that and the value they're seeing there. And that's where we're seeing things like Claude Cowork. A lot of the stuff coming from Anthropic today is coming out of this post-training era, I guess.

[00:36:28] And then inference is, you know, the test-time compute scaling law. So those together are the three scaling laws, pre-training, post-training, and test-time compute, the current three that the labs are kind of leaning into. I dunno, a couple other just overall notes. He talked a lot about the context window. Again, think of it as the short-term memory for these chatbots, that the models can remember back to like a million tokens of conversation, and then do some compression within even those million.

[00:36:58] And they're just finding [00:37:00] really creative ways to get more out of these models, and a lot of it is coming from that. The growth was pretty crazy. And again, we've covered that growth, but it's still hard to believe these numbers. Like, it's hard to imagine these are real numbers. The continual learning, I thought, was an interesting one.

[00:37:17] We've touched on that quite a bit in 2026. I think at least two or three episodes we've brought this back up. Again, this is a key unlock, as I've alluded to in recent episodes, maybe even last week. The idea of continual learning is, historically, when you train a model, it has a cutoff date. It learns up to a point and then it doesn't learn anything else beyond that.

[00:37:37] And then they do the post-training on top of it. And then the way that it's able to access real-time information is through tool use, like it gets access to search and things like that. So let's say a knowledge base cutoff was, just for arbitrary sake, August 2025.
[00:37:50] Mike Kaput: Yeah.
[00:37:50] Paul Roetzer: If a model is trained with that cutoff date, it knows nothing about the world beyond that.

[00:37:55] And then it only learns from access to knowledge and information lookup and stuff. [00:38:00] So the idea of continual learning is that the models, yeah, don't just stop at the cutoff date. They actually continually learn, like a human would. Like, you're always learning. Now, Elon Musk has teased that Grok 4.2 has some element of continual learning.

[00:38:15] I don't believe it. I don't know how they're simulating it, but I don't think they've actually unlocked continual learning. Otherwise, there would've been a much bigger deal made out of it. But he does talk a lot about how they think this is largely solvable, that they're on the path to do this.

[00:38:30] And it might just be through the context window. The planning and buying compute one was interesting. He doesn't shy away from making veiled comments about Sam Altman and OpenAI, and this was a pretty direct one. He said, there's no hedge on earth that could stop me from going bankrupt if I buy too much compute.

[00:38:50] Right. So basically they have to project out three to five years. And he said, so even though a part of my brain wonders if it keeps growing at 10x, meaning that revenue, I can't buy a trillion dollars [00:39:00] a year of compute in 2027. If I'm off by a year in that rate, if the growth rate is 5x instead of 10x, then you go bankrupt.
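That compute bet can be sketched with some quick arithmetic. The Python below is a hypothetical illustration only: the starting revenue, growth multiples, and commitment figure are made-up numbers standing in for the scenario Dario describes, not any lab's actual financials.

```python
# Hypothetical sketch: committing to compute years in advance is a bet
# on the revenue growth rate. All numbers are illustrative.

def projected_revenue(start_revenue_b: float, yearly_multiple: float, years: int) -> float:
    """Project annual revenue (in $B) forward at a fixed yearly growth multiple."""
    return start_revenue_b * yearly_multiple ** years

# Suppose a lab at $10B in revenue commits to $1T/year of compute two years out.
commitment_b = 1_000  # $1T, expressed in billions

revenue_at_10x = projected_revenue(10, 10, 2)  # $1,000B: the commitment is covered
revenue_at_5x = projected_revenue(10, 5, 2)    # $250B: a $750B shortfall

print(revenue_at_10x >= commitment_b)  # True
print(revenue_at_5x >= commitment_b)   # False
```

In other words, at this scale the difference between 5x and 10x annual growth is not a rounding error; it is the entire balance sheet.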
[00:39:08] So it's interesting, because OpenAI is obviously insanely leveraged towards the future. Like this Stargate, which apparently just collapsed. It was, what, day one Trump was in office where they're like, oh, this Stargate thing, and, you know, a trillion dollars or whatever they're gonna spend on this.

[00:39:23] Yeah. It's not happening. It's completely fallen apart. And so you're starting to see these kinds of cracks in these massive future investments that OpenAI is making, and Dario's kind of comfortable to just take the middle ground on risk, is basically what he was doing. But at the same time he did say, it's hard for me to see that there won't be trillions of dollars in revenue before 2030.

[00:39:47] Like, I can construct a plausible world where it might be maybe three years. So his confidence is extremely high that they're going to keep growing really, really fast. But he's trying to be somewhat [00:40:00] responsible in what they do. And then I did think it was interesting, they asked him about how they charge for this stuff,

[00:40:04] since we've been talking recently about how software companies would charge for this. So, you know, they obviously make a ton of money through their API, which basically is charged like a utility, like the developers use their API. But he said, I could see at some point we're going to pay for results, or maybe some form of compensation that's like labor. Which is interesting, 'cause that was what I was hypothesizing: at some point you're just gonna pay, like, as a labor replacement.

[00:40:30] And he did sort of generally address those couple of ideas, and then he went into regulation. So, yeah, I just think if you're interested in understanding the macro level, any chance you get to listen for two and a half hours to Dario or Demis talk, take it. Like 80% of it you might be able to throw away: eh, it's too technical, or it doesn't affect my daily life.

[00:40:52] But the 20% of it might change the way you think about the future. And that's why I always make a point of listening to [00:41:00] these. You just never know when you're gonna find the pieces that help you understand what's going on with an Anthropic or with an OpenAI.
[00:41:07] Mike Kaput: For sure. One additional item that just jumped out to me because I hadn't really thought about it.
[00:41:12] He stated at one point he thinks the AI industry will reach an equilibrium where companies divide their compute spending equally: roughly 50% dedicated to training, 50% dedicated to inference. But inference, interestingly, yields extremely high gross margins, around 75%. So he basically thinks you get to this point of scale where you're essentially lighting money on fire for years to build the models,

[00:41:38] then when you get to the point where you start serving them at that balance, you turn on, like, a money-printing machine, is kind of how he puts it.
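The 50/50 split and 75% margin figures imply a simple identity worth making explicit. This sketch uses arbitrary units and assumes, per the interview summary, that training compute generates no direct revenue while inference is sold at a 75% gross margin; none of these figures are Anthropic's actual numbers.

```python
# Illustrative units only: total compute spend of 100, split 50/50,
# with inference sold at a 75% gross margin (the equilibrium described).
total_compute = 100.0
training_spend = 0.5 * total_compute   # pure cost: no direct revenue
inference_spend = 0.5 * total_compute  # cost of serving models to users

inference_margin = 0.75
# If gross margin is defined on revenue, then revenue = cost / (1 - margin).
inference_revenue = inference_spend / (1 - inference_margin)

overall_profit = inference_revenue - total_compute
print(inference_revenue)  # 200.0
print(overall_profit)     # 100.0: inference covers all compute, in theory
```

Under these assumptions, inference revenue alone pays for both halves of the compute bill and leaves a profit equal to the entire compute budget, which is the "money-printing machine" intuition.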
[00:41:46] Paul Roetzer: Yeah. So, in

[00:41:47] Mike Kaput: theory,

[00:41:47] Paul Roetzer: yeah, I thought about that a lot too. But I would think the economic pressure would make it so that's not true.

[00:41:53] Mike Kaput: That's not as profitable over time, 'cause
[00:41:55] Paul Roetzer: Yeah, because that's the basic premise. So this is, again, like, Wall Street has generally just been [00:42:00] completely clueless on the entire AI industry for 10 years. So if you recall, Nvidia cratered last year at one point, like 12% in a day. Yeah. And it was because something happened with Groq or Cerebras or, like, one of those next-gen chip companies or inference companies, something like that.

[00:42:18] And they basically thought that the value of Nvidia's chips was just gonna crash. Like, oh, somebody figured this out. What they didn't realize was that the Nvidia chips from six years ago were at 100% capacity, because they're actually great for inference. So these chips that largely had, say, I don't know, like a four-to-five-year depreciation run rate, where over time they're worth nothing, they realized, oh, that's not true.

[00:42:37] Well, who has a ton of chips? Nvidia. Google has TPUs, same deal. These TPUs from two generations ago are actually great for inference. And inference, again, is a very technical-sounding term. It just means when you and I use the thing. So like, if I go into HubSpot, I was talking recently about HubSpot's credit model.

[00:42:58] When I go into HubSpot and I [00:43:00] talk to its AI chatbot or whatever internally, to talk about some data I have, that's inference. It's gonna pull on the API from Claude or OpenAI or whoever. So they're paying an API credit to Claude, to Anthropic, and then we are paying for that credit to be served to us.

[00:43:17] So that's the moment of inference, when we actually use AI. It can be on your phone, it could be the chatbot, it could be whatever. But yes, what they're saying is the demand for inference, the demand for use of AI in all these devices, is going to skyrocket, and they make more money when you and I use the thing than when they pre-train the thing, basically.
[00:43:35] Mike Kaput: And you're kind of saying also, though, conversely, over a long enough time period, the profit margins on that will slim, because presumably people compete, and Google will say, hey, we'll charge you way less to be
[00:43:46] Paul Roetzer: able to, yeah, serve this. But I think there comes a point, yeah,

[00:43:49] where you could think someone like a Google could just undercut the market and, yeah, say we're willing to take it as a loss leader for years because we can. But [00:44:00] you hit a saturation point where it's like, let's say Opus 4.6 from Claude is just good enough for, like, 95% of what we would do as a company, right?

[00:44:10] And so they could be on Claude 7 in three years, and we might still generally be using Claude 4.6, which by that point is gonna be 100x cheaper than it is today.
[00:44:22]
[00:44:22] Paul Roetzer: And so how in the hell would they have those margins on the average knowledge work task? Like, I get the scientific stuff: mathematics,

[00:44:33] drug discovery, super advanced stuff. You're always gonna want the frontier model, the thing that's the best of the best.

[00:44:39] Mike Kaput: Yeah.

[00:44:39] Paul Roetzer: But for, like, writing emails and landing pages, a chatbot, it's good enough. I don't need another generation of models. So we know that those costs are roughly dropping 10x every year.

[00:44:52] So I don't know. Like, I would love to hear him explain the economics of how that remains a high-margin business. It's [00:45:00] not intuitive to me how that would be true.
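The skepticism here rests on the cost curve Paul cites: roughly a 10x drop per year in the cost of serving a fixed capability level. A short sketch, with a hypothetical $1.00-per-task starting cost, shows why a "good enough" model gets very cheap very fast.

```python
# Hypothetical sketch of the cost decline Paul cites (~10x cheaper per year
# to serve the same capability level). The $1.00 starting cost is arbitrary.
def cost_after(years: int, starting_cost: float = 1.0, yearly_drop: float = 10.0) -> float:
    """Cost per task for a fixed model after N years of declines."""
    return starting_cost / yearly_drop ** years

print(cost_after(2))  # 0.01: 100x cheaper after two years, matching Paul's figure
```

If that trajectory holds even approximately, defending a 75% gross margin on commodity knowledge-work tasks would require demand to grow faster than prices collapse, which is the tension Paul is pointing at.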
[00:45:02] Mike Kaput: Yeah. Interesting. All right, before we dive into our rapid-fire topics this week, just a quick additional announcement.

[00:45:10] This episode is also brought to you by our AI for Departments webinar series. We've talked about this quite a bit, and it's actually happening this week, starting the day that you hear this episode. So February 24th, 25th, and 26th, we are doing a series in which myself and Paul will break down our latest AI for Departments blueprints: AI for Marketing on the 24th, AI for Sales on the 25th, and AI for Customer Success on the 26th.

[00:45:37] So we'll actually talk through the key takeaways and answer your questions in Q&A. All of these blueprints and webinars are presented by our friends and partners at Google Cloud. Registration is of course free, and you will receive ungated access to the AI blueprints for the webinar or webinars that you register for.

[00:45:57] So I would say, if you have not signed up [00:46:00] for one or more webinars for this coming week yet, go to smarterx.ai/webinars. You'll see the registration page right there. I would also say, one
[00:46:10] Paul Roetzer: quick note I'll make on that one, Mike. I would classify these as, like, 101 level. The blueprints themselves I would download at any point.

[00:46:17] But I would say the webinars are very focused on trying to help people find those three to five key use cases. We expand beyond that and we talk about the use cases. But if you're already an advanced user, when I went through those seven steps and you're like, I'm doing all that stuff,
[00:46:34] Mike Kaput: right,
[00:46:34] Paul Roetzer: send this to your coworkers who aren't.
[00:46:36] Yep. Like, that's who this audience is for. We're really trying to hone in on that intro, 101 level: get people comfortable and confident with the fundamental uses, explain how you can use reasoning and multimodality, trying to help them understand it's more than an answer engine. And so that's a big focus of what we're doing with these webinars.
[00:46:54] Mike Kaput: Yeah. Especially any of those people who said, hey, it didn't work for me. This would be a really good webinar to send [00:47:00] them to. One other quick reminder: we are currently running a survey to inform our 2026 State of AI for Business report. We've talked about this as well a couple times. It's an expansion of our popular State of Marketing AI report that we've done for the last five years.
[00:47:14] So this year we're actually going beyond marketing specific research to uncover how AI is being adopted and used across companies. So we're hoping to survey thousands of business people across all industries and functions. We would love for you to be one of them. The survey takes only about five to seven minutes to complete.
[00:47:32] In return for completing it, we'll send you a copy of the report when it drops, plus a chance to win or extend a 12-month AI Mastery membership from SmarterX. So go to smarterx.ai/survey to share your input. That is smarterx.ai/survey. We'll drop all this in the show notes as well.
[00:47:52] All right, so let's dive into some rapid fire Paul.
[00:47:55] Dor Brothers AI Movie and the Rise of Seedance
[00:47:55] Mike Kaput: First up, a new AI video tool went from a tech demo to an industry-wide copyright crisis in about a week. As is typical in AI, that's how quickly things move. So ByteDance released a tool called Seedance 2.0 this past week, and users immediately pushed it past anything the company appears to have anticipated.

[00:48:17] So, allegedly, allegedly, which we'll talk about, the Dor Brothers, who are Berlin-based creators, they're quite popular, published a pretty stunning three-minute movie trailer made using the tool. They claim they made this in 24 hours, for very cheap, and they claim it's the same level of quality as a $200 million Hollywood production.

[00:48:40] The trailer drew some praise for being super photorealistic, but some people also criticized it for some visual errors and hollow storytelling. Things also escalated when someone made a video, which went viral, of Brad Pitt fighting Tom Cruise. It was generated from a two-line prompt and hit [00:49:00] millions of views.

[00:49:01] Users also recreated a scene that was very visually intense from the 2025 film F1, and it cost 9 cents. An actor discovered his own likeness was in a video someone made that he never filmed. So, predictably, Hollywood had some things to say about this. The Motion Picture Association condemned Seedance for what it called unauthorized use on a massive scale.

[00:49:24] Disney and Paramount sent some cease-and-desist letters. The actors' union SAG-AFTRA called it blatant infringement. ByteDance kind of claimed it was caught off guard and said it will strengthen its safeguards. So Paul, I just liked this tweet from one of the writers on the movie Deadpool, who retweeted that Tom Cruise, Brad Pitt video and said the following, quote: I hate to say it's likely over for us. Which was definitely a strong reaction, but it sounds like people were freaking out over this and/or trying to stop copyright infringement.
[00:49:57] What did you make of all this drama?
[00:49:59] Paul Roetzer: So if [00:50:00] ByteDance sounds familiar to people, it's the China-based technology company that's the parent company of TikTok, which I think recently had to sell. I don't remember exactly what ended up happening. I think TikTok had to sell to like a US
[00:50:10] Mike Kaput: Yes.
[00:50:10] Paul Roetzer: company. Okay. Yeah. So if it sounds familiar, that's the same company.
[00:50:14] So there was an article in the Hollywood Reporter on Friday. So this kind of escalated throughout the week, Mike, as you were saying. This article says that Hollywood's top studios aren't satisfied with a promise from ByteDance on February 16th to tamp down on unauthorized use of intellectual property on Seedance 2.0.
[00:50:33] As a new letter from the Motion Picture Association demonstrates, the trade association sent a strongly worded cease-and-desist letter to the Chinese tech giant on Friday, alleging systematic infringement by the tool. The Hollywood Reporter confirmed it's the first time the Motion Picture Association has sent a cease and desist to a major generative AI company, even though they've all obviously trained on the same data.
[00:50:57] The letter, which is framed as a collective industry [00:51:00] response to Seedance 2.0, argues that unauthorized use of IP by its video tool isn't a mistake, but rather it's baked into the tech. Quote: The scale and consistency of these results demonstrate systematic infringement rather than inadvertence. In other words, Seedance's copyright infringement is a feature, not a bug.
[00:51:19] Since then, Netflix, Warner Brothers, Disney, Paramount, and Sony have sent their own legal threats to ByteDance. In its own cease-and-desist letter, Warner Brothers described the Chinese tech company as following a familiar playbook for generative AI tools: infringing on copyright for marketing purposes, and then adding in guardrails once the legal threats roll in.
[00:51:35] OpenAI did the exact same thing, by the way. Yeah. So yeah, again, I don't know where this all goes. I get asked every time I go do talks now about the ethics of all of this. Like, how do you justify using tools when we know all the labs are doing the same thing? There's Ed Newton-Rex.
[00:51:57]
[00:51:57] Mike Kaput: Rex?
[00:51:57] Paul Roetzer: Yeah. Yeah.
[00:51:57] Mike Kaput: Yeah.
[00:51:58] Paul Roetzer: So he was at [00:52:00] Stability AI. He worked on the video models at Stability AI in the early days, and he's sort of, I don't know, he's like the self-appointed guy who's just calling out all these labs on their training data. And it's funny, I dunno if this is funny or not, but he's basically been blocked by every AI lab leader, because every time one of them puts something out, he's like, what's the training data?
[00:52:20] Right. And they've all just blocked him. And he tweets, like, oh, blocked by another AI lab leader, basically. So they all have done it. They all know they did something that is probably illegal, but certainly ethically questionable. So there's the legal and ethical side of this, and then there is the impact on actual actors and
[00:52:45] Mike Kaput: Right.
[00:52:45] Paul Roetzer: Like, I had a conversation recently with a head of marketing who was considering just using AI actors instead of hiring real actors, because it would be done 10 times faster and cheaper.
[00:52:59] Mike Kaput: Yep.
[00:52:59] Paul Roetzer: And so, like, [00:53:00] why would we go through the trouble? So I'm not saying they're gonna use, like, Hollywood actors; they would obviously get sued if they did that on an individual basis.
[00:53:09] But the idea that this is just gonna start to change the ad industry and the movie industry and the TV industry, I don't think that's like a future thing. I talk to people who are right now making these decisions with 2026 budgets based on where this tech's at already. So this is a very real problem.
[00:53:30] Mike Kaput: Yeah. I always get super nervous too, because I really respect and understand the perspective here, where a lot of people criticize that Dor Brothers video, being like, okay, cool, but this is soulless, and humans can tell stories better. There's a human element and a soul to this stuff.
[00:53:45] I would agree with that, but I also look at how many times people have said, well, the tech can't do this, it can't do that. Right. And you bet against the exponential, you get steamrolled every single time. So I'm not saying we're there. I [00:54:00] don't know if we are. In terms of, like, I haven't personally, I don't know about you,
[00:54:03] I haven't really seen anything totally AI generated where I'm like, wow, that was art to me, necessarily. But I'm also like, who knows? Three years is a heck of a long time. We just don't know.
[00:54:16] Paul Roetzer: Yeah. You and I know this, but anyway, you just can't look to the future and assume that any of these things won't get solved to where it truly is the same quality or beyond.
[00:54:31] Most of the limitations we would see today are going to be solved for. So,
[00:54:36] Mike Kaput: yeah.
[00:54:36] Paul Roetzer: Yeah. I don't know. I don't think as deeply about this one as I do about the journalism stuff we're gonna talk about. But this is a real challenge for the industry: for the movie industry, for the ad industry, for the legal industry.
[00:54:50] They're just gonna keep pushing the boundaries, and whatever you see as frontier today is open source 12 months from now. So you can try to slow down [00:55:00] the oncoming train, but this is coming one way or the other, whether people want it to or not.
[00:55:04] Mike Kaput: Yeah. Alright, next up, Anthropic.
[00:55:07] Claude Sonnet 4.6
[00:55:07] Mike Kaput: released Claude Sonnet 4.6 this past week.
[00:55:10] This is a full upgrade to its mid-tier model, covering coding, computer use, long-context reasoning, and knowledge work. The model is now the default for free and paid users on claude.ai and within Claude Cowork, and it is at the same price as its predecessor: three bucks per million input tokens, 15 bucks per million output.
[00:55:28] And here's some interesting stuff about this, though. In testing, users preferred Sonnet 4.6 over its predecessor roughly 70% of the time, and over the previous Opus flagship model, the bigger, more powerful model, 59% of the time. According to one customer, it actually matches Opus 4.6, which is the current model,
[00:55:50] on enterprise document comprehension and closes the gap on tasks like bug detection. It's also got a 1 million token context window in [00:56:00] beta, improved computer use for navigating complex interfaces, and better instruction following with fewer hallucinations and false success claims. So Paul, I'd love to get your thoughts here.
[00:56:11] One really important thing I thought came out of the announcement post. They said the following, quote: performance that would've previously required reaching for an Opus-class model, including on real-world, economically valuable office tasks, is now available with Sonnet 4.6. I mean, we've talked about this before, that we're not only making more powerful models.
[00:56:34] The next generation of faster, lighter, smaller, less energy-intensive models are now rivaling very recent top-tier frontier models. I feel like the cost of intelligence is going to zero, and you get incredible human-level intelligence in lighter and lighter, less energy-intensive models.
[00:56:55] It's crazy.
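To put the per-token pricing Mike quotes in concrete terms, here's a minimal cost sketch at those rates. Only the $3 and $15 per-million figures come from the discussion above; the function name and the example token counts are illustrative assumptions, not from the episode.

```python
# Minimal sketch of per-call cost at the quoted Sonnet 4.6 rates:
# $3 per million input tokens, $15 per million output tokens.

def call_cost(input_tokens: int, output_tokens: int,
              input_rate: float = 3.00, output_rate: float = 15.00) -> float:
    """Dollar cost of one model call, with rates given per million tokens."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Hypothetical example: a long-document task with 200k tokens in, 4k tokens out.
print(f"${call_cost(200_000, 4_000):.2f}")  # $0.60 input + $0.06 output = $0.66
```

At these rates, even a call that fills most of the beta 1-million-token context window costs a few dollars, which is the "cost of intelligence going to zero" point in rough numbers.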
[00:56:55] Paul Roetzer: Yeah. And I think I mentioned this last week, but there was some online chatter that this was [00:57:00] actually supposed to be Opus 5, and that they basically just kept it in the Sonnet class for different reasons. So right now Anthropic has Opus 4.6, which came out in, was that November of '25, I think?
[00:57:14] Mike Kaput: Yeah, I think that sounds right, it was in November. Or it might've been Opus 4.5 that was in November. Let me check, I'm not sure,
[00:57:22] Paul Roetzer: because that was the one that made everyone freak out.
[00:57:25] Mike Kaput: Yeah. Yeah,
[00:57:26] Paul Roetzer: Because I think it came out in November, and then all of a sudden the Claude Code thing took off over Christmas break.
[00:57:30] Mike Kaput: Exactly. Right.
[00:57:32] Paul Roetzer: So they have Opus 4.6, which is the most powerful model; Sonnet 4.6, which I tested on one use case last week and I was blown away;
[00:57:39] Mike Kaput: Yep.
[00:57:39] Paul Roetzer: and then Haiku 4.5, which is the lightweight version. METR research, which we often talk about, looks at this whole idea of how long something takes a human expert to do, and then what's the reliability, at 50%, that the AI agent could do it.
[00:57:56] So this thing is off the charts. Basically, the [00:58:00] exponential is real. We went from four hours,
[00:58:05] Mike Kaput: yeah,
[00:58:05] Paul Roetzer: like three months ago, and now it's at 14.5 hours. So basically what that means, and again I'll keep this to a rapid fire, is METR looks at how long it takes a human expert to do a task.
[00:58:18] And then they take these new models and see, can AI complete that task at a level of reliability? 50% is the threshold that gets talked about a lot. Now, the tasks METR tests on are specifically related to software engineering, machine learning, or cybersecurity. But the interesting thing is, when METR published this finding, there's this leap in Opus 4.6. So actually I'm talking about Opus 4.6 here, not Sonnet.
[00:58:46] They just released the findings this week on Opus 4.6, and they said that, one, they don't have tasks in their database that take humans 14 and a half hours. So they actually are having to rethink how they [00:59:00] do the human side of this, 'cause they now have to find tasks that take humans that long so they can test these models.
[00:59:06] So they basically have reached saturation on their own benchmarks. And the reason I bring this up is because, again, this is just for software engineering, machine learning, cybersecurity. This same path forward is what is gonna happen in other industries, and it's why your company needs its own evals. Look at things that take your people
[00:59:24] one hour, two hours, five hours. And then, as these agents start getting built into other areas of work, you need to be able to go in and say, okay, well, this is something that used to take us five hours. If we run this 10 times, the agent actually nails it eight out of the 10 times.
[00:59:41] Mike Kaput: Right?
[00:59:41] Paul Roetzer: That's the kind of testing that's gonna need to happen at your company to know what this stuff really means to you. You cannot wait 12 months for McKinsey or somebody to publish a study, or for METR to do their research. You've gotta build your own, because their evals are getting saturated. Anyway, I've alluded to this.
[00:59:58] I'm working on an idea of how to [01:00:00] do this, and I think it's really, really important to teach companies and empower them to build their own benchmarks and their own evals for this stuff. But this is why these models come so fast now, and we gotta be able to figure out, what does this mean to me, and should we be switching what we're doing based on it?
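The in-house eval loop Paul describes, run an agent repeatedly on the same task and compare its pass rate to a reliability threshold (50% is the figure METR's work uses, per the discussion above), can be sketched in a few lines. This is a hypothetical illustration: `run_agent` is a stub standing in for whatever task runner your team actually wires up, and here it just simulates an agent that succeeds about 80% of the time, like the 8-out-of-10 example.

```python
import random

def run_agent(task: str) -> bool:
    """Stub for your real agent call. Returns True if the run succeeded.
    Simulated here with an ~80% success rate for demo purposes."""
    return random.random() < 0.8

def eval_task(task: str, trials: int = 10, threshold: float = 0.5) -> dict:
    """Run the agent `trials` times and report pass rate vs. the threshold."""
    passes = sum(run_agent(task) for _ in range(trials))
    rate = passes / trials
    return {"task": task, "pass_rate": rate, "reliable": rate >= threshold}

random.seed(0)  # deterministic demo run
result = eval_task("draft the weekly client report")
print(result)
```

Tracked over successive model releases, the per-task pass rate gives you an in-house version of the saturation curve Paul describes, without waiting on a McKinsey study.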
[01:00:15] Mike Kaput: Paul, to that last point, I had to do a double take looking this up. We whiffed on this. Opus 4.5 literally was released November 24th. That's what kicked everything off. Opus 4.6 came out on February 5th, and we definitely covered it, but that feels like it was ages ago.
[01:00:32] Paul Roetzer: So that was three weeks ago.
[01:00:33] Mike Kaput: Incredible.
[01:00:34] Paul Roetzer: Okay. So they had two months between 4.5 and 4.6.
[01:00:39] Mike Kaput: Yeah.
[01:00:39] Paul Roetzer: Again, like that goes back to the pulse survey. You think you're keeping up with this stuff? It's like, geez.
[01:00:44] Mike Kaput: No kidding. That's why we've got a NotebookLM with all these episodes, 'cause I can't keep track, seriously.
[01:00:51] OpenClaw Creator Goes to OpenAI
[01:00:52] Mike Kaput: All right, next step.
[01:00:52] The creator of OpenClaw, the fastest growing open source project in GitHub history, is joining OpenAI to lead its next [01:01:00] generation of personal agents. Peter Steinberger is an Austrian developer who previously spent 13 years building PSPDFKit, a PDF software company. In his free time, as a side project in late 2025, he published OpenClaw, which you've probably heard about.
[01:01:16] It's an autonomous AI agent that runs locally on your machine and acts through the messaging apps you already use. So it can manage emails, control browsers, and execute tasks without waiting for prompts. When this came out, it hit 60,000 GitHub stars, which is kind of how I measure popularity, in 72 hours, and crossed 145,000 within two months.
[01:01:37] And interestingly, Steinberger had originally named it Clawdbot, which you may also have heard it called, after Anthropic's Claude. Anthropic sent a cease and desist, so they renamed it Moltbot and then OpenClaw within a few days, which we covered on a previous episode. This got tons of traction, tons of attention,
[01:01:56] 'cause OpenClaw is basically just an agent without [01:02:00] any restrictions or limits. So people are turning their whole lives over to it and having it do all sorts of crazy stuff online. Multiple companies were also impressed and were courting Steinberger to join them, including Meta's Mark Zuckerberg, who personally reached out.
[01:02:15] He turned them down, saying he wanted to change the world, not build a large company, and instead chose to join OpenAI. His stated goal, in a post he published, is to build an agent that even my mom can use, and OpenClaw will continue as open source under a foundation. Terms of the acquisition are not fully disclosed.
[01:02:33] We don't know how much he got paid. You know, it's kind of framed as more of an acquihire. So Paul, I was curious, when you heard this news, it seems like OpenClaw went very quickly from viral X topic to a very real difference in trajectory for Peter.
[01:02:48] Paul Roetzer: Yeah. When we talked about it, we sort of put this under the context for the non-technical audience of like, it's noteworthy.
[01:02:55] This is like signs of what's to come, but you probably shouldn't go do this [01:03:00] yourself.
[01:03:00] Mike Kaput: Right?
[01:03:01] Paul Roetzer: And to build on that, I actually saw a tweet this morning from the Director of Safety and Alignment at Meta, who apparently decided to give OpenClaw full access to her computer. And she tweeted,
[01:03:18] so she publicly acknowledged this happened to her, quote: nothing humbles you like telling your OpenClaw, confirm before acting, and watching it speed-run deleting your inbox. I couldn't stop it from my phone. I had to run to my Mac Mini like I was defusing a bomb. And she actually put up the screenshots where she's like, don't do that.
[01:03:40] Stop, don't do anything. Stop, OpenClaw. She's trying to tell it to stop, oh my God. But basically, she connected it to her email, and it just decided to compress everything and get rid of it all, because it was just too much to process.
[01:03:52] Mike Kaput: Oh Jesus.
[01:03:52] Paul Roetzer: So like, again, user warning, this stuff is early.
[01:04:00] Even technical people don't necessarily understand what they're messing with. This is not stuff that the average knowledge worker or business leader should be going and doing. It is very early. It's kind of like when computer use first started, where you give it access to your screen and all these things. Just user beware.
[01:04:17] Okay. Like be cautious with this stuff, even if you know what you're doing.
[01:04:22] Mike Kaput: But it does also, I would say, probably prove that this is where we're headed, right? Yes. Not something necessarily unfettered like OpenClaw, but the fact that OpenAI, I'm sure, has paid him a boatload of money to join them. It's such a high-profile thing. Agents are not a passing fad, I would say.
[01:04:39] Paul Roetzer: No, no. It's, it is important to understand and to watch all of this emerging, but just if you are an early adopter or an innovator, be cautious as you're dipping your toes in is all I'm saying. And like, it's okay to be on the frontiers and figuring this stuff out, but like you are now living in the future, like for sure when you're messing around with this stuff.
[01:04:58] No kidding. [01:05:00]
[01:05:00] OpenAI Devices and AI Devices
[01:05:00] Mike Kaput: Next up, we've got some updates on three of the major AI labs building AI hardware. First, The Information reported this week that OpenAI's device team, led by former Apple designer Jony Ive, is developing a family of products, which we knew, that is now confirmed to include a smart speaker with a camera in the two to $300 range, targeted for the second half of 2026.
[01:05:24] According to some additional reporting and rumors, the speaker will use facial recognition similar to face id. It'll enable purchasing and proactively suggest actions. Additional devices are in development from them also including earbuds, a pen device, and AR glasses. They have an ambition to ship a hundred million devices faster than any company has done in the past.
[01:05:45] Separately, around the same time, The New York Times reported that Meta plans to add a facial recognition feature called Name Tag to its Ray-Ban smart glasses, and they've sold over 7 million of those units in 2025. This basically lets you, as a [01:06:00] wearer, identify people and pull up info via Meta AI through the glasses.
[01:06:05] And Bloomberg also reported that Apple was ramping up work on three new AI wearable categories: smart glasses targeting a 2027 launch, an AirTag-size pendant with an always-on camera and microphone, and camera-equipped AirPods that could arrive as early as this year. So Paul, what did you make of these developments?
[01:06:25] Updates, I mean, like it or not, seems like we're headed towards the always recording future here if these devices pan out.
[01:06:33] Paul Roetzer: Yeah, so, I mean, the leaks are flowing, man. No kidding. The Information article about OpenAI is by far the most detailed I've seen about any of this stuff. For sure. I mean, we don't wanna just gloss this over in the rapid fire, but they say they have 200 people working on devices, which is way more than their safety and alignment team.
[01:06:51] So one thing it alludes to is the fact that OpenAI is, through and through, a commercial product company. They've made the shift [01:07:00] basically into this future. I think it was interesting they had some details on the Jony Ive thing, and maybe I was sensitive to this 'cause I saw the Stargate stuff collapsing over the weekend, and all these partnerships, and they're racing to do all these different things: robotics and the brain interfaces and the consumer products and building models. And it's like, Sam Altman's time is only gonna be able to be stretched so much, which is
[01:07:22] you know, why they have another CEO. But the Jony Ive thing doesn't sound like it's going great. I mean, again, it's one article, but The Information said, despite the deal, Ive's involvement with OpenAI is complicated. He still runs his design firm LoveFrom as an entity independent of OpenAI,
[01:07:41] even though it's LoveFrom that is in charge of coming up with potential OpenAI device designs. So basically you have an outsider who's in charge, it seems, but there's a 200-person team internally that's also working on this,
[01:07:55] Mike Kaput: right?
[01:07:56] Paul Roetzer: and it says, meanwhile, OpenAI's internal devices team is in charge of making the [01:08:00] hardware and the software powering it.
[01:08:01] So once LoveFrom comes up with what they're gonna do, then OpenAI's team takes over, it sounds like, as well as understanding how consumers will use the device. The division of responsibilities has sparked tensions. Some OpenAI staffers have complained that LoveFrom has been slow to revise its designs and shares little about its process of coming up with new ones, even with other workers on devices within OpenAI. That secrecy and meticulous focus on design is par for the course for Apple, where a number of device staffers and leaders came from.
[01:08:30] Apple has strict rules around which employees are allowed to know about various projects. Something to keep an eye on. Yeah. Now, the Name Tag stuff from Meta: if you think that they let that get out in the midst of a completely chaotic period in society in hopes that no one would notice, you would be a hundred percent accurate.
[01:08:49] Like, that is some creepy-ass stuff that they have probably been planning for half a decade or more, starting with allowing you to tag people in the photos you upload to Facebook. [01:09:00] They always wanted to build basically a database of faces and names. And so the idea is that Meta knows who everyone is based on Instagram and Facebook and all this stuff, and they're gonna infuse that into the glasses.
[01:09:14] So as you're at the gym with your glasses on, you could look at somebody and know who they are. Black Mirror stuff, it's real. I don't know, that stuff terrifies me. So yeah, that's happening. And they don't want you to notice it and make a big deal out of it, is all I'll say there.
[01:09:36] And then the Apple stuff, man. Like, Bloomberg, just again, the leaks, no kidding, are crazy. Apple does not let details like this out. Yeah. So the smart glasses we've talked about, the pendant we've talked about, but there were a couple of details in there. Like, one, they had the code name, N50.
[01:09:52] This stuff doesn't come out; this is very unusual for Apple. They talked about some of this being like the [01:10:00] Humane AI Pin, which I was critical of for good reason when it failed. Right. But the pendant and things like that, and the cameras in the AirPods, they're actually talking about more as not recording devices, but awareness devices.
[01:10:14] So they're obviously like seeing and understanding what's happening around you, but they're not actually recording things.
[01:10:18] Mike Kaput: Yeah.
[01:10:19] Paul Roetzer: is kind of how they're playing it. So, I don't know, I would go read the Bloomberg article; there's a ton of detail in there. And I find this stuff personally fascinating, especially when we're talking about Apple, who, you know, has the capability to actually do this stuff.
[01:10:33] But they said the glasses prototype is being shared internally. They have people testing it, and they're targeting to start production on those as early as December of this year, with a 2027 release. Now, that can obviously change.
[01:10:47] Mike Kaput: Yeah.
[01:10:47] Paul Roetzer: But if Apple gets in the game. And they even talked about, like, they were originally thinking about doing deals with outside manufacturers, but now it sounds like the current path is for Apple to manufacture their own glasses.
[01:10:59] So [01:11:00] this is like major shit. I mean, consumer devices, the next three years is gonna be crazy, 'cause all this stuff is gonna come online from all these different companies. And I'm just kind of thinking out loud here, but look at what happened when the iPhone came into the world in 2007 and how it just changed behaviors.
[01:11:21] We could very much be heading toward that next generation of the user interface. And I don't know which one wins, or how popular the glasses become, but there's enough critical mass now and enough companies working on it where you could definitely see, in the remainder of the decade, a shift where this stuff becomes mainstream.
[01:11:38] Mike Kaput: Yeah, I was gonna say, we really haven't had a major consumer hardware launch outside of the iPhone. And you just kind of think about where this is all going, knowing what we know about AI. I keep thinking that the most valuable things you could do to really level up the AI we have today, outside of the improvements in the frontier models, [01:12:00] have to do with either how you engage with AI or what data it can take in, right?
[01:12:05] So the real level up for me is, not to say it's right or wrong, but if you suddenly have all this environmental data of what my day looks and sounds like, voice-first interaction, all this stuff, it can dramatically 10 or a hundred x the value you get out of AI, if it's done right.
[01:12:22] Paul Roetzer: Yeah. And again, I didn't really have time to think ahead on this one too much, but it is this, like, ambient awareness.
[01:12:28] Mike Kaput: Yeah.
[01:12:29] Paul Roetzer: And if you think about it, the closest parallel at the moment I can come up with is when you're driving in a Tesla. There are seven outward-facing cameras on a Tesla.
[01:12:40] Mike Kaput: Yeah.
[01:12:40] Paul Roetzer: That's observing everything around you. And increasingly it'll be able to, through Grok, be proactive about processing that information to like surface it for the human and maybe see stuff you're not seeing, things like that.
[01:12:58] 'cause right now it's very much [01:13:00] observation only, and it doesn't actually change the behavior of the human. It changes the behavior of the car; it might steer it differently, things like that. But yeah, the glasses have always creeped me out. Always have. Yeah. And I actually had an instance last week on a plane where I was working on something, and I felt like the person next to me was looking over at my screen.
[01:13:23] Mike Kaput: Hmm.
[01:13:24] Paul Roetzer: Which isn't abnormal. And it's like, fine, I would never work on something on the plane. I wouldn't like if I would care that much if someone looked over. But I actually found myself doing a double take to see if he was wearing meta glasses or not because there was that point where I was like, why would I don't want anybody like recording what I'm working on.
[01:13:39] And I started realizing the awareness you're gonna have to have around people walking into rooms. Like last week, Zuckerberg was, you know, on trial. Well, maybe it was a trial for some creepy thing they did related to, I dunno, kids on their platform or something.
[01:13:57] And Meta executives walked into the [01:14:00] courtroom wearing Meta glasses, and the judge berated them.
[01:14:02] Mike Kaput: For real.
[01:14:02] Paul Roetzer: Yeah. So you're getting to this point in society where there's this ambient stuff, and people are recording things on pendants that you might not even know they're wearing. I hate that stuff.
[01:14:12] Like, I really do not look forward to that future where you have to assume everyone is wearing some device that's recording you. And I know we'll get this: oh, it's already happening. Like, no, it's not. Not at this level. Yes, if you're in Silicon Valley, hanging out at hackathons,
[01:14:28] Sure. Like I assume people are probably recording everything, but when you're just like living your life,
[01:14:34] Mike Kaput: right.
[01:14:35] Paul Roetzer: going to business meetings and sporting events, like I don't assume I'm being recorded and they have name recognition like that, they recognize my face be like, that's like a whole nother degree of weirdness to me that I'm not, I'm not personally ready for.
[01:14:49] Right. But I know it's coming. Right.
[01:14:50] Mike Kaput: Yep.
[01:14:51] AI in Journalism Controversy
[01:14:51] Mike Kaput: All right, so next up, the editor of one of our hometown media outlets, cleveland.com and The Plain Dealer, actually published a column this week that is getting some pull outside of our local town, arguing that journalism schools have become actively harmful to their students.
[01:15:08] So this is all related to AI. In this editorial, editor Chris Quinn related the story of how a recent college graduate withdrew from a reporting job with his paper because of how the newsroom uses AI. Specifically, cleveland.com has been pretty open about how they use AI extensively in their work as a media company, as a journalism outlet.
[01:15:30] And this student, apparently, according to Quinn, had been told repeatedly by journalism school professors that AI is bad. So the student actually said, hey, I don't really wanna work with you guys because of the AI. And Quinn wrote about this perspective, quote: that's backwards, and it seriously handicaps them as they begin their careers.
[01:15:48] I've written extensively about how we use AI to do more and better work. It has quickly become critical to everything we do and our success. He's outlined how his reporters now do nothing except reporting, because they're augmented [01:16:00] by AI in a variety of ways, and AI takes their reported material and turns it into drafts.
[01:16:07] The editors and reporters review. Quinn argues that by removing writing from the workload, reporters gain an extra workday each week. They now spend that time on the street doing in-person interviews, meeting sources, and he claims the approach allowed the paper to expand coverage into other counties and beats that it couldn't afford to staff with full teams.
[01:16:28] So he basically uses this all as an argument that journalism degrees are literally no longer necessary. He points out that before Watergate broke, most journalists didn't have one. He just says, newsrooms need smart people who know how to get information and build trust face-to-face. And then the rest is kind of augmented.
[01:16:45] By ai. So Quinn got a ton of pushback on this. Paul, there's a lot of controversy around this. I know you followed plenty of the people we know commenting on this. What was your take watching this all unfold?
[01:16:57] Paul Roetzer: Yeah, this one definitely hits close to home. So [01:17:00] it is a Cleveland, you know, organization. I am a journalism school grad.
Mike was a journalist in his past life. Mike, you spoke at the Press Club last week, like you did a virtual event for the Press Club, right? I did. Like, as all this is unfolding.
[01:17:14] Mike Kaput: Yes.
[01:17:14] Paul Roetzer: So I think that, probably could have, should have maybe done a main topic on this one. I , I probably have a lot of thoughts, but I honestly found myself really struggling to put some notes together this morning as I was prepping for this one.
Yeah, maybe because it's so raw, and I actually don't have the answers, and it's something I've thought a lot about. I mean, I was guiding the journalism school where I graduated from, back in like 2018, 2019, to get ready for this: that there was gonna come a day when AI was gonna be able to write at the human level, and that we should start thinking about what that means for the future of journalism and the school and things like that.
[01:17:53] So this is something I've thought a lot about. In our 2022 book, I wrote a section on what happens when AI can write like [01:18:00] humans. So that's how I kind of come at this. I'll highlight a couple of the excerpts from this, Mike, in addition to one you said. So he said, college journalism programs are failing to prepare students for the workforce.
[01:18:14] Like many students we've spoken to, this one had been told AI was bad, which we hear all the time. They said they fact check everything. Editors review it, reporters get the final say. Humans, not AI, control every step. But by removing writing from reporters' workloads, they've effectively freed up that extra day, like you said, and they're spending it doing human things: interviews, meeting sources, things like that.
[01:18:34] Yeah. But then they touched on this idea that by walking away, they're entering the worst journalism market in years. Hundreds of veteran journalists will compete for the few openings. This is facts. Someone out of school stands little chance, especially if they aren't willing to use the tools that are part of the journalism profession today.
[01:18:54] Journalism programs are decades behind. Many graduating students have unrealistic expectations. They imagine [01:19:00] themselves as long form magazine storytellers chasing a romanticized version of journalism that largely never existed. Hmm. AI is not bad for the newsrooms. It's the future of them. Anyone entering this field should be immersing themselves in it.
[01:19:13] I can't argue with those things. Fortunately for those of us who know exactly what skills we need in applicants, AI has altered the landscape so dramatically that we don't need journalism school grads. Like, that one hurts a little bit. Yeah. I think he might have actually gone a little too far with that one.
[01:19:32] What we need now: AI can help draft stories, but it can't sit across from someone, make eye contact, build trust. The core skills today differ sharply from even a decade ago. If you're a student considering journalism, I would skip that degree.
[01:19:47] Study political science, learn technology, understand how government business and nonprofits work. Take communications, law and ethics as electives. Skip much of the rest, man. Like,
[01:19:57] Mike Kaput: yeah,
[01:19:58] Paul Roetzer: I don't know. So [01:20:00] I would actually love to not have a debate with them. I would love to actually sit down and just have a conversation about these perspectives and how you arrive at them.
[01:20:08] I mean, obviously, I'm like, you gotta be AI forward. Like, I think the school that student graduated from did them a disservice. And I hear this all the time, that students are told it's plagiarism and cheating and not to use the tools. And if we have educators listening to this podcast,
[01:20:26] I will tell you point blank: you are doing a disservice to their future. So as extreme as some of these perspectives may seem, if you are still telling your students it's plagiarism and cheating, and you're not encouraging the use of AI in your classroom and teaching responsible use, you are doing a disservice to their employability once they get out of school.
[01:20:45] So we have to accept the reality of the job market. We have to accept where this is going. You have to understand this is how organizations are thinking for better, for worse. It is the reality of how they're thinking. And he's willing to tell you point blank, it's how they're thinking. And then there's the side to me [01:21:00] that's like, I don't agree that journalism doesn't matter and that you shouldn't go to journalism school.
[01:21:04] I think writing is the most important skill in business, and I think it always has been. And even though the AI can write for you, it doesn't mean you shouldn't still be a strong writer who can communicate thoughts in a cohesive way and build outlines and go through a rational thinking process to arrive at a decision point and know if the output's any good.
[01:21:22] Like, I don't know how you get there without writing. So I feel like some of this is an extreme, but there's some of it I can't debate. And I do think journalism schools are a decade behind, and that's a problem. But it's not just journalism schools, it's every school. Every college is the same way.
[01:21:40] They're all half a decade behind. They're all stuck in the pre-gen-AI era, and it's hard to move fast enough to keep up.
[01:21:49] Mike Kaput: Yeah. One other thing I would just add that kind of became apparent to me throughout this whole debate, and there are so many really interesting perspectives on it, many of which I agree or disagree with.
[01:21:59] I just [01:22:00] love the variety of debate happening. But you know, as a journalist, one of your core jobs is to be critical, right? Of what people tell you, of power, of headlines. And I would actually argue, and this can be hard sometimes, whether you're an old school journalist or a new school journalist, all of this can be very threatening
[01:22:17] and challenging. But I think you probably also have to apply that criticism to the narratives you are deciding to embrace about AI, whether it's energy usage, that it stops you from thinking, that it's inherently bad. I think you should be equally critical of everyone being like, rah, rah, AI all the time too.
[01:22:34] But I'd challenge you to apply that critical lens to some of the things you take for granted. Updating your priors, so to speak, might be a useful exercise here too. I think these kinds of editorials, for better or for worse, are just bringing that
[01:22:47] Paul Roetzer: Yeah.
[01:22:47] Mike Kaput: Debate to the forefront.
[01:22:48] Paul Roetzer: And I do think that that's, you know, part of why our audience, hopefully listens to us each week is I think Mike and I just inherently take a very [01:23:00] journalistic approach to what we do.
[01:23:01] We try and take a balanced approach. We try our best to think of the audience and write to the audience and talk to the audience about where they are. And sure, some people are natural storytellers who didn't go through training to do it. But I couldn't do the podcast the way we do it
[01:23:19] had I not gone through journalism school. And I'm not a journalist; I never spent a day as a journalist. I just went through journalism school and learned how to write and storytell and form ideas and, you know, think critically about things. And so, yeah, I do have a hard time, and I think that what they're thinking about at cleveland.com is maybe halfway right.
And then there's a part of me that thinks it also is very nearsighted and could end up leading to its demise. Like, I think you go to this extreme of all in on AI and take the human so much out of it. Yeah. I was thinking about this [01:24:00] as I was driving in: why do I do what I do?
[01:24:02] Like, I was actually listening to a podcast about a coder who's using AI to write a hundred percent of the code. And he's like, but I wasn't doing it to write the code, I was doing it to create things that change the world. Right? And so, if part of the reason why you write, why you're a journalist, is
[01:24:17] to tell the stories, then I could see: well, if the AI is good at telling the story, you gotta do this other part of it. But your whole goal is to tell the stories to change the world, right? Then you accept the AI as a part of it. But if part of the reason why you write is because you love writing and you love the creative process and thinking, and that is why you went to journalism school, then it's a really hard thing to accept that you're not gonna do that when you get out.
[01:24:39] Mike Kaput: Yep.
[01:24:40] Paul Roetzer: There's no clean answers to this. That's why I love the conversation. Like you said, it's just good to have the debate.
[01:24:46] Mike Kaput: Yeah. We need to figure out how to have a further conversation with Chris, perhaps, about all this.
[01:24:51] Paul Roetzer: Yeah.
[01:24:51] Mike Kaput: Yeah.
[01:24:51] Paul Roetzer: It'd be good. Maybe that's a good MAICON session. We should invite him to MAICON. Yeah.
[01:24:54] Mike Kaput: Oh, that'd be a great idea. Yeah.
[01:24:55] Paul Roetzer: That'd actually be a really good session.
[01:24:56] Mike Kaput: Yeah.
[01:24:56] Paul Roetzer: We should do that. Hey, Chris, if anybody at cleveland.com is listening, yeah, [01:25:00] have the team reach out to us. I would love to, yeah, maybe explore that.
[01:25:05] Meta Patents AI for the Dead
[01:25:05] Mike Kaput: All right, just a couple other quick topics here as we wrap up this week. So another item on meta.
[01:25:11] Meta has been granted a US patent for an AI system designed to let users keep posting on social media after they die. That is not a typo. The patent was filed in November 2023, granted just late last year, and describes training a language model on a deceased user's posts, comments, chats, voice messages, and likes.
[01:25:33] The system would then respond to newsfeeds, send direct messages, leave comments, make posts, and potentially conduct simulated audio or video calls, all across Facebook, Instagram, and Threads. The primary inventor on the patent is listed as Andrew Bosworth, who is Meta's CTO. A Meta spokesperson told Business Insider
[01:25:53] the company has no plans to act on the patent. Paul, I hate to be skeptical, but I've like never [01:26:00] thought something was not true so much in my life, based on Meta's track record. Sorry.
[01:26:06] Paul Roetzer: I'm gonna do something I've never done on the show: I'm actually gonna take a pass on even providing commentary on this.
[01:26:11] I love it. I just can't even go there right now.
[01:26:14] Mike Kaput: The only little thing I will add here is I looked up some data on this, and I was just curious, like, okay, what's the incentive of this for Meta? Right? You just gotta wonder who benefits. So here's an interesting stat from a business perspective.
[01:26:28] Meta is facing a looming demographic reality. Researchers predict that by 2050, the number of dead users on Facebook will outnumber the living. So this may be an explanation, but let's move on.
[01:26:42] Paul Roetzer: I just can't, I can't do it. I can't go there right now.
[01:26:45] Mike Kaput: Let's move on to better and brighter things, which are a range of AI product and funding updates, which I'm gonna breeze through very quickly.
[01:26:53] Paul, feel free to chime in here and then we'll wrap up for the week.
[01:26:56] Paul Roetzer: Sounds good.
[01:26:56] AI Product and Funding Updates
[01:26:56] Mike Kaput: Alright, so first up, Anthropic has raised $30 billion [01:27:00] in Series G funding at a $380 billion valuation. It's the second largest venture deal of all time. The company now generates $14 billion in annualized revenue, has apparently crossed profitability, and counts eight out of the Fortune 10 as customers.
[01:27:17] They also launched Claude in PowerPoint, a purpose-built integration for building and editing presentations directly inside the application. At the same time, a number of Google announcements: Google released Gemini 3 Deep Think, a specialized reasoning mode for science and engineering that has solved previously unsolved research problems and set new benchmarks.
[01:27:39] Google has also released Gemini 3.1 Pro, which now leads Artificial Analysis' Intelligence Index and is designed for tasks where extended multi-step reasoning is required. Google launched Lyria 3, a music generation model inside Gemini that produces tracks from text prompts, though the company has not disclosed what it was trained on.[01:28:00]
[01:28:00] Paul Roetzer: That one, I actually used on my kids. Yeah, it was hilarious. I've never seen my kids run out of the room faster. My son wouldn't get off his computer, like he was doing Minecraft or something before bed. And so I was like, make a song about my son Balen telling him to get off of Minecraft, him and his buddy, and that they need to go to bed.
[01:28:17] And it spins up this 10, 12 second song that's awesome, in like 10 seconds. And then I just put it on blast and walked into the room. Oh my god. I did it for my daughter too. Cringe on both houses. You wanna have some fun with your kids? That's a fun way to do it.
[01:28:32] Mike Kaput: Incredible.
[01:28:35] OpenClaw, acquired by Meta for over $2 billion, has launched AI agents directly inside Telegram, the messaging service. Users scan a QR code and can run multi-step tasks through the chat interface. xAI released Grok 4.2 in public beta. Elon Musk claims it will be an order of magnitude smarter and faster than Grok 4.
[01:28:54] Cloudflare introduced something called Markdown for Agents, which automatically converts [01:29:00] webpages to markdown when AI agents request them. PolyAI, a startup, has raised $200 million from NVIDIA and others for enterprise voice agents. ElevenLabs launched AI agents for customer support across more than 70 languages. And last but not least, the Financial Times has reported that Perplexity AI has abandoned its ad-based revenue model entirely, with executives claiming users will not trust ads embedded in AI generated answers.
[01:29:31] Or
[01:29:31] Paul Roetzer: they don't have enough users to justify
[01:29:33] Mike Kaput: or they don't. Right. Right.
[01:29:35] Paul Roetzer: So, real quick note on these. When Mike does these end-of-show AI product and funding updates, don't assume that just because we're lumping them into one update, they're not significant. I could sit here and talk for 10 minutes about why every one of these things matters.
[01:29:50] So if you're really wanting to stay on the edge of this stuff, take the time and think about these announcements he's doing at the end, and go do a little homework, because [01:30:00] they're all significant. Like the Anthropic one: who gets fired at Microsoft after Anthropic built better AI into Excel and PowerPoint than you did?
[01:30:08] Right? Like, what the hell? It just fixes the thing we've been begging for three years to get from Microsoft. Claude shows up and does it better than they do. Like, my God. Yeah.
[01:30:18] Mike Kaput: Yeah. Each of these is indicative of such a larger trend too, right? Yeah. We talked about that in the main topic, the SaaS apocalypse.
[01:30:25] Anthropic releases not even a new tool, but a feature, and it sends markets tumbling. Right? Or a
[01:30:31] Paul Roetzer: plugin for security, like last week, and security stocks crash. Yeah. It is, again, I mean, and I'm not saying we're going to do this, we're definitely not gonna do this, but we could go to three hours on the show every week
[01:30:43] 'cause every one of these could have easily been a rapid fire worthy of conversation. So yeah, just don't throw away these last five to seven minutes when Mike goes through these really rapid updates, because they're all relevant to the story.
[01:30:58] Mike Kaput: A hundred percent. [01:31:00] So Paul, as we wrap up here, one quick note on this week's AI pulse survey.
[01:31:04] Again, as a reminder: smarterx.ai/pulse. This week we're gonna ask two questions about the stuff we talked about. First up, a question about how Microsoft AI CEO Mustafa Suleyman says most white collar tasks will be fully automated by AI within 12 to 18 months. How realistic do you find that timeline?
[01:31:22] And also, where do you land on AI generated video using real people's likenesses? I'd be interested to see the audience's commentary on these two news topics.
[01:31:35] Paul Roetzer: Packed week, man,
[01:31:36] Mike Kaput: packed week. That's crazy. Thank you again for breaking it all down, man. I mean, that's, that is it. That was a beast.
[01:31:41] Paul Roetzer: Yeah. All right. So, AI for Departments webinars are this week, join us. We also are gonna have episode 199 on Thursday. We're doing a special AI Answers edition this Thursday, so you get two episodes this week. I'll record that tomorrow morning. And then next week is episode [01:32:00] 200. So if you're an AI Mastery member, or wanna become one before next week, you can join us for the live recording on Monday, March 2nd, and then that episode will drop on the third.
[01:32:07] So lots going on. I'm trying to take it one day at a time this week. I'm trying to not even look too far ahead. So thanks as always for joining us. We'll talk to you again either on Tuesday, Wednesday, Thursday, or next Monday, I guess. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.ai to continue your AI learning journey, and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community.
[01:32:45] Until next time, stay curious and explore ai.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.
