Something fundamental shifted in AI this quarter, and it wasn’t just the new model releases.
Paul and Mike step back from the weekly news cycle to rank the 10 trends that defined the last three months: a model release frenzy that saw state-of-the-art change hands multiple times, OpenClaw's breakout into mainstream consciousness, a SaaSpocalypse that erased hundreds of billions in market value, AI-driven layoffs starting to go mainstream, and a vibe shift so significant that AGI is no longer just an insider conversation.
Listen or watch below and see the show notes and the transcript that follow.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
00:05:14 — The Model Release Frenzy
- Ep. 189 of The Artificial Intelligence Show
- 00:05:41 — How Close Are We to AGI? (Claude Opus 4.5 capabilities)
- Ep. 196 of The Artificial Intelligence Show
- 00:50:55 — Claude Opus 4.6
- 00:56:00 — GPT-5.3 Codex
- 00:59:10 — OpenAI Frontier
- Ep. 198 of The Artificial Intelligence Show
- 00:55:07 — Claude Sonnet 4.6
- 01:26:56 — AI Product and Funding Updates (Gemini 3 Deep Think, Gemini 3.1 Pro, Grok 4.2, Google Lyria 3)
- Ep. 201 of The Artificial Intelligence Show
- 00:54:54 — GPT-5.4
- 01:00:55 — The Move 37 Moment for Math
- Ep. 205 of The Artificial Intelligence Show
- 01:30:47 — AI Product and Funding Updates (GPT-5.4 mini and nano models)
00:11:20 — Big AI is Big Lobbying
- Ep. 200 of The Artificial Intelligence Show
- 01:10:54 — Politics of Data Centers
- Ep. 201 of The Artificial Intelligence Show
- 01:13:35 — AI Art Can't Be Copyrighted (Supreme Court Declines Review)
- Ep. 203 of The Artificial Intelligence Show
- 01:19:49 — AI Politics Update (Sanders Data Center Moratorium, Trump AI Standards)
- Ep. 205 of The Artificial Intelligence Show
- 00:29:07 — New Polling on AI and Trump National AI Framework
- Ep. 207 of The Artificial Intelligence Show
- Sanders and AOC Push Data Center Moratorium
- Trump's Tech Council (Zuckerberg, Andreessen, Huang)
- Congress Could Pass AI Standard in Months
- OpenAI's Chief Futurist vs. His Own Boss
00:16:15 — Anthropic vs. the U.S. Government
- Ep. 200 of The Artificial Intelligence Show
- 00:07:08 — Anthropic vs. US Government (Pentagon Demands Unrestricted Claude Access, Government Blacklists)
- Ep. 201 of The Artificial Intelligence Show
- 00:07:00 — Anthropic vs. US Government Round 2 (Formal Supply Chain Risk Designation)
- Ep. 203 of The Artificial Intelligence Show
- 00:07:48 — Anthropic vs. Pentagon Round 3 (Federal Lawsuits, Amicus Briefs from Microsoft/OpenAI/Google)
- Ep. 205 of The Artificial Intelligence Show
- 01:10:01 — Anthropic vs. Pentagon Continues
- Ep. 207 of The Artificial Intelligence Show
- Anthropic Granted Preliminary Injunction (Judge Calls Ban a "Punishment Attempt")
00:22:37 — The Rise of OpenClaw
- Ep. 195 of The Artificial Intelligence Show
- 00:05:27 — Moltbot and Moltbook Take the World by Storm (OpenClaw Goes Viral)
- Ep. 198 of The Artificial Intelligence Show
- 01:00:51 — OpenClaw Creator Goes to OpenAI (Peter Steinberger Joins for Personal Agents)
- Ep. 201 of The Artificial Intelligence Show
- 01:10:03 — NVIDIA CEO Calls OpenClaw "Most Important Software Release Ever"
- Ep. 203 of The Artificial Intelligence Show
- 01:34:47 — Meta Acquires Moltbook
- From skeptic to true believer: How OpenClaw changed my life - Claire Vo
- #491 – OpenClaw: The Viral AI Agent that Broke the Internet, Peter Steinberger - Lex Fridman Podcast
00:28:32 — Enterprise AI Adoption: The People Problem
- Ep. 189 of The Artificial Intelligence Show
- 00:31:48 — AI Change Management
- Ep. 193 of The Artificial Intelligence Show
- 01:17:39 — New Survey Shows Big Disconnect Between Employees and Leaders on AI
- Ep. 195 of The Artificial Intelligence Show
- 00:25:56 — Marketing AI Council Report
- 01:09:13 — New Gallup Research on AI Usage in the Workplace
- Ep. 197 of The Artificial Intelligence Show
- 00:46:37 — Academy Success Score
- Ep. 201 of The Artificial Intelligence Show
- 00:49:19 — Barriers to Enterprise AI Adoption
- Ep. 205 of The Artificial Intelligence Show
- 00:45:46 — Company Transformation with AI (SmarterX Offsite Recap)
- LinkedIn Post from Paul Roetzer
00:36:22 — SaaSpocalypse
- Ep. 193 of The Artificial Intelligence Show
- 01:28:29 — How Do Credit Pricing Models Work?
- Ep. 196 of The Artificial Intelligence Show
- 00:06:24 — SaaS Apocalypse ($300B+ Stock Decline)
- Ep. 201 of The Artificial Intelligence Show
- 00:35:30 — Services as the New Software (Sequoia Capital Analysis)
00:43:11 — Labs Pivot to AI Agents
- Ep. 191 of The Artificial Intelligence Show
- 00:26:07 — Claude Cowork (Desktop Agent for Non-Technical Task Automation)
- Ep. 196 of The Artificial Intelligence Show
- 00:59:10 — OpenAI Frontier (Agent Product)
- 01:14:52 — Agentic CRMs
- Ep. 201 of The Artificial Intelligence Show
- 01:19:21 — Microsoft Copilot Cowork (Autonomous Work Product)
- Ep. 203 of The Artificial Intelligence Show
- 01:30:51 — Andrej Karpathy's Autoresearch Agent
- Ep. 205 of The Artificial Intelligence Show
- 00:05:50 — AI Labs Refocus on Agents and Enterprise
- 00:59:52 — Nadella Takes Over Microsoft Copilot
- Ep. 207 of The Artificial Intelligence Show
- AI Agent Nightmares
- Entire Claude Code CLI source code leaks thanks to exposed map file - Ars Technica
00:50:58 — AI-Driven Layoffs Go Mainstream
- Ep. 189 of The Artificial Intelligence Show
- 00:41:59 — Khan Academy Creator Calls for Job Displacement Fund
- 00:47:30 — Jevons Paradox in AI
- Ep. 193 of The Artificial Intelligence Show
- 00:21:26 — Amazon Layoffs and the "Great Divergence" (14,000 Corporate Cuts)
- 01:24:39 — xAI Wants to Automate White-Collar Workers
- Ep. 196 of The Artificial Intelligence Show
- 01:11:01 — Latest on AI Impact on Jobs
- Ep. 198 of The Artificial Intelligence Show
- 00:08:48 — Microsoft AI CEO Predicts White Collar Work Automated in 12–18 Months
- Ep. 200 of The Artificial Intelligence Show
- 00:59:38 — Block AI Job Cuts (~4,000 Employees)
- Ep. 201 of The Artificial Intelligence Show
- 00:19:43 — Anthropic Analyzes AI Job Impact (94% of Knowledge Tasks Theoretically)
- Ep. 203 of The Artificial Intelligence Show
- 00:46:18 — Atlassian Layoffs and Job Loss Dashboard (1,600 Employees)
- Ep. 207 of The Artificial Intelligence Show
00:55:54 — We’re Seeing More "Move 37" Moments
- Ep. 196 of The Artificial Intelligence Show
- 00:33:56 — The Move 37 Moment for Everyone
- Ep. 200 of The Artificial Intelligence Show
- 00:51:19 — Interview with the Head of Claude Code (Coding "Effectively Solved")
- Ep. 201 of The Artificial Intelligence Show
- 01:00:55 — The Move 37 Moment for Math
- Ep. 203 of The Artificial Intelligence Show
- 00:30:02 — New York Times "AI Writing Quality" Quiz (86,000+ Readers, 54% Preferred AI)
- The Move 37 Moment for Knowledge Workers - Paul Roetzer, MAICON 2025
01:02:38 — The Vibe Shift
- Ep. 189 of The Artificial Intelligence Show
- 00:05:41 — How Close Are We to AGI? (Claude Opus 4.5, METR Time Horizon)
- Ep. 190 of The Artificial Intelligence Show
- 00:21:01 — Audience Reactions to AGI Discussion
- 00:29:33 — Real World AI Use Cases for Claude Code
- Ep. 193 of The Artificial Intelligence Show
- 00:05:10 — AGI Comes to Davos (Amodei: 2026–2027, Hassabis: 50% by 2030)
- 00:58:55 — Google DeepMind Is Hiring a "Chief AGI Economist"
- Ep. 195 of The Artificial Intelligence Show
- 00:40:49 — Dario Amodei Publishes "The Adolescence of Technology"
- 00:59:05 — METR Releases New AI Time Horizon Estimates
- Ep. 197 of The Artificial Intelligence Show
- 00:07:58 — Something Big Is Happening (Matt Shumer's Viral Essay, 72M+ Views)
- 00:27:06 — Claude Safety Risks (Sabotage Risk Report, ASL-4 Thresholds)
- Ep. 198 of The Artificial Intelligence Show
- 00:08:48 — Microsoft AI CEO Predicts White Collar Work Automated in 12–18 Months
- 00:20:42 — AI Productivity Evidence (Brynjolfsson: 2.7% Growth)
- 00:33:23 — Dario Amodei on Dwarkesh
- Ep. 205 of The Artificial Intelligence Show
- 01:14:42 — DeepMind's New AGI Scorecard
- 01:18:40 — What 81,000 People Want from AI (Anthropic Survey)
This episode is brought to you by AI Academy by SmarterX.
AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. Learn more here.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: The organizations that are really struggling here often lack CEOs who have presented a clear vision for the future of work in their organization and what is required and expected of their employees in that future of work. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.
[00:00:22] My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all.
[00:00:50] Welcome to episode 208 of the Artificial Intelligence Show. I'm your host Paul Roetzer on with my co-host Mike. We have a special edition of the [00:01:00] weekly podcast episode this week. I'm on vacation, so when you're listening to this, I will be, yeah, I'll be out of the office and spending some time with my family.
[00:01:10] And so rather than skipping a week, we decided let's do a Q1 trends review. So let's take a look back at, I dunno how many episodes, I guess about 12 episodes, Mike, right? Yeah. That we did in Q1, 12 of these weeklies. And across those 12 weeklies, you know, we cover three main topics each week, and then probably seven to 10 rapid fire items.
[00:01:32] So we're talking about what, like 150 ish topics, Mike? That's about right.
[00:01:36] Mike Kaput: I'd say easily. Yeah.
[00:01:37] Paul Roetzer: Yeah. So yeah, we've covered 150 different clips. And by the way, our main segments are all clipped to YouTube. So if you ever go to our YouTube channel, you can actually go and drill into specific segments.
[00:01:49] But yeah, so Mike curated the 150 or so topics that we have had in Q1 of 2026 and broke it down into 10 [00:02:00] key trends that we're gonna recap on this episode. So again, rather than having nothing this week while I was away, we figured let's record this. So we are recording on Tuesday, March 31st. This will be dropping on, I don't know what that date is, Mike.
[00:02:12] Mike Kaput: It'll be April 7th. Okay. Tuesday, April 7th.
[00:02:16] Paul Roetzer: There you go. And then we will be back with our regular weekly episode on April 14th. So I'll be back from my trip and we'll be recording our regular weekly episode that week. So yeah, that's what we're doing today. Something special.
[00:02:31] We normally do these trends briefings as part of our AI Academy by SmarterX Mastery membership program. We're gonna keep doing that, but we're thinking there might be an evolution to where the Mastery members actually get to, like, participate and join these live. We tried that with episode 200
[00:02:47] of our podcast, where we invited Mastery members to attend live and ask questions. So we're kind of working through the evolution, but I'm thinking that might be a cool direction to go with these, where we record these quarterly trend briefings actually for [00:03:00] the podcast, but then we invite our Mastery members to join us in a live audience when we do those.
[00:03:04] So, you know, more news to come on that. But for right now, we thought, let's just get this out there. These quarterly trends are a great way for us to take a look back, kind of a retrospective of what's happened over the previous three months. And it is a lot, as Mike can attest, having in the last 36 hours pulled this all together.
[00:03:20] But yeah, there's just a ton to talk about. So we're gonna go through just these 10 items. I'm gonna try not to, like, over-narrate these. I think we're just gonna kind of get the information to you, give you some context, and if you're new to the podcast, it's a great way to just sort of catch up with what's been going on in the last three months.
[00:03:36] Okay. This episode is brought to us by AI Academy by SmarterX, which helps individuals and businesses accelerate their AI literacy and transformation through personalized learning journeys and an AI-powered learning platform. New educational content is added weekly, so you always stay up to date with the latest AI trends and technologies.
[00:03:56] The AI for Departments collection features five courses [00:04:00] and course series and certificates that are designed to jumpstart AI understanding and adoption. So our AI Mastery members have access to all of these on demand right now, and you can also buy them individually as course series. But we have AI for marketing, sales, customer success, HR, finance.
[00:04:15] And Mike, if I'm not mistaken, you wrapped up or are wrapping up operations this week?
[00:04:19] Mike Kaput: Yeah, we're wrapping it up this week.
[00:04:21] Paul Roetzer: Alright, so operations coming soon. So we have five already on demand. The sixth is coming very soon. So these series are an ideal launchpad for organizations that want to level up their teams and accelerate AI adoption and impact.
[00:04:34] As I mentioned, individual and business account plans are available now, or you can buy single courses and series for one-time fees. Visit academy.smarterx.ai to learn more. Okay, Mike, 10 trends from Q1 2026. I can tell already, some of this stuff is gonna feel like it happened a year ago.
[00:04:56] It didn't. This all happened in the last three months. [00:05:00]
[00:05:00] Mike Kaput: Yeah. So Paul, the way we typically do this is we're kind of going to count down from 10. So not to say that the earlier ones are least important, but they are in terms of this list. So we're trying to kind of stack rank these a little bit.
[00:05:14] Model Release Frenzy
[00:05:14] Mike Kaput: So first up, I'm gonna kind of tee up each trend, tell us a little bit about it, tie together a few things that have happened across episodes that we've covered, and then we're going to talk about it.
[00:05:25] So first up, the number 10 spot, counting down from 10: the model release frenzy. So Q1 2026 might be in the running for one of the more compressed periods of frontier model releases so far in AI. The title of state-of-the-art changed hands multiple times within weeks, and basically every major lab shipped something pretty significant in the last few months.
[00:05:52] So a huge one is that Anthropic released Claude Opus 4.6 in February. Anthropic's own reports and benchmarks [00:06:00] revealed that it has saturated most automated evaluations to the point where the company plans to discontinue them. Opus 4.6 was followed weeks later by Claude Sonnet 4.6, which approached Opus-class capabilities.
[00:06:13] Keep in mind, Sonnet is the smaller, less powerful model, and it took the lead on the GDPval benchmark. OpenAI countered with a couple of releases, including GPT-5.3 Codex, a coding-focused model that logged 500,000 app downloads in its first week. In March, GPT-5.4 arrived with Pro and Thinking versions, outperforming human professionals on economic benchmarks and setting a new record on the FrontierMath benchmark.
[00:06:42] OpenAI also shipped mini and nano variants of 5.4 later in the quarter. Not to mention, Google released Gemini 3 Deep Think, which hit state-of-the-art on the ARC-AGI-2 benchmark, as well as several others. That was followed quickly by Gemini 3.1 [00:07:00] Pro. xAI also dropped Grok 4.2 in the same window.
[00:07:05] And Paul, I mean, just reading through this, I'm sure there's probably a couple other, you know, non-US models as well, in terms of DeepSeek and some open source things. My gosh. It's not only not slowing down, it might be speeding up.
[00:07:19] Paul Roetzer: It sure seems like it. And we alluded to it on episode 207 that we think there's a couple more models coming very soon.
[00:07:27] It would not surprise me at all if we have a similar trajectory of launches in Q2. You know, I was thinking about this randomly yesterday, Mike. Like, I think back, you know, maybe last year, maybe two years ago, I was saying, I really wish that ChatGPT and Gemini would just do the model picking for you. Which they've done.
[00:07:46] They've got, like, the auto mode, or it defaults to whatever. But I have found, especially with some of the use cases we've shared on episodes lately, our own internal use cases, that which model it is is becoming extremely important. And I actually [00:08:00] like the ability to choose the models. And as I've alluded to, when I'm doing, like, a high value strategic project, or an app building project with no code that I'm working on, I will do it in, like, five or six different models.
[00:08:14] I'll test these models. And it just goes to, I don't think we have this as a topic, Mike, so I'll throw it out there: the other thing that's really pushing for me is something we've been talking a lot about internally, which is the idea of having your own evals to evaluate these models.
[00:08:30] And so what I mean by that is, you know, we talk about these evals that they have in the industry that are testing, like, the IQ of the models, testing them across, like, math and biology and these other areas. What you need to be thinking about as an individual, as a business leader, you know, more broadly as an organization, is: what are the evals you can put in place that allow you to know which of these models is best for your use case, and when you should care that another one launches?
[00:08:56] Yeah. So now, in a lot of enterprises, you're gonna be just stuck with, we're [00:09:00] just given Copilot. We don't even know what the underlying model is. Most people don't even know to ask what the underlying model is. We're just gonna use Copilot. But if you're in, like, an AI-native company like ours.
[00:09:10] We can use anything. Like, Mike and I each use Gemini, Claude, and ChatGPT probably daily, Mike, I would say at this point.
[00:09:18] Mike Kaput: Yeah. Yeah.
[00:09:18] Paul Roetzer: And so, you know, this challenge of which model is right for which use case becomes harder and harder. And so these custom evals are something we're gonna probably talk a lot more about in Q2.
[00:09:29] As I mentioned, you know, Mike, Taylor, and I in particular at SmarterX have been working on some ideas around this, of how to help organizations build these evals so they're super understandable to, like, a marketer, a salesperson, a CS person, an ops person. So we're gonna do a lot more around that. And as we talk about how fast these models are coming out, I think it's gonna become more and more important to be able to quickly assess: should I care about this model?
[00:09:56] Does it change any of my standard workflows? Is it better than what I was using [00:10:00] before for my high value use cases? Those are really important questions to be able to answer, and most people don't have a system yet to do that.
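The kind of use-case eval Paul describes can be sketched in a few lines. This is a minimal illustration, not a real harness: the model functions and checks below are hypothetical stand-ins. In practice, each entry in `models` would wrap an actual provider API call, and each check would encode a pass/fail criterion your team cares about.

```python
# Minimal sketch of a custom eval harness: score each model on your own
# use-case prompts, then rank them. Models here are placeholder functions.

def run_eval(models, cases):
    """Return models ranked by the fraction of use-case checks they pass.

    models: dict of name -> callable(prompt) -> str
    cases:  list of (prompt, check) where check(response) -> bool
    """
    scores = {}
    for name, model in models.items():
        passed = sum(1 for prompt, check in cases if check(model(prompt)))
        scores[name] = passed / len(cases)  # fraction of checks passed
    # Highest-scoring model first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Stand-in "models" for illustration only (not real APIs).
models = {
    "model_a": lambda p: "Subject: Q1 recap\nBody: ...",
    "model_b": lambda p: "here is an email",
}

# Each case is a prompt plus a simple check a marketer could read.
cases = [
    ("Draft a recap email with a subject line.",
     lambda r: "Subject:" in r),
    ("Draft a recap email with a body section.",
     lambda r: "Body:" in r),
]

ranking = run_eval(models, cases)
print(ranking)
```

When a new model launches, you add it to the dict, rerun, and see whether it beats your current daily driver on the checks that matter to you.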
[00:10:07] Mike Kaput: Yeah, and I've heard anecdotally from multiple people and read about others where it's so interesting to see people get exposed to some of the newer models.
[00:10:15] I mean, there's people that don't follow this stuff as closely as we all do, who are freaking out in a good way online, or have texted me out of the blue saying, wait a second, Claude can do what? Because they haven't been exposed as much to different models. So if you find yourself in that camp, where you haven't taken the most recent models for a spin outside of whatever is your daily driver, weekly driver, I'd highly recommend doing it.
[00:10:39] You might be really, really surprised.
[00:10:41] Paul Roetzer: Yeah. And I think in some of the upcoming trends here we'll touch more on this. But if you were just using, like, ChatGPT 5.2 or, you know, just good models, and you were unaware for three months that Claude Opus 4.6 existed, or, like, Sonnet 4.6, [00:11:00] and you had no idea, you've missed this leap forward in capabilities that we're seeing every day.
[00:11:06] Because you just weren't following along with these models. And sometimes it's just like incremental and it's not gonna make a big difference in your life. But sometimes we go through these three month periods where it's just like, wow, like model capabilities are dramatically better.
[00:11:20] Big AI is Big Lobbying
[00:11:20] Mike Kaput: All right, so number nine in terms of our quarterly trends, again, remember, counting down from 10: big AI is getting big into lobbying.
[00:11:30] So AI has been a first-tier political issue in Q1, no doubt, and for a little bit before that. But the story that's kind of started to capture the shift, as we start to get more into US midterms, is the sheer scale of money now flowing into AI-focused political operations. So there are actually three pro-AI political groups that are collectively spending nearly $300 million on US midterm ads, all of them pushing a deregulation and acceleration agenda.
[00:12:00] The largest new entrant here is Innovation Council Action, which has the blessing of David Sacks and the White House, and plans to spend over a hundred million dollars on the upcoming cycle. This group is led by a former White House deputy chief of staff under Trump, and has compiled a scorecard assessing how supportive lawmakers are of Trump's AI agenda to determine who they fund or oppose.
[00:12:24] Now, separately, Leading the Future is something we actually talked about in many episodes. It has raised $50 million from donors including OpenAI President Greg Brockman, Palantir co-founder Joe Lonsdale, and Marc Andreessen. Brockman alone has contributed 50 million to this super PAC, plus 25 million to a Trump super PAC, making him one of the largest individual donors to the current administration.
[00:12:47] And Meta has actually launched its own pro-AI super PAC effort, expected to spend around $65 million on state-level races. Now, on the other side of this, and we actually talked about this on 207, [00:13:00] Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez introduced an AI Data Center Moratorium Act to pause all new data center construction nationwide until Congress passes federal AI legislation with protections for workers, consumers, and the environment.
[00:13:18] So Paul, we are seeing, on the further left, some opposition to AI acceleration mounting. Whether or not that bill actually passes, it's actually quite unlikely, but my gosh, the amount of money being marshaled for pro-AI efforts appears to already be significant, and I expect we'll hear a few more announcements as we get to midterms.
[00:13:40] Paul Roetzer: Yeah, it's gonna become a major issue in the midterms. I'm increasingly convinced of that. The interesting part, though, as I've said numerous times on the podcast, is, and again, if you're new to the podcast, Mike and I do our very, very best to stay absolutely politically neutral here. It is literally just, like,
[00:13:57] These are the facts. This is what's happening on each side. [00:14:00] And so in that spirit, I'm not so convinced whether AI is a right- or a left-leaning issue at this point. Like, I think there's increasing murmurs that people on the Republican side are actually getting kind of annoyed with David Sacks' ultra pro-AI stance, because here's the reality.
[00:14:19] Jobs and energy affect everybody, regardless of who you vote for. So if you start losing tens of thousands or more jobs this year, it doesn't matter how you vote, you are not going to be a fan of AI. Yeah. So if the Republican Party is cast as the AI accelerationist party at all costs, and one of those costs is the jobs of your family and your friends, or you, then you can be swayed politically in a totally different direction.
[00:14:51] And so I feel like the Democrats at the moment are leaning very heavily into the data center side. Yeah. I think they're gonna push on [00:15:00] the job side, but I could totally see the Republicans actually finding messaging there too, especially on the job side. Yeah. You can't be anti-jobs. Like, no one is winning an election anti-jobs.
[00:15:14] So I don't know. It's just gonna be really intriguing, and that's why I'm not even convinced that some of these funds are just for Republicans. Like the ones we're talking about, I,
[00:15:24] Mike Kaput: right.
[00:15:25] Paul Roetzer: I think they're truly gonna fund whoever is, you know, pushing for their side. But yeah, it's gonna be really interesting.
[00:15:32] And we haven't seen much on the super PAC side for the Democrats when it comes to, like, anti-AI. 'Cause again, I don't think they're anti-AI. It's not really pro versus anti; it's what do we consider responsible AI, I think, is maybe where the distinction needs to be found. Yeah. And what they're trying to decide through polling is:
[00:15:53] Where is the line we have to draw to where we start to gain or lose votes on these issues? [00:16:00] And I think that's what's gonna be super intriguing in the coming months: how they start to message this. And I feel like both sides are gonna be very fluid in their messaging until they figure out what's actually gonna move the needle on votes.
[00:16:15] Anthropic v. the U.S. Government
[00:16:15] Mike Kaput: Alright, number eight, also quite focused on politics and AI, is Anthropic versus the US government. So this is basically the biggest ongoing or continuing story of Q1 so far, which began in February when Secretary of War Pete Hegseth issued an ultimatum demanding Anthropic grant the Pentagon full, unrestricted access to its Claude models, which they were already using, for every lawful purpose.
[00:16:41] Now, Anthropic kind of decided to draw a line in the sand and refused to remove its red lines against things like using Claude for mass domestic surveillance and fully autonomous weapons. Now, after some back and forth with the Department of War, Hegseth actually designated [00:17:00] Anthropic a supply chain risk. And as we've reported on multiple episodes, that same night, OpenAI announced that it had gone behind the back of everyone else and signed an agreement with the Pentagon.
[00:17:13] So in March, things continued and they escalated. The Pentagon formalized the supply chain risk designation, making Anthropic the first American company to receive it. Federal agencies, including Treasury, State, and HHS, began ending their use of Anthropic products. Ironically, Claude continued powering Palantir's Maven Smart System, which reportedly identified over a thousand targets in 24 hours recently during operations in Iran.
[00:17:42] Anthropic filed two federal lawsuits to block this designation, warning that hundreds of millions in expected 2026 revenue was at risk. Microsoft filed an aggressive amicus brief in support of Anthropic. Thirty-seven AI researchers and 22 former [00:18:00] military and intelligence leaders also filed their own supporting briefs.
[00:18:02] And that brought us up to this past week when Federal Judge Rita Lynn issued a preliminary injunction blocking the designation. She wrote that nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government.
[00:18:25] Pentagon CTO Emil Michael called this ruling a disgrace. The government has seven days to appeal from when that was released. So, Paul, we kind of got up to speed on this on episode 207. As of right now, this is still kind of an open question as to whether or not they will actually find a deal here. It sounds like, as we've talked about multiple times, people are trying to find an off-ramp, trying to strike some type of deal.
[00:18:50] Despite the rhetoric and despite ongoing military operations using Claude,
[00:18:56] Paul Roetzer: the story definitely has had some staying power on the podcast. I feel like we're [00:19:00] on three or four straight episodes where we've at least given some updates. Yeah. I don't know what else to add at this point. I do, you know, we're anxiously awaiting the appeal from the government, which I think is just gonna continuously delay things and give people time to negotiate something.
[00:19:15] There haven't been too many leaks the last, I don't know, five to seven days about back channel negotiations and things like that, which actually probably tells me that they're happening. Like, I think they're just trying to get this done and get it over with. I hope the Department of War has other, bigger things.
[00:19:33] Now, the interesting thing, Mike, is there was this issue with Anthropic, sort of like a spinoff of this story, I guess. What was it, episode 207? We talked about how they left all this upcoming model information, all this stuff, online. Yeah. And then as of today, March 31st, there's some, I've still been trying to verify exactly what's going on, but it looks like the Claude Code code base was published, [00:20:00] almost like somebody copied and pasted it internally and put it online.
[00:20:05] So, I don't know. I mean, there's this big story obviously related to the Department of War and what's going on there, but there's these spinoffs around, like, the security of this stuff. And Anthropic is just constantly in the news. And then, you know, in the process of all this, they are just shipping product.
[00:20:23] Like nobody I've ever seen.
[00:20:25] Mike Kaput: yeah,
[00:20:25] Paul Roetzer: I saw, I dunno, I think it was yesterday, something like 50-some releases they've pushed out in, like, the first quarter of the year. So Anthropic is just an infinitely fascinating company. We talked on episode 207 about kind of this battle between them and OpenAI, and the battle between them and the government.
[00:20:45] It's just, like, I don't know. Somebody commented on my LinkedIn post this morning, and I've been thinking about this: somebody's gotta turn this into, like, a Netflix series. Like, go back to 2014 with the acquisition of DeepMind. Start in [00:21:00] that moment where Genius Makers starts, basically.
[00:21:02] Yeah. And, like, everything that has happened across these labs, it is wild.
[00:21:08] Mike Kaput: You know, it strikes me with the Pentagon stuff, with even the security issues and the leaks, it just feels like in some ways, even with the features being shipped, which are great, things are almost moving too fast to, like, put the
[00:21:23] genie back in the bottle, in terms of, like, things can get really out of control really quick, and it seems like that's probably a function of how fast things are moving.
[00:21:32] Paul Roetzer: Yeah, I think we'll touch on this in one, it looks like number six maybe, we've got the enterprise AI adoption one. But honestly, I'm kind of coming around to the idea that the human friction is gonna end up being the saving grace of all of this.
[00:21:45] Like
[00:21:46] Mike Kaput: yeah.
[00:21:46] Paul Roetzer: Meaning the models are getting so good, so fast. We've got OpenClaw, which we're gonna talk about, we have all this stuff, and yet to do anything in an enterprise is so damn slow. And that might actually be the thing that, [00:22:00] like, gives us time to figure this all out. Because if every organization was able to move as fast as these frontier labs are moving, and as fast as what the models are enabling,
[00:22:11] then we would absolutely be completely unprepared and have no chance as a society. But the fact that most companies still have no clue what they're doing with AI, and can't even get, like, Copilot approved, or distributed once they have it approved, that might actually be a good thing. I don't know.
[00:22:29] That's kind of like what I'm starting to think about it.
[00:22:31] Mike Kaput: A silver lining.
[00:22:32] Paul Roetzer: Yeah.
[00:22:33] Mike Kaput: Well first, before we talk about that, and we will,
[00:22:37] The Rise of OpenClaw
[00:22:37] Mike Kaput: we gotta talk about number seven, which is the polar opposite of this, which is the rise of OpenClaw, which is an open source AI agent framework that allows autonomous agents to interact with each other, execute complex tasks without human oversight and even form communities.
[00:22:54] So this burst into public consciousness earlier in the year, and then was kind of [00:23:00] compounded by the release of a social network called Moltbook, which was built on OpenClaw and went viral. And what that was, was a social network expressly for AI agents. It had millions of OpenClaw agents creating their own communities, their own posts, their own comments, and they were all operating autonomously and engaging with one another.
[00:23:22] And you know, Andrej Karpathy actually called what was happening on Moltbook genuinely the most incredible sci-fi takeoff-adjacent thing he'd ever seen. Ethan Mollick mentioned that even though it might be a little overhyped how much these agents were actually forming their own kind of worlds, it did provide a visceral sense of how weird a takeoff scenario where agents are operating autonomously might look if one happened.
[00:23:47] Now, we also heard tons and tons of stories, some incredible, some horrifying, of how much control people gave OpenClaw over their computers. People were running entire businesses, jobs, functions with it. OpenClaw [00:24:00] was going rogue all over the place. But this stuff was important enough that in February, OpenClaw's creator, Peter Steinberger, joined OpenAI to work on personal agents.
[00:24:10] Even Jensen Huang, Nvidia's CEO, called OpenClaw the most important software release probably ever. And in March, Meta acquired Moltbook as part of its broader push into AI agents. So Paul, this really is showing that the age of AI agents appears to be starting. Not everyone's going to dive into OpenClaw.
[00:24:30] It's really, really out there on the frontier, but we're starting to see elements of agentic AI crop up everywhere, along with a lot of complications related to them.
[00:24:39] Paul Roetzer: Yeah, and you know, I've been watching it from the outside, the OpenClaw stuff, like you and I haven't gone in and like built these things yet, mainly because of the risk that's associated with them and the unknowns related to them.
but, like, just yesterday I was listening to Claire Vo [00:25:00] on Lenny's Podcast. It was incredible. And so she shared the story of going from skeptic to true believer, and how her first instance of building an OpenClaw agent deleted her personal family calendar. But then she kept giving it a chance, and now she's built, like, nine different agents through OpenClaw, where she's basically running her sales and executive assistant functions.
[00:25:20] And so you start to see the potential of this as the risk profiles start to come down, or the governance around them starts to be more possible, to where you could see this sort of thing being applied within organizations, and it really changes your perspective about the future. There was the one example that I shared with our team internally just yesterday, where Claire was talking about the SDR example in her company and how she's got, Sam I think the agent's name is.
[00:25:50] And it does all the outreach, it does the daily analysis, surfaces things for her, you know, writes the emails, all this stuff. And it's like, wow, it's just a [00:26:00] glimpse into the future once you sort of capture how that all works and how you govern it. Now, most enterprises aren't gonna be touching this stuff for a while, I would say, but it does feel like it's just gonna become incredibly important to understanding the future of work and what organizational charts look like.
[00:26:20] And so I think it's something people need to be paying attention to, if nothing else as a window into the near future, as Google and Microsoft and OpenAI and others start to figure out how to safely enable this. 'Cause OpenClaw still requires quite a bit of technical chops to get set up. Yeah.
Like I said, there's lots of risks. You're having to kind of keep an eye on all that, and it's not easy to do. But once you kind of break down those barriers in the next six to 12 months, and you can spin up an OpenClaw agent maybe as easily as you can spin up a new conversation or thread in ChatGPT now, it just starts to really change the dynamics of what work looks like.[00:27:00]
[00:27:00] Mike Kaput: Yeah, it feels a bit like the very earliest days of when ChatGPT came out, where, I'm not saying this stuff can do everything everyone says it can do, I'm not saying it's safe, it's so early, but if you're paying attention, you can start seeing, like back then with ChatGPT, it's like, okay, it's not doing exactly what I need it to do.
[00:27:20] It is still very rudimentary, but we see clear as day where this is going and once it gets there, it's going to change everything. And yeah, I think that's where we're at with that.
[00:27:32] Paul Roetzer: Yeah. And if you want to like understand it, I would go listen to that Lenny's podcast interview. And there's also a YouTube video of it where you can watch her demo some of these things.
[00:27:41] Mike Kaput: Yeah.
[00:27:41] Paul Roetzer: And I think it makes the whole topic very approachable, because it is a very abstract thing. You know, even listening to the interview with Peter, where he was talking with Lex Fridman about the creation of OpenClaw, your mind's just like, yeah, I don't really get it. Like, I'm not really following how exactly this gets set up, and [00:28:00] like, I don't use it.
[00:28:00] I know you're more comfortable working in a terminal, Mike. I've never worked in a terminal. I have, yeah, no idea how to do that stuff. And so it just feels unapproachable to me. And then you listen to her explain it, and you watch, and it's like, okay, I'm still not gonna personally set one up, but I totally understand now the what and the why.
[00:28:19] Right. And once it becomes more accessible to people who aren't as technical, 'cause, like, I just don't have the time to go learn it, you start to really realize the impact it could have in the very near future.
[00:28:32] Mike Kaput: Right.
[00:28:32] Enterprise AI Adoption: The People Problem
[00:28:32] Mike Kaput: Trend number six, counting down, is one we alluded to before, which is about enterprise AI adoption, and specifically the people problem in enterprise AI adoption.
[00:28:42] So this is a persistent theme we're seeing more and more, especially in Q1: organizations are failing to generate significant ROI from AI, often not because of technological hurdles, but because of the people. There's change management gaps, passive adopters, legal and IT [00:29:00] bottlenecks, and sometimes leadership that is not able to actually lead from the top, and as a result, deployments are stalling.
[00:29:08] So, interesting data backed this up over Q1. Our own AI Pulse survey, kind of an informal survey of the audience, found that 65% of listeners cited employee fear and resistance as either a major challenge or their single biggest barrier to adoption. A separate survey revealed a growing disconnect between how employees and leaders perceive AI's impact.
[00:29:30] You know, leaders consistently are overestimating organizational readiness. We had some Gallup research showing expanding adoption patterns, but a widening gap between power users and everyone else. You know, about 20 to 30% of employees actively resist AI adoption. And they also found in some of this research that a lot of enterprise use cases do not actually require access to sensitive data.
[00:29:54] So this whole idea that our data isn't ready, while it's important and is commonly cited [00:30:00] as a blocker, is not the whole story here. And Paul, you put this really well in a LinkedIn post a couple months ago, saying, if your company isn't generating significant ROI from AI adoption, then you have a people problem.
[00:30:12] And like you alluded to earlier in this episode, I mean, we're seeing this even more than I would've expected, I think.
[00:30:20] Paul Roetzer: And I would build on the people problem to say it most likely starts with a leadership problem. So what I keep finding time and time again is the organizations that are really struggling here often lack CEOs who have presented a clear vision for the future of work in their organization, and what is required and expected of their employees in that future of work.
[00:30:41] And what I mean by that is, like, if you have a CEO who doesn't fully understand AI capabilities today, doesn't realize that the reasoning has gotten pretty good, that the agentic stuff is emerging, and, you know, maybe some people on their team are starting to experiment with these things. If the CEO doesn't comprehend that, then how is that [00:31:00] CEO gonna present a vision for what the future of work looks like?
[00:31:03] And to say, listen, we expect you to take advantage of AI capabilities. We're gonna provide licenses to you, you know, generative AI platform licenses for ChatGPT or Copilot or Gemini. We're gonna provide AI education and training to you, and as a result, we expect you to constantly improve your AI literacy, your AI competency.
[00:31:23] We want you to make a greater impact on the efficiency and productivity of this organization. We want you to drive innovation. Like, this is what we want from you, here's how we're gonna measure it, it's gonna be part of your performance reviews. Like, it's literally an expectation that you are doing this.
[00:31:35] Now there's leading indicators, like you're completing the courses, you're getting certificates, you're building GPTs, you're using Gen AI daily. You can look at those leading indicators, but if a CEO hasn't said this yet, then it's gonna stay within pockets. Maybe the marketing team is doing it, or there's an AI champions group within the marketing team that's doing it.
[00:31:54] But that's what we see way too often. We have hundreds of companies, [00:32:00] of brands, that are part of our AI Academy, and almost every conversation goes this way. It's the marketing team, the sales team, someone on the ops team, like, they're taking the initiative to go get 15, 50, a hundred licenses for the AI for people in that company, which might be 70,000 people.
[00:32:19] And they're the only ones that are actually, like, seeking out education and training.
[00:32:24] Paul Roetzer: And so we will ask, like, hey, has the CEO stated what the plan is, presented a future of work? And it's like, no, almost every time. So yeah, I think the people problem starts with a leadership problem, in that those leaders haven't presented a clear vision and plan for how the organization is gonna evolve, even if it's just, we know it's going to evolve and we're working on figuring it out.
[00:32:49] And we would like you to be engaged in that process. So we're gonna provide these tools to you, we're gonna provide training to you to help you use those tools. And, you know, we think what we're gonna see is [00:33:00] increases in productivity and innovation, and, like, let's do this together and we're gonna keep you posted.
[00:33:04] Like it can be that, like, it doesn't have to be, we have the answer, but it's so rare to see that being done well right now.
[00:33:12] Mike Kaput: I am curious, for the leaders where you see that happening, is that a result of them not knowing they need to communicate that, or a result of them not knowing what to do in the first place?
[00:33:25] Paul Roetzer: I think it's that they don't understand AI. Because, you know, again, think about all the things we talk about. Think about just even these first five trends.
[00:33:32] Paul Roetzer: Like if you're a CEO and you're seeing this too, how could you be anything other than like racing forward to solve for this? Because there's no way to look at what's going on in AI and realize it's not gonna completely reinvent your industry and your company.
[00:33:48] And so if you truly understand AI's capabilities, and you're using it yourself every day and, like, feeling it each day, how could you not have a sense of [00:34:00] urgency to tell your team that you're working on the plans, and to go get those plans in place? So I think it is more just that they haven't had that aha moment where they realized the significance of what's happening.
[00:34:11] And I think a lot of times it's because they knew it was important and they've read the research, but they don't necessarily use it themselves every day. They don't feel it.
[00:34:20] Mike Kaput: Hmm.
[00:34:20] Paul Roetzer: And so they throw it to like the CIO or the CTO or whoever, and they're like, go figure this out. This is a technology problem.
[00:34:27] It's like, no, it's not. It's a business problem. And it is, like, gonna change everything about the way the organization runs. And if it's not treated in that way, then it's not gonna have a sense of urgency in the rest of the organization. That's what we see a lot. Like, you'll see these priority projects in a major enterprise where everyone knows, like, hey, what's the most important thing to the CEO right now?
[00:34:47] 1, 2, 3. Like, we know these things. If someone asks that of your organization and, you know, AI transformation isn't in the top three, you got a problem. I don't care how big the company is. [00:35:00] So I think that's the issue: it's not being treated as a priority of the CEO, and until it's a CEO's priority, it doesn't diffuse across the organization.
[00:35:11] Mike Kaput: All right, before we get into our final five trends, Paul, here's a quick announcement. This episode is also brought to you by our State of AI for Business Report. On the day you're listening to this podcast episode, we are in the final week of running our 2026 State of AI for Business Survey.
[00:35:30] This survey is going to inform the report. It's an expansion of our popular State of Marketing AI Report that we've done every year, and we are going beyond marketing-specific research to uncover how AI is being adopted and used across organizations. We are trying to survey thousands of business professionals across every industry and function.
[00:35:49] If you love the podcast, if you like what we've been doing, we would really, really appreciate it if you took this survey, if you have not already. It takes about five to seven minutes to complete. You can go to [00:36:00] smarterx.ai/survey to share your input. In return for completing it, you will get a copy of the report when it drops, plus a chance to win or extend a 12-month SmarterX AI Mastery membership.
[00:36:15] So go to smarterx.ai/survey. It is the last week to do this and contribute.
[00:36:22] SaaSpocalypse
[00:36:22] Mike Kaput: All right, trend number five: the SaaSpocalypse. In early February, $300 billion was erased from software and data stocks in just two days after Anthropic announced legal and sales plugins for Claude. Stocks like LegalZoom dropped 20%, HubSpot was down 39% year to date,
[00:36:43] ServiceNow dropped 27%. The S&P software index alone lost 15% in January, and the market called this the SaaSpocalypse. And the reason is because SaaS companies are caught in a bit of a crisis. These frontier [00:37:00] models are releasing features that are eating into the core features of traditional SaaS companies.
[00:37:05] Tools like Claude Code are giving people the ability to code their own solutions. And it's clear frontier model companies and labs are going after not just US software spend, but also US white-collar wages, because AI agents are increasingly able to just do work directly, instead of you needing software in the hands of a human to do it.
[00:37:27] Not to mention, we've talked about in past episodes, SaaS companies are caught in a bit of a pricing crisis at the same time. So the traditional per seat models start breaking down when one person with AI can do the work of 10. If head count drops, seat count will also drop. Credit-based pricing has emerged as an alternative, but companies are still working out how to price AI that replaces labor rather than augmenting a workflow.
So some SaaS providers are racing to figure that out. Some are trying to become model agnostic. All of them [00:38:00] are trying to stay relevant as these underlying models commoditize their features. Paul, where does this stand today? Obviously, you know, the stock values and drops have changed since the initial SaaSpocalypse, but the core issues here I don't think we've figured out in the last two months.
[00:38:19] Paul Roetzer: I haven't seen any answers yet. I think it's just still more uncertainty, and that's what we talked about at the time: Wall Street just hates uncertainty. And SaaS companies have been built on, you know, relatively predictable multiples. Their valuations are largely set on that, their funding rounds are set on that, their market cap is, you know, influenced by it.
[00:38:38] And so when all of a sudden you're like, well, okay, maybe in five years they won't be worth as much, or, like, the multiple won't be as high for software because people can build alternatives. Even though it's like, okay, well, no, people aren't necessarily gonna spin up their own CRMs. But it starts to create this doubt of, like, well, maybe some small businesses can, or maybe they just don't need as many [00:39:00] seats, or maybe they're not willing to pay as much per seat.
[00:39:02] Or you assume, when you're paying your $50-per-month seat license, whatever it is, that, like, your job is to make the software better for me. So why am I paying separately for the AI capabilities? I'm paying for the software to do a thing, and the intelligence helps me do the thing faster.
[00:39:21] So it's like, it's not my problem as a consumer that your costs went up. Like I'm, I'm paying for what I'm expecting from you. So it just, it creates all this complexity and I think a lot of software companies are just scrambling trying to solve for it. And you know, I mentioned on a recent episode, I think we're gonna see some turnover at the top of a lot of these software companies because it's gonna be a difficult time to navigate.
[00:39:46] And generally the markets aren't very patient. And so if you start to see these stocks staying down in this 30 to 50% range, and there's no bounce back apparent, it starts to look more and [00:40:00] more uncertain despite the fact that the revenues have been pretty strong. You're gonna need to get somebody in there who's got a different vision for how to do this.
[00:40:08] And so, yeah, I just think it's gonna be a really challenging time for software companies and the people who invest in those software companies. And then as a buyer, you know, as a CEO of a company that buys the software, every time you think about what the future looks like,
[00:40:26] it's like, well, is the software we have gonna get us there? Like, you know, I'll give you just a prime example: the SDR thing I mentioned earlier about OpenClaw. You look at what Claire presented in that podcast episode about the future of SDRs, and, you know, I sit here and think, well, is HubSpot gonna enable that?
[00:40:43] Like, that's our CRM. Or do I have to go get a third-party piece of software to do that? And the fact that I even have to stop and ask that question isn't great, right, for software companies. And because I do that with everything we do, it's like, well, all right, the piece of software we have [00:41:00] now, we're paying over a thousand a month for it, and it doesn't do that.
[00:41:04] And that would be really valuable to me. What do we do? So I think that is kind of a microcosm of what's gonna go on now: once you understand what AI's capable of, you're just gonna look differently at your tech stack, your monthly expenses tied to it, and what value you're getting from it.
[00:41:20] And if all you're getting in return is a credit-based model that you don't understand, you're gonna get pretty annoyed pretty fast. And that's how you get motivated to go find something else.
[00:41:30] Mike Kaput: You know, as if it wasn't hard enough for SaaS companies, I feel like, anecdotally, I've heard in the last few months from several people that they're encountering
sales reps at software companies that are not as equipped as you would hope to deal with some of these objections. Either it's, why can't I use Claude Code to do this thing? And they don't even know what you're talking about.
[00:41:49] Paul Roetzer: Correct.
[00:41:50] Mike Kaput: Or people using AI to do really robust research into competitors and the tech landscape that unfortunately sometimes salespeople are like [00:42:00] not remotely equipped with the same type of research.
[00:42:02] And then you not only have a bad conversation, but come away saying, well, if you're not using AI for this stuff, how do I have confidence that you're using AI in your,
[00:42:12] Paul Roetzer: Now the buyer's gotta do the job. And we can attest to this, Mike, like, the same happens on the customer success side. Yeah.
[00:42:18] Like, if you're doing the pre-work before you reach out to customer success, through Claude or ChatGPT or whatever, and then you get on a chat with a human at that software company, or a phone call with them, and you're like, dude, I'm doing your job for you. Like, I'll tell you what doesn't work, I already tried these 10 things.
[00:42:38] And they're just looking up a knowledge base. It's like, oh, I don't know, lemme check the FAQ. So yeah, I agree. There is this whole, you need to build an AI-forward team at all levels of marketing, sales, success, product, because you're gonna end up dealing with buyers who are more educated than the people in your company who are supposed to be helping them solve things.
Yeah, what used to be the, like, Google it, is now, like, did you [00:43:00] build a strategy in Claude before you called them? Right. You know, do all the things. So yeah, it's gonna be hard to work with those customers who are further ahead than your own people.
[00:43:11] Mike Kaput: All right.
[00:43:11] Labs Pivot to AI Agents
[00:43:11] Mike Kaput: Trend number four, as we count down is labs pivoting to AI agents.
[00:43:16] So we really started to see in Q1 every major lab, especially starting in March, pivot towards agentic capabilities and enterprise deployments simultaneously. This especially happened with the three frontier labs. OpenAI, for instance, announced they're consolidating ChatGPT, their browser, and Codex into, hopefully, a desktop super app.
[00:43:39] They're doubling headcount to approximately 8,000 as they target the enterprise, and they're trying to build an autonomous AI research intern by September of this year. Anthropic has launched Claude Cowork, a more agentic system that's easier for non-technical knowledge workers to use. They are also just [00:44:00] crushing it in the enterprise game in their fight against OpenAI.
[00:44:03] In terms of signing enterprise licenses, we saw Microsoft restructuring Copilot under Satya Nadella's direct oversight as they try to find their footing as well. And we've seen over the last few months all these different types of agentic releases. So OpenAI has dedicated agent products.
[00:44:21] They've been working on a Frontier program to partner with companies, and also some PE firms, to get in with different companies and the portfolio companies of those firms. Microsoft shipped Copilot Cowork. We even saw, on the open source front, Andrej Karpathy release an auto research agent.
[00:44:41] So all this agentic stuff is hitting at the exact same time. The labs are not only doubling down on it, but also doubling down on trying to get into and expand within enterprises. And you know, we've kind of seen this anecdotally as well, just on the podcast, as we've conversed about everything [00:45:00] agentic, right?
[00:45:00] We've talked about the timeline to agents, managing the chaos of agents, agent swarms, and of course the security nightmares that come with agents. So Paul, it seems like all agents, all the time, and get those enterprises to sign on the dotted line, is the strategy of the labs right now.
[00:45:18] Paul Roetzer: Last year, like, 2025 was definitely the year of agent hype,
[00:45:23] I would say. You know, we dealt a lot last year with over-promising from some of these tech companies about agent capabilities. You could see the beginnings. It's almost like where we're at with OpenClaw now; it's probably a little bit overhyped at the moment, but the reality's gonna start to set in as the year goes on.
[00:45:40] And so we knew coming into this year, again, agents aren't new. You know, it's been talked about for a decade. We've talked about it extensively. I built it into my AI Timeline, the stages of AI that we talked about on episode 207 and many times before that. The framework from OpenAI has agents as level three. So chatbots are one, reasoners two, agents three, and then [00:46:00] innovators and organizations at four and five.
[00:46:02] So agents aren't a new concept, but they're definitely starting to have their moment as they become more autonomous and more reliable in different use cases. As we're talking about this, I was doing quick searches, and I can confirm now what I said earlier. So this is tied to Anthropic's Claude Code command-line interface application.
[00:46:25] Not the models themselves, but the application, has been leaked and disseminated. This is from Ars Technica, I'm reading: apparently, thanks to a serious internal error, the leak gives competitors and armchair enthusiasts a detailed blueprint for how Claude Code works, a significant setback for a company that has seen explosive user growth and industry impact over the past several months.
[00:46:46] Early this morning, Anthropic published version 2.1.0.8 of the Claude Code npm package, but it was quickly discovered that the package included a source map file, which could be used to access the entirety of [00:47:00] Claude Code's source: almost 2,000 TypeScript files and more than 512,000 lines of code. A researcher was the first to publicly point it out on X, with a link to an archive containing the files.
[00:47:14] The codebase was then put into a public GitHub repository and has been forked tens of thousands of times. So keep in mind, we're doing this at 3:00 PM that day. Anthropic publicly acknowledged the mistake in a statement to VentureBeat and other outlets, which reads: earlier today, a Claude Code release included some internal source code.
[00:47:34] No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again. So, man, bad couple of days for Anthropic, getting some things put out into the world that should not have been put out into the world.
[00:47:51] And that one, you know, I, I think that one's probably pretty significant. Like,
[00:47:57] Mike Kaput: yeah,
[00:47:57] Paul Roetzer: There's a whole lot of people who would like to access that kind of [00:48:00] stuff, and they just got it for free. The other thing this illuminates, separately, and we'll talk about this on a future topic, is the weights of these models. The weights aren't what got leaked, but Anthropic has been more forthright than anyone about the significance of who has access to the weights of the models.
[00:48:18] And there was an interview Dario Amodei did, probably a year, year and a half ago, where he said at Anthropic there's literally, like, three people who have access to the weights. It's hidden from everybody. And he said that's the thing that foreign adversaries want to get to. They'll spend billions of dollars to try and get the weights from these models.
[00:48:34] And it's like, how good are the guardrails? If that's the future, you're gonna build this insanely powerful thing, like the mythos model, whatever's coming next, and all that's preventing the world from knowing how to replicate it is figuring out how to get to the three people who have access to those weights.
[00:48:52] Mike Kaput: Right.
[00:48:53] Paul Roetzer: When you're,
[00:48:54] Mike Kaput: That's terrifying.
[00:48:54] Paul Roetzer: You're, twice in 48 hours, like, leaking things that shouldn't have been leaked.
Mike Kaput: Right.
[00:48:58] Paul Roetzer: Just weird. It's like a, it's a [00:49:00] weird age we're heading into. yeah, so
Mike Kaput: It's pretty interesting. I assume this happens, I just haven't read about it, but you have to imagine some of the higher-ups, not even just the CEOs of these companies, have to be walking around with some serious security.
[00:49:15] Paul Roetzer: Yeah, it's like the nuclear codes, basically.
[00:49:17] Mike Kaput: What, like, talk about your senior engineers or something. Not even Dario Amodei, I assume he's got it, or Sam Altman. But higher-up employees would be pretty separate.
[00:49:27] Paul Roetzer: Yes. Well, this is why, part of it is memes, part of it not. You know, this is how counterintelligence stuff works.
Yeah. Like, this is how you do espionage and stuff. So, yeah, what's the most valuable thing right now? It would be hard to find things, at least in the United States, that have a higher value than the weights of these frontier models. So, espionage. You know, I would imagine there's some rather significant background checks.
There's probably a lot of [00:50:00] internal security monitoring of who the top employees are spending time with and friends with, we should say. And it sounds like a sci-fi movie, but I can promise you that stuff is happening. Yeah. Like, that is a very, very real thing. And those are high-value human targets, where you will do anything you can to get access to what they know.
[00:50:24] Again, we need a series, we need a Netflix series on this. As much as we talk about all the branches of AI and all these intriguing things, it is likely infinitely more intriguing than we even know. And I use intriguing as a word that's carrying a lot of weight, both good and bad.
[00:50:45] There are probably so many more layers to what is going on in AI that would make for such fascinating stories. Like, I'm not sure Hollywood could do justice to what's actually going on in the AI world right now.
[00:50:58] Mike Kaput: All right.
[00:50:58] AI-Driven Layoffs Go Mainstream
[00:50:58] Mike Kaput: Trend number three, we are talking about AI-driven layoffs going mainstream. So we haven't seen wide-scale AI-driven layoffs yet, but we have seen a lot more chatter and conversation around this, and people are starting to actually attribute some of the layoffs that are happening to AI.
[00:51:18] So for instance, we saw tech company Atlassian cut 1,600 employees, 10% of its workforce. This quarter, they explicitly attributed this to their transition to the AI era; they were one of the first major companies to really name AI directly. Block, Jack Dorsey's company, cut approximately 4,000 employees, nearly half its workforce, and talked quite a bit about how AI was making them more efficient. Their stock surged on that announcement.
[00:51:46] And just recently we've heard from Uber's CEO, talking on The Diary of a CEO podcast, saying that executives privately admit the true scale of AI disruption, even though they are going on TV and telling audiences [00:52:00] everything will work out fine. Uber's CEO personally estimated AI will replace the work of 70 to 80% of humans within the decade.
[00:52:07] He has no idea what's gonna happen to Uber's 9.5 million drivers in that era, either. The same week, PwC's US CEO told the Financial Times that employees who think they can opt out of AI are, quote, "not gonna be here that long." So Paul, we've now seen several thousand layoffs related to AI.
[00:52:25] We've talked about how we expect those to rise, but I think the bigger thing here really is CEOs are publicly breaking the silence and saying, look, AI is going to be a factor here. Is that kind of what you're seeing and hearing?
[00:52:40] Paul Roetzer: Yeah. This is a trend I wish would go away, but unfortunately I think it's gonna stick around, and you can't move too much higher up the list than number three.
[00:52:46] But, um, yeah, I expect this trend to continue and to gain steam, unfortunately, both the unemployment and the underemployment, you know, as we get more data around that.
[00:52:58] Mike Kaput: Yeah.
[00:52:59] Paul Roetzer: [00:53:00] There was a post this morning from Heather Long, chief economist at Navy Federal. She tweeted: US hiring rate fell to 3.1% in February, the lowest since April 2020, which was mid-COVID.
[00:53:14] This is a hiring recession and Americans are feeling it. There were notable hiring pullbacks in February in hospitality and construction. Healthcare has actually been holding the market up a bit. Bottom line, the job market was already frozen before the war in Iran began.
[00:53:29] It's worrying that a no-hire, no-fire situation could turn into a no-hire, start-to-fire job market quickly if there isn't a resolution soon. Now, that is not AI specific. There's nothing in there where she was saying this is because of AI. But it's simply pointing out what I've said on the podcast numerous times, which is what I am hearing: this no hire, no fire.
[00:53:53] Like we are not adding anybody.
[00:53:56] Mike Kaput: Yeah,
[00:53:56] Paul Roetzer: we're gonna try and avoid firing, but we are [00:54:00] pausing hiring, and the only new hires will be through attrition when we need to replace people. It's like flat growth is sort of the desired state right now. And as I've said before, I don't know a CEO who wants to fire 20% of their staff. Like, I've yet to meet that person.
[00:54:19] You know, I think generally speaking, leaders of companies want to create opportunities for humans, and the idea of human employment, and that being a driver of the economy, is pretty fundamental to our democracy working. And it's pretty important that it continues. But there's gonna be tremendous financial pressure on leaders to
[00:54:41] take action and to capture some of the efficiency gains in profits. And that's gonna lead to some very challenging periods here. And so this is an area we're thinking a lot about. Like, I was actually just talking with Mike and Taylor on our team, on the research front, about really starting to lean more into this and [00:55:00] trying to do more research around what is happening on the frontiers.
[00:55:04] Like, what is being talked about? What can we be doing? So it's not just us showing up each week being like, it's getting worse, yeah, another 50,000 people lost their jobs. We want to try and contribute to the dialogue, at least, of finding answers. Um, I don't have the answers. I have some theories of things I'm working on myself, but I think we collectively need to just be exploring ideas here.
[00:55:30] Putting think tanks together, groups of people that you trust, that you can bounce ideas around with. Um, we just need to be talking more about answers, because it's not coming from the labs who are building the tech and, you know, creating this eventual uncertainty and chaos. So yeah, really, really important trend.
[00:55:48] I wish it would go away. It's not going to, so we gotta do something about it.
[00:55:54] Mike Kaput: Alright,
[00:55:54] We’re Seeing More Move 37 Moments
[00:55:54] Mike Kaput: trend number two before we hit our top trend this quarter. Number two is we're seeing more of what we call move 37 moments. So we track, you know, what we call move 37 moments on the podcast. This is this point where a professional in a given field realizes firsthand that AI can match or exceed their expertise.
[00:56:16] The term comes from AlphaGo's Move 37 against Lee Sedol in 2016, the move that made the world's best Go player realize the machine had surpassed him. And we're starting to see a few more of these moments, or glimmers of them, out in the wild. I mean, in February, we actually dedicated an entire segment to this phenomenon.
[00:56:39] Sam Altman recently noted that OpenAI's Codex coding tools had suggested features superior to his own team's ideas. Dropbox's former CTO declared that he'll never write code by hand again. Goldman Sachs has begun deploying Claude for trade accounting. KPMG is under pressure [00:57:00] to cut audit fees because AI can do the work instead.
[00:57:03] David Kipping, an astrophysicist, reported that AI had about 90% of the intellectual capability he was seeing in his field. In March, a Polish mathematician reported his own Move 37 moment after GPT-5.4 helped solve a problem that had resisted conventional approaches. Boris Cherney, creator of Claude Code, declared on Lenny's Podcast that coding is effectively solved.
[00:57:28] And we also talked about this New York Times AI writing quiz that 86,000 people took, where 54% of them preferred AI-written passages over the work of famous authors. So some glimmers here, Paul. The list of fields where humans hold an unambiguous advantage seems to be getting shorter every quarter.
[00:57:50] Can you tell us a little bit about why Move 37 moments are important? And it seems like we're seeing more of 'em. Do you agree with that?
[00:57:58] Paul Roetzer: Yeah. This was [00:58:00] the premise of my MAICON keynote in 2025. And in essence, what I was seeing was, you know, for the most part, AlphaGo, which is an incredible documentary that sort of changed my perspective on AI and really the future,
[00:58:14] it was always talked about as a technical breakthrough, like the technology capabilities of this AlphaGo system. And what I challenged people at MAICON to think about was the human side of it. Like, what happened to Lee Sedol in that moment when he realized the machine was better than him at the game he was an expert in?
[00:58:31] Paul Roetzer: And so that was my premise at that time: we would all come to experience that Lee Sedol moment, where you just say, wow, it's just better than me at this thing. And then what do we do from there? And so it was probably the most challenging keynote I've ever given, because up until like 24 hours before I gave it, I actually didn't know the ending of the talk. It was the start of our conference, so I didn't want everybody feeling defeated and like, oh shit, [00:59:00] well, let's just go home.
[00:59:01] And so I took people through it, where I showed excerpts from the documentary, and hopefully that had that emotional impact on people, that somewhat of a gut-punch feeling like Lee Sedol had. But then I turned it into something about, yeah, but we still have choice. Like, we can still do something about this, and we can figure out
[00:59:20] how to use these as tools that give us, you know, new abilities and a different way to look at business and our own careers. And so I think that's what more people are gonna come to grips with. This is another one where I don't see this slowing down. I think this is just a reality, and pretending like it's not coming isn't gonna do anybody any good.
[00:59:40] You and I each, Mike, have these conversations all the time, where it's like, well, it can't do what I do. Like, yeah, I get that it's good, but it could never do what I do. And it's like, hmm, yeah, okay, that's probably not gonna end well for you. But like, I understand. And you do have to have these moments where you decide, like, when can you push someone?
[00:59:58] It could be a friend, it could be a [01:00:00] family member, it could be a coworker, could be a boss. You know, like, you listen to this podcast, so you're probably in the know about what these things are capable of and where they're going. And you look around at the rest of the world, and they're just blissfully unaware.
[01:00:13] Like, I was actually, we had our dads' basketball tournament this past weekend. A buddy of mine who probably listens to the podcast, he's messing with OpenClaw all the time, like he's doing all this crazy shit. And so we're sitting at the bar Saturday night after the basketball tournament ended, and it's literally a bunch of dads playing basketball for two days.
[01:00:33] It's great. But we're talking about what he's doing with AI and with OpenClaw, and then you have that moment where you look around the room, there's just hundreds of, you know, couples there, and you're like, damn, they have no idea. And not in an I-feel-bad-for-them way, more like the two of you are just living in this parallel universe, where you are seeing the future and you're [01:01:00] realizing they all have careers and families and colleges to pay for and kids to raise, and they have no concept of what is going on.
[01:01:09] Mike Kaput: Hmm.
[01:01:10] Paul Roetzer: And there's a part of me that's envious of that, honestly. Like, the ignorance to the moment is actually something I sometimes wish I had. And I think anybody who has the knowledge, you have those moments where you're like, God, I wish I just didn't know what I know.
[01:01:28] Like, I wouldn't be worrying so much every day about jobs and the future of education and all these things. But once you know it, you can't turn it off. And, you know, I don't know if the Move 37 moment is what triggers that for people, where you have that realization, like, oh my God, it can do what I do.
[01:01:46] Paul Roetzer: And then everything is different from that moment on. You just start to look at all of it differently. So, yeah, I don't know, it's a really important thing. We'll drop the link to my keynote in the show notes if you haven't watched it. We put the whole thing on YouTube. [01:02:00] I would say I've given thousands of talks now in my life.
[01:02:03] That was the second hardest talk I've ever done, I would say, for different reasons. Maybe I'll tell the story a different time; there's one other talk I did at MAICON that was the hardest I've ever done. And I'm not saying hard in terms of technically hard, just personally hard. That was a tough one to keep my composure on stage, because I knew the punchline I was going for and I was having a hard time getting to it. I think it was like a no-turning-back kind of moment for me.
[01:02:33] So, yeah, it's worth a watch probably if you haven't seen it.
[01:02:38] The Vibe Shift
[01:02:38] Mike Kaput: Alright, so our final top trend that we have been tracking this past quarter is what we're calling the vibe shift, so to speak. This is the quarter where the conversation around AGI really entered, I think, the public discourse. It entered the boardroom, the [01:03:00] newsroom, the living room.
[01:03:01] And despite, you know, many people still being very early and in their own bubble, we started to hear about this everywhere. The single piece of content that captured the shift was probably Matt Schumer's essay, "Something Big Is Happening," which was viewed 85 million times on X. In roughly 5,000 words, Schumer, who is an AI CEO and founder, wrote about what many insiders had been thinking but not saying publicly.
[01:03:32] He said, you know, at parties and things he's historically given the polite version of where is AI going, what's going on with AI, because the honest version, he said, sounds like he's lost his mind. And he goes on to detail how we're in this moment of a possibly fast AI takeoff that feels, in his analogy, a lot like February 2020, right before COVID struck, when a few people were seeing signals that the world was about to change.
[01:03:57] And you know, we've talked about this in a couple of other [01:04:00] contexts, how this all kicked off with Episode 189, which started this year with a segment called How Close Are We to AGI?, because basically Claude Opus 4.5, over Christmas break, was demonstrating some really wild capabilities, especially when paired with Claude Code.
[01:04:17] We even had a Google principal engineer saying Claude completed a year's work in one hour. The audience response to our episodes, are we at this tipping point, something big is happening, was unlike anything we've ever seen. Like, listeners have also been seeing this turning point, where something changed at the end of last year and the beginning of this year, in terms of AI capabilities and in terms of what's now possible, especially for non-technical knowledge workers.
[01:04:45] So, Paul, like how big of a moment are we in?
[01:04:51] Paul Roetzer: I mean, you and I did that first episode of 2026, when we flipped the calendar. Yeah. You could just feel it, like something had [01:05:00] changed over that winter break. We talked about it, how the online dialogue was just different between the people who were building things, specifically with Claude.
[01:05:10] Um, you know, one of the best ways we keep a pulse on what's going on is through the questions we get from audiences. And so it's one of the luxuries I have of teaching the Intro to AI and Scaling AI classes free every month: we have like 2,000 to 2,500 people a month go through these classes, and we take questions live.
[01:05:31] And so we are getting hundreds of questions a month, basically, in addition to the speaking engagements and executive briefings, where you get those firsthand things. And you can just feel the difference based on what people are asking about and the stories they're telling of their own experimentations.
[01:05:50] And it is very, very different than it was three months ago. We did an AI in CLE event just last week, Mike, and yeah, it was like 120, [01:06:00] 150 people or something registered for it. And the questions we got there, like, everyone wanted to talk about how they're using Claude Cowork, or, you know, what apps they're building with no code, messing with OpenClaw, questions about the environment, political questions. The dialogue has just moved.
[01:06:16] It has moved so far. But even then, you have to keep it in context, I guess, because I would say the people who are in the know and out ahead are just moving further and further ahead, and they're experimenting on the frontiers, and it's easy to do what we do and kind of get caught up in that bubble, thinking everyone's moved on, everyone's ready to talk about Cowork and OpenClaw and all these things.
[01:06:40] Mike Kaput: Yeah.
[01:06:41] Paul Roetzer: And then you go spend time with a bank or a healthcare system or a manufacturing company, or take your pick of a school, and you're like, man, they don't know anything. Like, they're basic. If they're using a chat [01:07:00] bot, it's likely a base version of a chatbot that doesn't even have all the capabilities built into it.
[01:07:06] Oblivious to all the capabilities,
[01:07:09] Mike Kaput: right?
[01:07:09] Paul Roetzer: And so I think, like, the haves and the have-nots is maybe a way to say it. With AI, the gap is expanding dramatically. And I think over time that's going to start to expand into the outcomes and benefits of it as well, to where the distribution of those benefits is gonna be heavily weighted towards those early movers and the people who are actually out figuring this out.
[01:07:33] And they're gonna get compounding value while these other people are sort of being left behind. And I don't want that to happen.
[01:07:39] Mike Kaput: Yeah.
[01:07:39] Paul Roetzer: So I think we feel this vibe shift every day. And I just, you know, I've said before, I feel a greater sense of urgency every day to do more, because I see so many people who aren't aware yet, or don't have a sense of urgency to solve for it in their own lives, in their own companies.
[01:07:59] And that's [01:08:00] gonna be challenging to see.
[01:08:02] Mike Kaput: All right, Paul, that wraps up 10 trends for Q1. It's been a wild start to 2026. This is actually really good timing, I think, because I feel like this is a good breath and a recap before the storm, so to speak, that's gonna happen in the next few weeks when you're back. Like, with model releases, I think we're in for, oh my gosh, a very fast spring and summer.
[01:08:23] Paul Roetzer: Yeah. And, like, quick show note, I was thinking about this. So, episode 207, we were talking about Peter Steinberger and OpenClaw.
[01:08:29] Mike Kaput: Yeah.
[01:08:29] Paul Roetzer: And I was sharing that I'd listened to the Lex Fridman podcast episode, which came out on February 11th, but I didn't listen to it till March 30th or 29th or something like that.
[01:08:38] So I'd mentioned at the time, like, you know, he was talking to Sam Altman and Mark Zuckerberg. But then when you said the thing about him going to OpenAI, I was like, oh, shit, that's right, he did go to OpenAI. I'd mentioned, like, oh, he might go to Meta or whatever. But no, Steinberger published a post on February 14th, three days after the Lex Fridman podcast, saying,
[01:08:55] I'm joining OpenAI to work on bringing agents to everyone; OpenClaw will move to a [01:09:00] foundation and stay open and independent. And so he's got a blog post we'll throw in. So yeah, just a quick note. I'm not a hundred percent sure what I said, but in that episode with Fridman, he talked about how he was gonna basically go work for one of those two.
[01:09:12] And I think I was leaning towards Zuckerberg. But yes, he did end up going to OpenAI and moving OpenClaw to more of a foundation. So yeah, just kind of a quick show note. We'll throw that link in the show notes so you can see it.
[01:09:26] Mike Kaput: Cool.
[01:09:27] Paul Roetzer: Yeah, so hopefully this trends format was like super helpful to people.
[01:09:30] It's helpful to us. It's always one of my favorite things Mike and I get to do each quarter, to step back and go, holy cow, how did that all happen in three months? And I feel like just the 10th trend, all the model releases, is hard enough to comprehend that it all happened. And we always inevitably get these, well, what about this? What about this? What about this?
[01:09:48] It's like, trust us, we know there's like 20 other things that could've made the top 10. Yeah, we only have so much time in the day to go through each of these things. [01:10:00] Thanks, Mike, for putting this all together. These are great.
[01:10:01] Mike Kaput: No problem.
[01:10:02] Paul Roetzer: And, like I said, hopefully we'll make this kind of a recurring show.
[01:10:05] We'll probably do it as like a bonus episode moving forward, you know, unless we have a week where we're on vacation. But we'll start doing these as a special quarterly episode, and we'll be back April 14th with the next weekly episode. So, yeah, I mean, we've already had a lot happen in the first two days of this week, so I imagine by then we're gonna have like a hundred links to get through, Mike.
[01:10:23] Mike Kaput: Oh my gosh, looking forward to it.
[01:10:26] Paul Roetzer: Well, have a great Easter holiday, and a great trip if you're taking spring break anywhere. If you celebrate Easter, you know, enjoy your time with your family. That's what I'm planning on doing, and hopefully not working too much, but I've got a long flight and I can't sleep on flights, so I'm gonna be super productive for like 20 hours.
[01:10:42] Other than that, I'm gonna try and just enjoy time with my family. So thanks for listening. We'll be back with you again soon. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.AI to continue on your AI learning journey, and join more than 100,000 professionals and business leaders who have [01:11:00] subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events,
[01:11:06] taken online AI courses, earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community. Until next time, stay curious and explore AI.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.
