Elon Musk took the stand this week in his federal trial against OpenAI and admitted xAI distills OpenAI's models. That's just the first five minutes.
Episode 212 covers an intense week in AI industry drama: the Musk-OpenAI trial and what a Musk victory could mean for every business built on ChatGPT, the sudden rewrite of the Microsoft-OpenAI partnership (and the quiet removal of the AGI clause that defined it), Big Tech earnings that sent Google stock up 12% while raising questions about who actually wins the AI infrastructure race, an AI agent that deleted an entire production database in nine seconds, Anthropic's eye-popping $900B valuation, and a growing populist backlash against AI that's spilling from X threads into Molotov cocktails. Plus: Paul shares why former Senator Ben Sasse's 60 Minutes interview hit him on a deeply personal level and what originally drove him to pursue AI in the first place.
Listen or watch below and see the show notes and transcript that follow.
This Week's AI Pulse
Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI.
If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.
Click here to take this week's AI Pulse.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
00:04:45 — Elon Musk vs. OpenAI Trial Begins
- Pre-trial framing
- CNBC: Musk has already won against Altman, says WSJ's Tim Higgins - CNBC
- OpenAI Newsroom: We can't wait to make our case in court - OpenAI
- Musk and Altman head to trial in feud over future of OpenAI - Bloomberg
- Musk's “Scam Altman” social media campaign
- X Post from Elon Musk: Altman owned the OpenAI Startup Fund
- X Post from Elon Musk: Altman and Brockman stole a charity
- X Post from Elon Musk: Altman lied in congressional testimony
- X Post from Miles Brundage: Is Elon tweaking the X algo to influence the trial? - X
- Musk boosts New Yorker's Altman exposé as trial begins - Wired
- The trial itself
00:14:49 — OpenAI and Microsoft Revise Their Partnership
- X Post from Sam Altman: We've updated our partnership with Microsoft
- The next phase of the Microsoft-OpenAI partnership - Microsoft Blog
- X Post from Kobeissi: MSFT -5% as OpenAI license becomes nonexclusive
- Andy Jassy on OpenAI on AWS
00:30:33 — Big Tech Earnings
- Big Tech Earnings Summary
- X Post from Tae Kim: 4 Hours of Earnings Calls Distilled
- The State of AI After Google, Meta, Amazon, Microsoft Earnings - Key Context by Tae Kim
- Alphabet: Pichai Calls Q1 a “Terrific Start”
- X Post from Sundar Pichai: Q1 earnings
- Alphabet Revenue Soars 22%, Strong Growth in Cloud, Search - The Information
- X Post from Gene Munster: Google Search up 19% in March
- Microsoft: Record FY26 Q3 with $37B AI Revenue Run-Rate
- X Post from Microsoft: Record FY26 Q3 earnings
- X Post from Satya Nadella: Earnings wrap-up
- X Post from Gene Munster: MSFT in a tight spot, stuck with negative narrative
- Meta: Stock Down 6% on Capex Guide; Zuck Blames AI Costs
00:39:39 — Anthropic Eyes $900B Valuation
- Anthropic considering funding offers at over $900B value - Bloomberg
- Anthropic's potential $900B valuation round could happen within 2 weeks - TechCrunch
- Q1 2026 earnings call: Remarks from our CEO - Google
- Google IO 2026
- Google's $40B Anthropic Investment
- OpenAI Misses Revenue Targets in IPO Sprint
00:43:53 — Trump's Anthropic Reversal and Mythos Fight
- Trump admin doing a 180 on Anthropic - Axios
- X Post from Dean Ball: DoW picked a fight with the most important AI company
- X Post from Dean Ball: White House should issue guidance to end the manufactured crisis - X
- X Post from Jim VandeHei: Editorial on Anthropic reversal
- White House opposes Anthropic's plan to expand access to Mythos - The Wall Street Journal
- AI Nationalization Debate
- What Happens if Trump Seizes AI Companies - The Atlantic
- X Post from Lila Shroff: Talk of AI nationalization is growing — what would it actually look like
- Google's Pentagon Classified AI Deal
00:52:48 — Agents Gone Wrong
- X Post from @lifeof_jer: Agents Gone Wrong
- X Post from Simon Willison: The two real lessons here
- Claude-powered Cursor agent deletes entire company database in 9 seconds - Tom's Hardware
- X Post from Jason Lemkin: Reaction
- Colin Fleming LinkedIn Post: "A dangerous idea is moving through enterprise AI" - LinkedIn
- Global Intelligence Truth Institute (Jeremiah Owyang) - Global Intelligence Truth Institute
00:56:47 — Myths We Tell Ourselves About AI and Jobs
- X Post from Clara Shih: Very smart, worth discussion
- The AI Layoff Trap (Prisoner's Dilemma Paper) - arXiv
- Apprenticeship Modernization Executive Order - Department of Labor
- Why the AI Apocalypse (Probably) Won't Happen
- State of AI for Business 2026 Webinar - SmarterX
- Salesforce: Benioff Hires 1,000 New Grads
- Sal Khan's $10K AI Degree
- $10K AI degree with Google, Microsoft, Replit - SF Standard
- Khan's AI degree could rival Harvard - Apple News
- Y Combinator Summer 2026 RFS
01:06:15 — Why Shopify CEO Tobi Lutke Says “Saying The Thing Matters”
- X Post from Tobi Lutke: Shopify CEO follow-up to last year's AI memo
- AI First CEO Memo Template - Paul Roetzer LinkedIn
01:11:20 — AI's Public Backlash Problem
- The AI Industry Is Discovering That the Public Hates It - The New Republic
- X Post from Andrew Yeung: 35% of Americans use AI weekly, but regular people don't like it
- Berkshire Hathaway, Chubb win approval to drop AI insurance coverage - The Information
- AI artificial intelligence backlash - The New York Times
- X Post from Taylor Lorenz: Pro-AI dark money group seeded by Palantir/OpenAI execs
01:15:16 — AI Use Case Spotlight
- Experience Inbound | Wisconsin's Premier Marketing & Sales Conference - Experience Inbound
- Stream Creative - Milwaukee Marketing Agency - Stream Creative
- Weidert Group - Industrial Marketing Agency - Weidert Group
- Paul Roetzer LinkedIn Post
01:22:59 — AI Academy Spotlight
01:25:58 — Ben Sasse's Parting Words on AI
- X Post from 60 Minutes: Ben Sasse extended interview
- X Post from 60 Minutes: “22-year-olds couldn't assume the work they did...”
01:31:56 — AI Product and Funding Updates
- David Silver's $1.1B Seed for Ineffable Intelligence
- Former DeepMind researcher's startup raises record $1.1B seed - CNBC
- The Man Behind AlphaGo Thinks AI Is Taking the Wrong Path - Apple News
- X Post from Alfred Lin: Ineffable Intelligence commentary
- Sequoia and NVIDIA back ex-DeepMind researcher at $5.1B value - Bloomberg
- OpenAI launches GPT-5.5 prompt guidance
- OpenAI: Cybersecurity in the Intelligence Age
- OpenAI: Where the Goblins Came From
- Anthropic launches Claude for Creative Work
- Google Gemini gains in-app file generation
- Microsoft Agent 365 hits General Availability
- Meta acquires Assured Robot Intelligence (humanoid)
- ElevenLabs launches Agent Templates
- Lovable launches mobile app
- Stripe Sessions 2026 announcements
- Cloudflare: Agents can now create accounts, buy domains, deploy
- Hightouch raises $150M Series D for marketer AI platform
- Avoca AI raises with Kleiner Perkins for HVAC/plumbing/roofing agents
- Microsoft Pushes Usage-Based AI Pricing
- Atlassian & HubSpot Shift to AI Flat Fees
- China Blocks Manus Acquisition
This episode is brought to you by AI Academy by SmarterX.
AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. Learn more here.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: I am fully convinced if the CEO isn't leading the charge, it's not going to work. Like the CEO has to be the ringleader here. Like they have to be fully in this, and it cannot just be lip service that like AI's important and we're gonna go through this transformation. It's like, no, you gotta lead the charge.
[00:00:18] Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX chief content officer Mike Kaput.
[00:00:38] As we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all.
[00:00:54] Welcome to episode 212 of the Artificial Intelligence Show. I'm your host Paul [00:01:00] Roetzer with my co-host Mike Kaput. We are recording on Monday, May 5th, no May 4th. We're off to a great start Monday, May 4th, 9:40 AM Eastern Time. I don't think we're expecting any, like, major new models or anything this week.
[00:01:14] I don't think anybody has major conferences this week, but who knows? Always a surprise waiting for us around the corner. but we have plenty of big topics to talk about, Mike. There's a couple of rapid fire items that I'm gonna do my best to keep as rapid fire items, but I have, I have some thoughts on some things today.
[00:01:31] I've been thinking big picture a lot lately. okay. So today's episode is brought to us by AI Academy by SmarterX, which helps individuals and businesses accelerate their AI literacy and transformation through personalized learning journeys and an AI powered learning platform. New educational content is added weekly, so you always stay up to date with the latest AI trends and technologies.
[00:01:54] The AI for Industries collection features seven course series and professional certificates [00:02:00] designed to jumpstart AI understanding and adoption. We have AI for professional services, healthcare, software and technology, insurance, financial services, retail and CPG, and manufacturing. And Mike, I think you're gonna give us a little preview later on.
[00:02:15] Financial services, is that right?
[00:02:17] Mike Kaput: That's it.
[00:02:18] Paul Roetzer: Okay, cool. So these series are an ideal launchpad for organizations that want to level up their teams and accelerate AI adoption. And in this episode, we're going to talk through AI for financial services. As I mentioned, individual and business account plans are available now, or you can buy single courses and series for one-time fees.
[00:02:38] So visit academy dot SmarterX.ai to learn more. Again, that is academy dot SmarterX.ai to learn more. Alright, each week we go through an AI Pulse, Mike, and this is an informal poll where we just put these questions up and see what our listeners think. We have actually, I dunno if we're gonna do it this week, Mike, but we talked about expanding this poll and actually putting it out on LinkedIn.
[00:02:59] So we may actually [00:03:00] expand this and start gathering more responses. We've just basically been testing it for the last couple months and just kind of seeing how people react to these things. But now we're thinking about, you know, expanding this to get more real time research in. So if you see this poll also on LinkedIn, or wherever we decide to put it, definitely check that out as well.
[00:03:15] Or you can go to SmarterX.ai/pulse and participate in this week's survey, which we will give you at the end. So last week we asked, where is your organization at when it comes to deploying AI agents? This is an intriguing one. Hmm. Let's see. We got 44% say selectively piloting in a few areas, 38% waiting and watching for now, and then 9% and 9% on aggressively deploying across workflows and not on our radar yet.
[00:03:44] So that's fascinating data that we might want to ask again, Mike, at a broader audience. I'd be really fascinated to get more responses on that. 'Cause again, like, we present this as an informal poll, it's not enough responses to, you know, publish as, like, key findings per se. But that's one I would really be [00:04:00] interested in getting more on, especially given all the talk about agents lately.
[00:04:03] Mike Kaput: Yeah.
[00:04:03] Paul Roetzer: And then the second is, what's holding your organization back from deploying AI agents more than you are today? Also, another intriguing one. Wow, this is a big one. 67% security and governance concerns. Governance concerns. Wow. Okay. 11%. Nothing. We're actively deploying them, so we're not worried. 13% use case is not obvious for our business.
[00:04:25] And 9% ROI is still unclear. That is a pretty strong majority on security and governance concerns. And I would say, Mike, that's probably us. Like, I mean, we think, oh, for sure, we get what they can be used for. We're just trying to figure out how to do it safely and how to govern it as you scale up these agents.
[00:04:40] Mike Kaput: Well, especially after some of the topics we've got on the docket today. Yeah, I can see why that might be the case.
[00:04:45] Elon Musk vs. OpenAI Trial Begins
[00:04:45] Paul Roetzer: No doubt. All right. So with that, we're gonna dive into our first topic, straight outta Hollywood, Mike: Elon Musk versus OpenAI and Microsoft.
[00:04:56] Mike Kaput: Yes. As Paul said, the federal jury trial in Elon Musk's [00:05:00] lawsuit against OpenAI opened up this past week in Oakland before a US district judge.
[00:05:05] The courthouse was predictably packed with lawyers, journalists, OpenAI employees. There were protesters lining the street outside, both urging people to quit ChatGPT or boycott Tesla. Musk is asking the court to remove OpenAI CEO Sam Altman and President Greg Brockman from their roles and to unwind the restructuring that allowed OpenAI to operate a for-profit subsidiary.
[00:05:29] The outcome could certainly upend OpenAI's race towards an IPO. xAI, meanwhile, is also expected to go public as part of SpaceX as early as June at a target valuation of 1.75 trillion. On the stand, which Musk took this week, he argued that he had been duped into bankrolling the company. He said, quote, I was a fool who provided them free funding to create a startup.
[00:05:53] He said he co-founded OpenAI in 2015 as a donation to a nonprofit developing AI for [00:06:00] Humanity, not a startup to enrich the executives. He said, quote, I gave them 38 million of essentially free funding, which they used to create what would become an $800 billion company. He described three phases in his relationship with OpenAI.
[00:06:15] First, he was enthusiastically supportive. Then he started to lose confidence. Then he concluded that they were, quote, looting the nonprofit, and he testified that the turning point came in late 2022 when he learned Microsoft would invest 10 billion in OpenAI. So as part of his testimony, he's also leaning hard on AI safety.
[00:06:36] He told jurors he co-founded OpenAI as a counterbalance to Google and recounted that Google co-founder Larry Page once told him that if AI wiped out humanity, it would, quote, be fine as long as artificial intelligence survives. Musk added that, quote, the worst case scenario is a Terminator situation where AI kills us all.
[00:06:56] OpenAI's lead trial counsel William Savitt, however, [00:07:00] argued that Musk was actually suing to undermine a competitor. He surfaced a 2017 email Musk had sent to a Tesla VP after hiring Andrej Karpathy, a founding member of OpenAI, away to Tesla. In it, he wrote: the OpenAI guys are gonna wanna kill me, but it had to be done.
[00:07:19] Another dramatic moment to come outta this: Savitt asked whether xAI uses distillation techniques on OpenAI's models to train Grok. Musk answered, quote, partly, which drew audible gasps in the courtroom. He did defend the practice, saying it is a standard practice to use other AI to validate your AI.
[00:07:39] Next week's witnesses include UC Berkeley computer scientist Stuart Russell, who will testify on AI safety, and Greg Brockman. Paul, this whole thing is insane. So what stood out to you from this trial so far? Like, I thought it was kind of interesting just to hear Musk saying, Hey, I was a fool who was duped, but also in the same [00:08:00] kind of breath saying, Hey, by the way, we also
[00:08:02] try to distill OpenAI's models over at xAI.
[00:08:06] Paul Roetzer: I mean, the fact that the trial's even happening is the part, to me, that's the most unbelievable. Right? I said on the podcast leading up to this, I just couldn't believe this was actually gonna go to trial and they were actually gonna take the stand. But here we are.
[00:08:16] It is truly, you know, almost like if it was a Hollywood movie, it would be hard to believe the script. So, I don't know. I mean, the reason I'm now so intrigued, now that it's actually gone to trial, is the off chance that Musk gets some victory out of this and it impacts OpenAI's ability to go public and it impacts their plans.
[00:08:36] Like, that would have a cascading effect on the economy because of all the tech companies that are invested in OpenAI and the role they're already playing in business. It's potentially very significant if any major change results from this. So, I don't know, just the uncertainty of the fact that it's now gone to trial.
and it's just infinitely [00:09:00] fascinating to, like, follow. I guess there's this soap opera-esque aspect to it. I don't remember if we touched on this last week, or maybe it was, it might have been on Friday when we did our members-only, like, quarterly trends briefing. But the first day, I think within the first hour, the judge scolded both parties for basically litigating this on X.
[00:09:22] Yeah. And told them, stop, like, stop posting about this. So I think it might've been the second day where the judge scolded them, but I think it was maybe the first day where Elon Musk tweeted, Scam Altman, of course he's gotta have a nickname, owned the OpenAI Startup Fund while simultaneously lying to the world that he didn't financially benefit from OpenAI.
[00:09:45] And then on April 27th, he tweeted, which at the time I grabbed this was 37 million views, I'm sure it's way more than that now. This was Elon's primary argument, like, summarized in a single X [00:10:00] post. It says, Scam Altman and Greg Stockman, oh, I didn't even notice that one when I first looked at this,
stole a charity. Full stop. Greg got tens of billions of stock for himself and Scam got dozens of, oh my God, OpenAI side deals with a piece of the action for himself. I feel like Grok wrote this one. Yeah. Y Combinator style. After this lawsuit, Scam will also be awarded tens of billions in stock directly.
[00:10:27] The fundamental question is simply this: do you want to set legal precedent in the United States that it is okay to loot a charity? If so, you undermine all charitable giving in the United States forever. I could have started OpenAI as a for-profit corporation. Instead, I started it, funded it, recruited critical talent and taught them everything I know about how to make a startup successful for the public good.
[00:10:52] All caps: they stole. Then they stole the charity. So there were times when Musk would get sort of frazzled on the stand, based on the journalists who were [00:11:00] in the room giving their explanations of kind of what was happening. So he would be getting grilled and he would get kind of annoyed, and this is what he would always come back to.
[00:11:07] It's like, regardless of what you think about me, regardless of whether or not my arguments hold up, or maybe I'm contradicting myself, at the end of the day, they stole a charity and you can't set that precedent. Which is a rather convincing argument, and he's obviously been coached to, like, just keep coming back to that.
[00:11:22] You have to convince the jury that, regardless of all these other things, they stole a charity. But he got reprimanded for that, with the judge saying, answer the actual question, stop repeating this, like, talking point about stealing a charity. So, I don't know, like, that's wild. And then there was a good MIT Technology Review summary article, Mike, that I know you and I both read.
[00:11:43] I'll just grab a couple excerpts from that one. So it says, OpenAI's lawyer William Savitt, who once represented Musk and his electric car company, Tesla, which is an interesting side note, countered that Musk was never committed to OpenAI being a nonprofit, and instead was suing to undermine his competitor.
In [00:12:00] 2017, Musk and other OpenAI co-founders discussed creating a for-profit subsidiary to raise enough capital to build AGI, powerful AI that can compete with humans on most cognitive tasks. Musk wanted a majority interest in the subsidiary and the right to choose a majority of the board members.
[00:12:16] He also pitched having Tesla acquire OpenAI. He then left OpenAI in 2018. Now, we've covered all that before on the podcast, but it's a good synopsis. And then I thought the other interesting thing he said, Mike, was, Musk claimed that xAI was not a real competitor to OpenAI. Quote, we're not currently tracking to reach AGI first, he told the jury. That's a very narrow description of what a competitor is.
[00:12:42] Right, right. Just because you're not gonna get to AGI first doesn't mean you aren't trying to, and you aren't competing with them. So yeah, just fascinating. Regardless of how you feel about the different people involved, I mean, there's, you know, very extreme feelings about a lot of these people, a lot [00:13:00] of these companies. But it's just wild to watch it play out, and the fact that it's really happening and at trial. And lots more to come, I think, this week as Brockman takes the stand. I'm sure they're gonna grill him on his personal journal entries.
[00:13:12] Yeah. Which is gonna get really awkward for everybody. So yeah, worth noting. But again, our main interest here is what does this mean to OpenAI, and then thereby what does it mean to companies that are built around OpenAI, and what does it mean to the economy as a whole, if something actually happens to OpenAI and they can't IPO and they can't pay their bills and they can't build all the data centers that are boosting up the economy and where this CapEx is going.
[00:13:37] Like it's, there's lots of layers to this story.
[00:13:40] Mike Kaput: Yeah. I don't know what the actual probabilities are of the outcomes here, but it is unfathomable to me. If you did see an outcome where Sam Altman and Greg Brockman are removed and they unwind this thing, it would be insane. It would have huge implications for every single business built on ChatGPT.
[00:13:58] Paul Roetzer: Yeah, I can't, I can't even, I haven't tried [00:14:00] to process that yet.
[00:14:00] Mike Kaput: I don't know if that's at all likely, but even if there's a very small chance, that would be catastrophic.
[00:14:06] Paul Roetzer: Well, yeah, and that's why I was saying up front, even if it's not that extreme and there's, like, some victory, yeah, some small victories in here, and I don't know what those would be, but it's possible there could be disruption to OpenAI even without the, like, paying the $134 billion fine and removal of Sam and Greg.
[00:14:25] It could be just enough to slow down their momentum towards an IPO, and they are burning money really, really fast. Like, they can't afford to not IPO on the timeline they're tracking toward, otherwise they're gonna have to go raise tens of billions, hundreds of billions of dollars in the private markets.
[00:14:41] So either way, I mean, unless they just come out of this totally clean, it's probably gonna be disruptive in some way.
[00:14:48] Mike Kaput: Definitely.
[00:14:49] OpenAI and Microsoft Revise Their Partnership
[00:14:49] Mike Kaput: Well, in our second topic this week, OpenAI also has some other changes that have been happening, because OpenAI and Microsoft announced a sudden amendment to their partnership this past week.
[00:15:00] So OpenAI CEO Sam Altman wrote on X, quote: We have updated our partnership with Microsoft. Microsoft will remain our primary cloud partner, but we are now able to make our products and services available across all clouds. He added that OpenAI will continue to provide Microsoft with models and products until 2032 and a revenue share through 2030.
[00:15:22] Microsoft's official blog post laid out the new terms in five points. One, Microsoft remains OpenAI's primary cloud partner, with OpenAI products shipping first on Azure, unless Microsoft cannot or chooses not to support the necessary capabilities. Two, OpenAI can now serve all of its products to customers across any cloud provider.
[00:15:40] Three, Microsoft's license to OpenAI's intellectual property on models and products extends through that 2032 mark, but is now non-exclusive. Four, Microsoft will no longer pay that rev share to OpenAI. And five, OpenAI continues paying a 20% rev share to Microsoft through 2030, now subject to a total [00:16:00] cap.
[00:16:00] Now interestingly, this is not spelled out directly in the announcement, but the amended agreement also removes that so-called AGI clause we've talked about in the past, the provision that would've ended Microsoft's license if OpenAI declared it had reached artificial general intelligence. So Microsoft at the moment retains approximately 27% of OpenAI and remains the company's largest individual shareholder.
[00:16:24] So Paul, I don't know about you. Like this seems pretty sudden, doesn't it? I mean, this was basically rewriting their partnership, yet both companies released barely any information about this.
[00:16:36] Paul Roetzer: It was, it did seem like the announcement was probably rushed, just in how simple and almost verbatim the two companies were in their presentation of the information.
[00:16:50] So it was just, like, I mean, the post, Mike, is like 300 words. There's no, it is just straight up. So interestingly, the way I approached this was, this morning when I was getting ready, I was [00:17:00] like, I wanna go through the historical context here. And I was gonna go back through a bunch of past episodes where we talked about this relationship, but I figured it's actually easier to just use AI Mode in Google and just, like, you know, start having a conversation.
[00:17:10] So I pulled a few relevant links here that I think are really interesting, specifically on the AGI point, but then ironically it ties to the Musk versus OpenAI lawsuit right now. So, Reuters, November 2024. We'll put all the links to this in the show notes, if you're curious, as I'm pulling on these threads.
[00:17:26] So, Reuters, November 2024: billionaire entrepreneur Elon Musk expanded his lawsuit against ChatGPT maker OpenAI, adding federal antitrust and other claims and adding OpenAI's largest financial backer, Microsoft. Musk's amended lawsuit, filed on Thursday night in federal court in Oakland, said Microsoft and OpenAI illegally sought to monopolize the market
for general artificial intelligence and sideline competitors. So again, when we've been talking about Musk versus OpenAI, we haven't really been getting much into the fact that he's also suing [00:18:00] Microsoft. Right? Then Reuters, January 2026: Elon Musk is seeking up to 134 billion from OpenAI and Microsoft, saying he deserves the wrongful gains that they received from his early support.
[00:18:14] OpenAI gained between 65.5 and 109.4 billion from the billionaire entrepreneur's contributions when he was co-founding what was then a startup, from 2015, while Microsoft gained between 13 billion and 25 billion, Musk said in a federal court filing. So then I went back, Mike, and just kind of pulled a few of the key announcements from Microsoft and OpenAI through the years, and I think these are really relevant, especially on the AGI thing again.
[00:18:43] So, July 22nd, 2019. We are now exactly two years after the invention of the transformer, so this is two years after the Attention Is All You Need paper came out from Google Brain, and we are three and a [00:19:00]
[00:19:15] So the reason I wanna share this is like, think about how they're presenting this partnership throughout this last six years and then where we are today. So Microsoft says, multi-year partnership, founded on shared values of trustworthiness and empowerment, and an investment of 1 billion from Microsoft will focus on building a platform that OpenAI will use to create new AI technologies and deliver on the promise of AGI.
[00:19:38] Now keep in mind, OpenAI was working on GPT-1, like, language models at that time. So they were now seeing the early progress of LLMs, in essence, like the very early formation, but we as a society didn't necessarily know that yet. Microsoft and OpenAI, two companies thinking deeply about the role of AI in the world and how to build [00:20:00] secure, trustworthy, and ethical AI to serve the public, have partnered to further extend Microsoft Azure's capabilities in large-scale AI systems.
[00:20:09] Through this partnership, the companies will accelerate breakthroughs in AI and power OpenAI's efforts to create AGI. Over the past decade, innovative applications of deep neural networks, coupled with increasing computational power, have led to continuous AI breakthroughs in areas such as vision, speech, language processing, translation, robotic control, and even gaming.
[00:20:31] Modern AI systems work well for the specific problem on which they've been trained, but getting AI systems to help address some of the hardest problems facing the world today will require generalization and deep mastery of multiple AI technologies. OpenAI and Microsoft's vision is for AGI to work with people to help solve currently intractable multidisciplinary problems, including global challenges such as climate change, more personalized healthcare and education.
So again, [00:21:00] as I'm reading this, like, put yourself in the 2019 framework: we don't know about LLMs yet, generally. Mm-hmm. Then there's two quotes. The first, I think, is from Satya: the creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity.
[00:21:19] Nadella. Altman, CEO of OpenAI, said: Our mission is to ensure that AGI technology benefits all of humanity, and we're working with Microsoft to build the supercomputing foundation on which we'll build AGI. We believe it's crucial that AGI (just count the times they're saying AGI) is deployed safely and securely, and that its economic benefits are widely distributed.
[00:21:39] We are excited about how deeply Microsoft shares this vision. And then Satya added: AI is one of the most transformative technologies of our time and has the potential to help solve many of our world's most pressing challenges. By bringing together OpenAI's breakthrough technology with new Azure AI supercomputing technologies, our ambition is to democratize AI, while always keeping AI [00:22:00] safety front and center so everyone can benefit.
[00:22:02] Relatedly, OpenAI published on that same day, which again is just really fascinating context: each year since 2012, the world has seen a new step function advance in AI capabilities, though these advances are across very different fields, like vision in 2012, simple video games in 2013, machine translation in 2014.
[00:22:26] Complex board games 2015, speech synthesis 2016. That was the year I started marketing a institute, by the way, like for context of when we really started like investing heavily in this area. Image Generation 2017, robotic control 2018 and writing text 2019. They're all powered by the same approach.
[00:22:47] innovative applications of deep neural networks (deep learning) coupled with increasing computational power. But still, the AI systems we're building today involve a lot of manual engineering for each well-defined task. In [00:23:00] contrast, an AGI will be a system capable of mastering a field of study to the world-expert level, and mastering more fields than any one human. An AGI working on a problem would be able to see connections across disciplines that no human could.
[00:23:13] We want AGI to work with people to solve currently intractable, multidisciplinary problems (they've got the same talking points), including global challenges such as climate change, affordable high-quality healthcare, and personalized education. They go on to say: we believe that the creation of beneficial AGI will be the most important technological development in human history.
[00:23:33] Fast forward to January 2023. We are now a month-and-a-half-ish before the introduction of GPT-4. So we now have ChatGPT; we've had that moment two months earlier. And now Microsoft and OpenAI internally have GPT-4. They're showing it to Bill Gates, they're getting all the momentum going, and they're about to unveil GPT-4 to the world.
[00:23:56] So they said, January 23rd, 2023: Today we are announcing the third [00:24:00] phase of our long-term partnership with OpenAI through a multiyear, multibillion-dollar investment to accelerate AI breakthroughs and ensure these benefits are broadly shared. This agreement is focused on supercomputing at scale, new AI-powered experiences, and Azure as exclusive cloud provider.
[00:24:15] That's a real important term here: OpenAI's exclusive cloud provider. Azure will power all OpenAI workloads across research, products, and API services. That later becomes the sticking point. Sam, in an abbreviated quote, then says: The past three years of our partnership have been great. Microsoft shares our values, and we are excited to continue our independent research and work toward creating advanced AI that benefits everyone.
[00:24:37] And then October 28th, 2025, Mike. So this is just, I don't know, six months ago: The next chapter of the Microsoft-OpenAI partnership. So this is the prelude to what we just learned this past week. Since 2019, Microsoft and OpenAI have shared a vision to advance AI responsibly and make its benefits broadly accessible.
[00:24:58] [00:25:00] What began as an investment in a research organization has grown into one of the most successful partnerships in our industry. As we enter the next phase, we've signed a new definitive agreement that builds on our foundation, strengthens our partnership, and sets the stage for long-term success. Now, the reason I wanna focus on this one, Mike, is because, again, this was six months ago.
[00:25:19] Listen to the focus on AGI. Yeah, so they talk about Microsoft supporting OpenAI's effort to move to a public benefit corporation. Microsoft holds an investment in OpenAI (this is verbatim) valued at approximately $135 billion, roughly 27% of the company. They said the agreement preserves key elements that have fueled the successful partnership, meaning OpenAI remains
[00:25:45] Microsoft's frontier model partner, and Microsoft continues to have exclusive IP rights and Azure API exclusivity until AGI. So this is six months ago; AGI is still the clause. They said, here's what has evolved: [00:26:00] once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel. Up until October 2025,
[00:26:09] OpenAI's board determined when AGI had been reached, and once AGI was declared, Microsoft lost rights to the technology. So that was a very fundamental thing, and now they're saying an independent panel will decide it. Microsoft's IP rights for both models and products are extended to 2032 and now include models post-AGI, with appropriate guardrails. And Microsoft's IP rights to research, defined as confidential methods used in the development of models and systems, will remain until either the expert panel verifies AGI or 2030.
[00:26:41] Then they keep going about AGI, AGI, AGI. And then all of a sudden, six months later, AGI is nowhere to be found in the announcement. It's removed from the agreement, and everything is basically changed. So what happened between October of '25 and April of '26, they don't [00:27:00] get into, but whatever it was, Wall Street didn't like it.
[00:27:04] So Microsoft stock falls 5% as soon as they announce this deal. And then the part I thought was the most fascinating: Sam Altman posts on X, April 27th, 9:24 AM, while the trial's going on with Elon Musk: We have updated our partnership. Microsoft will remain our primary cloud partner, but we are now able to make our products and services available across all clouds.
[00:27:25] So the exclusivity is gone. We'll continue to provide them with models and products, and rev share through 2030. So products till 2032, rev share through 2030. Then the one that really caught my attention: Andy Jassy, the CEO of Amazon, tweets three hours later: Very interesting announcement from OpenAI this morning.
[00:27:44] We're excited to make OpenAI models available directly to customers on Bedrock in the coming weeks, alongside the other things they've got going on. And then the next day, the official announcement from Amazon: we are now deepening our collaboration between AWS and OpenAI, and we're pushing forward [00:28:00] together, da, da, da.
[00:28:01] And it's like, geez. Oh man. What a complicated relationship. And I keep thinking, and we've said this many times on the show, there must be so much more to this story. It seems like it's become very combative, but they're putting on a good face publicly. It just seems like all the goodwill between Satya and Sam, and Microsoft and OpenAI, is just not there.
[00:28:27] I don't know. And we saw it coming last year, when Microsoft had made this big commitment to data center build-out with OpenAI and then backed away from it; they were obviously starting to pull back. And the real friction point became: Sam and OpenAI have these massive ambitions to build out energy and data centers, the infrastructure for where they think intelligence, and the demand for it, will be in 2030 and beyond.
[00:28:50] And Microsoft was no longer willing to make that bet with them at that level. They didn't necessarily share the [00:29:00] risk appetite for the amount of leverage it was gonna take, CapEx spending and beyond, to put the infrastructure in place that Sam wanted. And so there was no way for them to move forward.
[00:29:09] But they have a 27% stake in OpenAI; they bet everything on it. They still don't have their own competing models internally that they can replace them with, so Microsoft needed to go do deals with Anthropic and others. I don't know. Ten years from now, maybe sooner, I can't wait for the inside story of this relationship to come out, because the negotiations between these companies have to be on a whole other level from the glimpses we're getting publicly.
[00:29:36] Mike Kaput: I can't even imagine. It sounds like it got kind of messy behind the scenes.
[00:29:40] Paul Roetzer: Yeah, I mean, I guess kudos to them on the PR aspect that they're managing to keep a lot of the bitterness if it exists.
[00:29:47] Mike Kaput: Right.
[00:29:48] Paul Roetzer: Internalized for now. But I'm sure they have pretty strict clauses in these partnerships that they can't disparage each other, and things like that.
[00:29:56] Like it's, the NDAs are extremely tight, I'm [00:30:00] sure, on what they're allowed to say publicly about how the partnership is really going.
[00:30:03] Mike Kaput: And Satya has a little more discipline on X than some other AI players. So, you know, he's not mouthing off.
[00:30:09] Paul Roetzer: Well, yeah, I think he learned his lesson. He learned his lesson when he made the 2023 comments about making Google dance.
[00:30:15] That didn't age well. And he's very disciplined overall; that's why I was always surprised by that comment. He kind of let a little ego get the better of him. I don't think that's normal for him from a communication standpoint.
[00:30:33] Big Tech Earnings
[00:30:33] Mike Kaput: All right, our third big topic this week: Alphabet, Meta, Amazon, and Microsoft all reported earnings this past week. Each of their cloud and infrastructure businesses showed robust AI-driven growth, and there was lots of commentary around CapEx going up, not down, from these companies. So, in brief: Alphabet posted Q1 revenue up 22% year over year to just over $109 billion.
[00:30:56] Google Cloud grew 63% and crossed $20 [00:31:00] billion in quarterly revenue for the first time. CEO Sundar Pichai told analysts that they are compute-constrained in the near term and said cloud revenue would've been even higher if they were able to meet demand. They raised Alphabet's overall 2026 CapEx guidance to between $180 and $190 billion and said 2027 CapEx will be significantly higher.
[00:31:23] Microsoft's AI business surpassed a $37 billion annual run rate, up 123% year over year. Azure and other cloud services grew 40%, ahead of the company's own guidance. Microsoft 365 Copilot crossed 20 million paid commercial seats, and CEO Satya Nadella said weekly engagement on Copilot is now at the same level as Outlook.
[00:31:47] Microsoft guided to roughly $190 billion in full-year CapEx, citing soaring memory costs. Meta posted just over $56 billion in Q1 revenue, up 33% year over [00:32:00] year. And they also raised their CapEx guidance significantly, from between about $115 billion and $135 billion up to $125 to $145 billion. CFO Susan Li attributed the increase to higher memory component pricing and additional data center spending to support future demand.
[00:32:21] And then Amazon Web Services grew 28% year over year to $37.6 billion, the fastest growth rate it's seen in 15 quarters. CEO Andy Jassy said AWS's AI revenue run rate is now over $15 billion, and they have hit a backlog of $364 billion, not including a recently announced 10-year Anthropic commitment to spend over a hundred billion dollars on AWS.
[00:32:48] Now, interestingly, Jassy also talked about agentic AI specifically. He told analysts that most of the value companies derive from AI will be through agents. So Paul, [00:33:00] I mean, the CapEx stuff is out of control. Every single one of these companies is spending so much on it, and it sounds like they're running into some serious compute constraints and rising costs there.
[00:33:12] Paul Roetzer: Yeah, and I think Google is the only one that maybe has a light at the end of the tunnel in terms of their ability to control the supply of compute. That's one of the reasons we've made the bull case for Google in the whole AI race many times on the show, and it's largely because of the infrastructure that they've been able to build over the last
[00:33:32] two and a half decades. Yeah. So I did a quick look at what the stocks have done over the last week, just a one-week period, to look at that segment: Meta down 10%, Microsoft down 1%. The Nasdaq, by the way, is overall plus 1.4%, just to level set. So Microsoft down 1%, Amazon plus 2%, Apple plus 5.3%, Alphabet slash [00:34:00] Google plus 12.4%.
[00:34:02] Hmm. And when you look at the last year, the Nasdaq is up 41% over the last year; Alphabet is up 128%. Now, we do not provide investing advice on the show. But if you go back and listen, a year ago was probably around when I was making the very public case for betting on Google, if you're gonna look at the companies that you have high confidence are gonna do very well.
[00:34:32] Google has a lot going for it that Anthropic and OpenAI do not. Yep. So as the startups, obviously OpenAI and Anthropic are the sexy ones; everybody's anxious for those IPOs. But when you look around at the data, the products, the distribution, the infrastructure, the chips, the talent, with Alphabet,
[00:34:53] specifically Google DeepMind and Google Cloud, it's very hard to [00:35:00] see a future where they are not a dominant player in this space. And even when Elon Musk was asked last year, hey, who's gonna do well? He's like, well, Google's gonna be a dominant player. So yeah, I think that was the one that stood out to me.
[00:35:10] As the market seems to be starting to realize, and has over the last year: while there are ebbs and flows in terms of which model is best and which agent harness is best, and all this stuff, at the end of the day, Google's runway is pretty significant as long as the demand for intelligence grows, which is a pretty fair assumption.
[00:35:31] So yeah, it was really those numbers that jumped out at me. One of the things, especially from a marketing background perspective, Mike, one of the great unknowns has been: what would AI do to search? Obviously Google makes a lot of money from their advertising business and from search.
[00:35:47] And so there were these assumptions that ChatGPT would replace people going to Google, and that does not seem to be the case. They said search and other advertising revenue grew 19%. Hmm. And so their AI Mode and AI Overviews seem to be [00:36:00] working. People seem to be adopting them, and that's driving growth.
[00:36:03] One of the other numbers I thought was crazy, this is from a post Sundar did about the earnings: AI models have great momentum. Our first-party models now process more than 16 billion tokens per minute via direct API use by our customers, up from 10 billion last quarter. And then it said the momentum is accelerating: usage of our models over the past 12 months, 330 Google Cloud customers.
[00:36:32] So 330 individual Google Cloud customers each processed over 1 trillion tokens. 35 of them reached 10 trillion token milestones. Oh,
[00:36:42] Mike Kaput: wow.
[00:36:43] Paul Roetzer: So just the demand. Like we keep saying, the demand for intelligence: when it was just chatbots, it was a lot. The reason I kept thinking that Wall Street was underestimating the value of these AI companies was because they were assigning a value based on demand [00:37:00] for chatbots, which don't require lots of tokens.
[00:37:02] Mm-hmm. But once you started using reasoning models, image generation, video generation, and now agents, the token demand for those forms of AI is so vast compared to pure chatbots. And it was like Wall Street was just oblivious to it. And now they're starting to figure out that the demand for tokens is basically infinite, and that the supply of the chips and the data centers and the energy is nowhere near where it needs to be.
[00:37:30] And so now it's like, whoa, okay, maybe we did undervalue Nvidia and Google and others. And that's why everyone's racing to get in with Anthropic, with secondary offerings and things like that: they're trying to get in on what's gonna be a multi-trillion-dollar company. One other quick note:
[00:37:45] Google has their I/O developer conference May 19th and 20th; expect new models that week. Yeah. There's lots of chatter online now that new Gemini models are being tested, and so I would expect we're gonna get a new version of [00:38:00] Gemini later this month.
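The token-throughput figures Paul quotes scale up fast. Here's a quick back-of-envelope check: the 16-billion-tokens-per-minute and 1-trillion-token numbers come from the quoted earnings post, while the daily and yearly totals are our own derived arithmetic (assuming a constant rate), not Google's figures.

```python
# Figures quoted from Sundar's earnings post.
tokens_per_minute = 16_000_000_000            # direct-API throughput
per_customer_milestone = 1_000_000_000_000    # 1 trillion tokens

# Derived scaling (simple arithmetic, assumes a constant rate).
tokens_per_day = tokens_per_minute * 60 * 24   # ~23 trillion per day
tokens_per_year = tokens_per_day * 365         # ~8.4 quadrillion per year

# At the aggregate rate, the whole direct API produces one customer's
# trillion-token milestone in about an hour.
minutes_per_trillion = per_customer_milestone / tokens_per_minute

print(f"{tokens_per_day:,} tokens/day")       # 23,040,000,000,000 tokens/day
print(f"{minutes_per_trillion:.1f} minutes")  # 62.5 minutes
```

The point of the sketch is the one Paul makes above: chatbot-era valuations assumed token demand orders of magnitude below what reasoning models, video generation, and agents actually consume.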
[00:38:02] Mike Kaput: All right, Paul, before we dive into rapid fire, one other announcement.
[00:38:05] This episode is also brought to us by MAICON, the Marketing AI Conference, now in its seventh year, taking place October 13th through the 15th here in our home base of Cleveland, Ohio. The conference is going to bring together more than 2,500 marketers, business leaders, and AI leaders this year, and they're all focused on one thing: how to actually make AI work inside your organization.
[00:38:29] So we've already announced two keynotes. Karen Hao, author of Empire of AI, is back. She was our very first keynote in 2019, and she's returning with a deeper story about how money, ideology, and power shaped OpenAI, and why this matters to every business leader right now. We're also featuring Dan Slagen, SVP of Marketing at Zapier.
[00:38:51] He's also back on the MAICON stage, bringing us a practical, grounded view on where marketing is headed next. And we are adding new speakers basically every week. So [00:39:00] go check out the website often. You can check that out and register at MAICON.ai. That's MAICON, M-A-I-C-O-N dot ai. You can also use the code POD100
[00:39:14] to save a hundred dollars off our current ticket rates.
[00:39:17] Paul Roetzer: I'll say, very soon we'll be announcing a couple more very significant keynotes I'm extremely excited about. And then the full agenda, like 90 to 95% of the agenda, should go live later this month, so the whole thing will be up there.
[00:39:32] But yeah, we are, we're very excited about the speaker lineup.
[00:39:36] Mike Kaput: I can't wait.
[00:39:39] Paul Roetzer: Yeah.
[00:39:39] Anthropic Eyes $900B Value
[00:39:39] Mike Kaput: All right, let's dive into some rapid fire this week, Paul. First up: Anthropic is weighing offers for a fresh funding round at a valuation of more than $900 billion, according to Bloomberg, which would more than double the company's current valuation and leapfrog OpenAI as the world's most valuable AI startup.
[00:39:57] The company is entertaining offers but has not accepted [00:40:00] any; discussions remain at a very early stage. They had previously resisted multiple inbound proposals at $800 billion or higher. This coincides with Anthropic's broader fundraising ramp as it hunts for more infrastructure to meet this explosion of demand for Claude.
[00:40:18] Bloomberg has separately reported that Anthropic is considering an IPO as soon as October. This new round is going to land on top of a series of major existing commitments: Google recently committed a fresh $10 billion to Anthropic, notably at a $350 billion valuation, with up to $30 billion more contingent on performance targets.
[00:40:41] Now, for comparison here, OpenAI was most recently valued at $852 billion in a funding round completed in March. So Paul, if Anthropic actually closes this round, they basically become the top dog in terms of valuation in the AI world. Does that change anything about how [00:41:00] companies plan moving forward, or how you see the competitive race to AI and AGI and beyond playing out?
[00:41:07] Paul Roetzer: It's interesting. I threw up a totally random, informal poll on LinkedIn and X, I think it was over the weekend. And I just said, hey, Anthropic and OpenAI both IPO this year; you've got a hundred thousand to invest, you're gonna let it ride for five years, and you gotta put all of it in one of these companies.
[00:41:23] Which one are you betting on? And it was, again, completely informal, but it was resoundingly Anthropic. Wow. And I think it's just because of the traction they've got: outside of the government issues, the lack of drama, the lack of turnover, the fact that a bunch of tech executives are leaving very prominent CTO and CIO positions at major companies and going to work at Anthropic.
[00:41:50] So they just have so much momentum right now. And I'm starting to think their IPO is gonna hit $2 trillion. [00:42:00] Six months ago, saying a trillion would've seemed absurd. But the run rate they're at, and the velocity at which they're adding revenue every month, is just unparalleled in history.
[00:42:10] And so, I don't know, it's just amazing to watch this happen. The one highlight I'll put here, Mike, is just the Google partnership. Yeah. So there was a CNBC article, and I think it was previously reported by Bloomberg, but Anthropic said the agreement expands on a longstanding partnership between the two companies.
[00:42:29] So they announced the up-to-$40 billion that you had referenced. Yeah. Earlier this month, Anthropic secured five gigawatts' worth of computing capacity as part of an announcement with Google and Broadcom that will start to come online next year. Anthropic could decide to add additional gigawatts of compute in the future.
[00:42:47] Google provides access to Anthropic's models through its cloud division, which competes with Amazon, as we talked about, and Microsoft Azure. Meanwhile, Google's Gemini is competing with Anthropic in the market for AI models and services. That's why it's always [00:43:00] so weird: Gemini and Anthropic are directly competing, and yet they're massive partners on the cloud side.
[00:43:05] And then it said the relationship between the two companies dates back to 2023, when Google invested $300 million in the AI lab for a 10% stake, one of the greatest venture investments in history. Months later, Google poured in another $2 billion. Ahead of Friday's announcement, Google's investment in Anthropic exceeded $3 billion, and it reportedly owns a 14% stake in the company.
[00:43:29] Wow. So, just wild. When Anthropic goes public, Google's stake in that company is gonna be bigger than probably all but, like, 50 companies in the world. Just their stake in Anthropic is gonna be worth so much money. It's just wild to consider.
[00:43:47] Mike Kaput: Yeah. The scale is staggering here.
[00:43:49] Paul Roetzer: Yeah. So just crazy.
[00:43:52] Mike Kaput: Yeah.
[00:43:53] Trump's Anthropic Reversal and Mythos Fight
[00:43:53] Mike Kaput: All right, so next up, another issue related to Anthropic. We have covered the standoff between the Pentagon and Anthropic several times on prior episodes. This is when the Pentagon issued the unprecedented supply chain risk designation after Anthropic refused to allow Claude to be used for mass domestic surveillance or to develop fully autonomous weapons.
[00:44:13] So we've had some new developments here in the past week or so. Axios reported that the White House is now drafting executive guidance that would let civilian agencies bypass the designation and onboard new Anthropic models, including Mythos, the company's really powerful new cyber-focused model, or at least a model
[00:44:32] that happens to be very good at cybersecurity issues. Federal agencies are reportedly clamoring for access to Mythos, and the NSA is already using it. As we talked about earlier this month, White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent met with Anthropic CEO Dario Amodei in what both sides described as a productive meeting.
[00:44:52] The White House is also convening companies this week for a discussion of possible guidance that could walk back the Office [00:45:00] of Management and Budget directive blocking Anthropic in the government. Yet on top of all this, the Pentagon is still battling Anthropic in court. So Paul, we've been tracking this standoff for months now. Is the administration trying to save face and just bring Anthropic back into the fold here?
[00:45:20] Paul Roetzer: Yes. But the thing that jumped out at me here, Mike, is the idea of nationalization of these AI labs.
[00:45:27] Mike Kaput: Yeah.
[00:45:27] Paul Roetzer: I've alluded to this topic a couple times on the podcast, but there was an article in the Wall Street Journal that said: White House opposes Anthropic's plan to expand access to Mythos model.
[00:45:36] Anthropic recently proposed letting roughly 70 additional companies and organizations use Mythos, which would've brought the total number of entities with access to about 120. People familiar with the matter said administration officials told the company they opposed the move because of concerns about security, wink, wink.
[00:45:55] Some White House officials also worried that Anthropic wouldn't have access to enough [00:46:00] computing power to serve that many more entities without hampering the government's ability to use it effectively. That, to me, should have been in boldface in the article. I'm gonna read it again: some White House officials also worried that Anthropic wouldn't have access to enough computing power to serve that many more entities without hampering the government's ability to use it effectively.
[00:46:24] Mike Kaput: Mm-hmm.
[00:46:24] Paul Roetzer: When I read that, I was like, that sounds a lot like a prelude to nationalization of AI labs: when the government starts telling private companies they can or cannot do something with their models, in part because of the government's desire to use those models in a government setting.
[00:46:43] Ironically, the Atlantic came out with an article called The Nationalization of AI, which I highly recommend people read. I'm just gonna read a few excerpts, but go read the whole article. It said: What happens if Trump seizes AI companies? The administration [00:47:00] could exert much greater control over the industry.
[00:47:02] But how far would it go? Companies are beginning to entertain the possibility that they could cease to exist. This notion was, until recently, more theoretical. A couple of years ago, an ex-OpenAI employee named Leopold Aschenbrenner wrote a lengthy memo, Situational Awareness (which we've talked about on the show), speculating that the US government might soon take control of the industry. By '26 or '27, Aschenbrenner wrote, an obvious question will be circulating through the Pentagon and Congress:
[00:47:30] do we need a government-led program for AGI, an AGI Manhattan Project? He predicted that Washington would decide to go all in on such an effort. Aschenbrenner may have been prescient. Earlier this year, at the height of the Pentagon's ugly contract dispute with Anthropic, Pete Hegseth, the Secretary of Defense, warned that he could invoke the Defense Production Act, a Cold War-era law
[00:47:53] He reportedly suggested would allow him to force an AI company to hand over its technology on whatever terms the [00:48:00] Pentagon desired. The act is one of numerous levers the Trump administration can pull to redirect or even commandeer AI companies, and the companies have been giving the administration plenty of reason to consider doing so.
[00:48:11] Washington is getting antsy about the power imbalance. Over the past year, multiple senators have proposed legislation that would order federal agencies to explore, quote unquote, potential nationalization of AI. Murmurs of possible tactics abound, including more talk within the administration of the Defense Production Act.
[00:48:30] Did I say that right? Production? Yep. Yeah. Yep. After Anthropic's Mythos announcement, one person with knowledge of such discussions told us. Meanwhile, Silicon Valley is watching carefully. In recent weeks, Musk, OpenAI CEO Sam Altman, and Palantir CEO Alex Karp have publicly spoken about the possibility of nationalization.
[00:48:49] Lawyers who represent Silicon Valley's biggest AI firms are paying attention. In the most extreme scenario, top researchers from across the AI companies would be forced to work in SCIFs, secure [00:49:00] environments in the basement of the Pentagon, reporting to Hegseth. Computational capacity would be centralized under one nationalized mega-operation.
[00:49:08] The work would be locked down, and the focus would be primarily on defense applications, as opposed to the products made for businesses and individuals that dominate the market today. This version of a full-blown nationalization effort is highly unlikely, but that changes if a major global war breaks out or the economy collapses.
[00:49:27] Mm. During an emergency of historical scale, especially an emergency under the Trump administration, anything is possible. Drastic measures become easier to justify, both legally and politically. Couple other key notes here. Another possibility, slightly less extreme though still capable of remaking the industry: the government could regulate AI companies like it does utilities.
[00:49:48] Perhaps the most likely fate, though, for American AI companies is a future of soft nationalization. I think this is a very, very important concept based on what we just talked about: a world in which the government doesn't fully [00:50:00] control AI labs and their models, but instead enacts an escalating series of policies and establishes close partnerships with private companies to shape the technology.
[00:50:10] Even without legislation, the White House can easily exert greater authority over the industry. There's quite a lot of power the federal government can wield, Paul Scharre, an executive at the Center for a New American Security who previously did policy work at the Department of Defense, told us, and even more so if you have an administration that's willing to stretch the bounds of executive power, which we have. Anthropic's supply chain risk designation, a label that effectively bars the military from doing business with the company
[00:50:36] and that is typically reserved for companies with ties to foreign adversaries, was a clear example of the government flexing its muscles. So was the Biden administration's decision to block Nvidia from selling its most advanced AI chips in China in 2022. The Defense Production Act is one of the most salient tools available to Hegseth, and they've already basically threatened it.
[00:50:56] Actually pursuing it to control companies [00:51:00] would raise a lot of legal issues, but that hasn't stopped the Trump administration in the past. The final thing they said: the uncomfortable truth is that no private company should be trusted to unilaterally steer the future of AI development, but Americans should also have serious questions about whether government control is in their best interest, not least of all under an erratic and norm-shattering Trump administration.
[00:51:19] So, mm-hmm, just file the idea of nationalization away in your mind. I have a sinking feeling that we may be talking more about that as the year progresses, and I don't like the idea that we're gonna have to be talking about it. But with this soft nationalization, I honestly feel like we're there, and we're gonna look back and be like, oh wait, that started
[00:51:43] Mike Kaput: right
[00:51:43] Paul Roetzer: at this point.
[00:51:44] And now we're gonna start to see the ramifications of that. The government holds a lot of leverage. There are a lot of levers they can pull without actually doing the extreme things, and this administration has shown they're willing to pull levers. [00:52:00] I'll say,
[00:52:01] Mike Kaput: And if you would like one version of what soft nationalization looks like, you can look no further than the People's Republic of China, which routinely takes minority stakes in companies from a government perspective and weighs in on a lot of decisions as well.
[00:52:18] So it's interesting to see play out.
[00:52:20] Paul Roetzer: Well, the US just took a 10% stake in Intel last year.
[00:52:22] Mike Kaput: Exactly.
[00:52:22] Paul Roetzer: Like, we're seeing it happen already, and it's being put under other terms.
[00:52:28] Mike Kaput: Yeah. So, to your point, it may not be like outright compulsion to do all these things. Correct. But heavily influencing certain directions.
[00:52:36] Paul Roetzer: Yes. A lot of suggestions going forward,
[00:52:39] Mike Kaput: suggestions that aren't suggestions. Yeah. Yeah, exactly.
[00:52:43] Agents Gone Wrong
[00:52:48] Mike Kaput: Alright, next up we have kind of a cautionary tale about AI agents that's pretty brutal. Pocket OS founder Jar Crane published a viral postmortem on X this past week describing how an AI coding agent deleted his company's [00:53:00] entire production database and all of its backups in nine seconds.
[00:53:03] So Pocket OS is a SaaS platform for rental businesses. They basically help places like car rental businesses run their reservations, payments, and customer management. The agent they were using was Cursor running Anthropic's flagship Claude Opus 4.6 at the time. According to Crane, the agent was working on a routine task in a test environment, ran into a credentials problem, and decided on its own to quote unquote fix it by deleting a chunk of company data stored within Railway, Pocket OS's infrastructure provider.
[00:53:36] It went looking for an access key. The agent found one in an unrelated file and used it to issue a single command that wiped the data. Crane complained there was no confirmation prompt, no warning, no check that the data being deleted was for testing rather than production. That access key had been created for a small, specific job, but it turned out to have full permissions across the company's entire [00:54:00] account, including the ability to delete it.
[00:54:02] And Crane asked the agent afterwards to explain itself, and it produced what he described as a written confession, enumerating every safety rule it had violated, writing things like: I guessed instead of verifying. I ran a destructive action without being asked. I didn't understand what I was doing before doing it.
[00:54:19] And Crane said ultimately the rules the agent referenced match both Cursor's own published guardrails and his company's internal safety instructions, neither of which the agent followed. So, Paul, I mean, this is really extreme, and you can argue that maybe the company itself should have had things set up differently. But the fact remains, like, I don't know a single person who is fully confident in figuring out, how do I prevent this from happening to me?
[00:54:51] Paul Roetzer: Yeah. Yeah. So I shared this. This happened last week when I was in Denver doing the keynote at Acquia.
[00:54:57] Yeah.
[00:54:57] Paul Roetzer: and so I just kinda [00:55:00] real-time shared this as an anecdote to something, and there were just audible gasps from the audience. And I was like, yeah, it wiped out the production database in nine seconds.
[00:55:09] And you could just feel the air come out of the room. These are smart, technically minded people, so they knew what that meant and the significance. Yeah. I think the key takeaway, and I've been trying to stress this as a theme lately, is we can all vibe code apps and agents. Like, we can all build stuff now.
[00:55:24] We don't all have the knowledge of how to move these tools into production and then safely govern them in the public domain. Mm-hmm. Like, just because you can build something doesn't mean you all of a sudden are also an expert in how to take it live, especially when it starts collecting payment data and customer data.
[00:55:39] and so it's just like a cautionary tale. Like we're just very early and there are people who are taking a lot of risks and doing a lot of things. And it's cool and I'm glad there's people on the frontiers trying this stuff, but there's gonna be a lot of hard lessons learned. And you gotta know your role and whether you wanna be one of those people that is learning the hard lessons or [00:56:00] if you want to, like, the way I look at this now is I love vibe coding, like minimum viable products and prototypes.
[00:56:06] And then I love having technical partners who know what the hell to do with it once I've done that. And so that's how I think about what we're doing internally is like, rather than me spending months on a creative brief and saying, here's what I want it to do and here's examples of apps, I'm just gonna go build a sample app and then I'm gonna take it to my technical partners and I'm gonna say, here, can you build this for me safely and help us get it into the public domain?
[00:56:24] And so that's how I'm approaching agents and a lot of this stuff. Every week we read these stories, and I'm just more and more convinced that that's the right path for us at the moment.
[00:56:35] Mike Kaput: Yeah. Yeah. It sounds like it would've been the right path for this company too. I don't even know how, but I think they
[00:56:39] Paul Roetzer: were more technical.
[00:56:40] That's the thing, is like, I think they actually knew what they were doing and it still happened.
[00:56:43] Mike Kaput: Yeah. Yeah. I don't know how you recover from something like that.
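The failure Crane describes has a well-known shape: an access key minted for one small job that silently carried account-wide delete permissions, and a destructive command that ran with no confirmation. One standard guard against it is least-privilege credentials plus an explicit human confirmation gate on destructive verbs. Here is a minimal sketch of that idea in Python; every name in it (`ScopedKey`, `execute`, the environment labels) is hypothetical and not drawn from Pocket OS, Railway, Cursor, or Anthropic:

```python
# Hypothetical sketch: credentials scoped to an environment and a set of
# verbs, plus an explicit confirmation gate on destructive actions.
# All names are illustrative, not from any real platform's API.

DESTRUCTIVE_VERBS = {"delete", "drop", "truncate", "wipe"}

class ScopedKey:
    """An access key limited to specific environments and verbs."""
    def __init__(self, name, allowed_envs, allowed_verbs):
        self.name = name
        self.allowed_envs = set(allowed_envs)
        self.allowed_verbs = set(allowed_verbs)

def execute(key, verb, env, target, confirmed=False):
    """Run a command only if the key's scope covers it; destructive verbs
    additionally require an explicit human confirmation flag."""
    if env not in key.allowed_envs:
        raise PermissionError(f"key '{key.name}' is not valid for env '{env}'")
    if verb not in key.allowed_verbs:
        raise PermissionError(f"key '{key.name}' may not '{verb}'")
    if verb in DESTRUCTIVE_VERBS and not confirmed:
        raise RuntimeError(f"destructive '{verb} {target}' needs explicit confirmation")
    return f"{verb} {target} in {env}: ok"

# A key minted for one small test job: read/write in the test env only.
test_key = ScopedKey("ci-test-key", allowed_envs=["test"], allowed_verbs=["read", "write"])
```

The point is that the scope check and the confirmation gate live outside the agent, so a model that decides on its own to "fix" a credentials problem hits a hard wall instead of production data.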
[00:56:47] Myths We Tell Ourselves About AI and Jobs
[00:56:47] Mike Kaput: Alright, next up, this past week, Clara Shih, who is the founder of the New Work Foundation and a former Salesforce AI CEO, published a widely shared essay laying out what she calls six myths about [00:57:00] AI and jobs that we are telling ourselves. And her argument is that these myths are letting executives and policy makers kind of wait and see what's going to happen with AI instead of taking aggressive action to build the training, the safety nets, and the policy frameworks she says we very much need right now.
[00:57:16] So I'll just quickly touch on these myths that she writes about. We'll kind of dive into this a little more, but myth one is that AI layoffs are just a hangover from the cheap money, low interest rate era. She says research from Stanford last fall shows that even after controlling for macro factors, AI is still a primary driver of statistically significant cuts, especially to entry-level jobs.
[00:57:37] Myth two is the so-called Jevons paradox, this idea in economics that as a technology, in this case AI, makes work cheaper, demand for that work expands and creates more jobs. She counters that the supply growth tends to outpace demand growth, and that compresses wages. She cites London black cab drivers.
[00:57:57] Their real income has fallen 50% since [00:58:00] GPS and Uber commoditized their craft, even as overall ride demand has exploded. Myth three is the AGI timeline debate itself. She calls it a convenient substitute for the harder conversation about what we need to start building today. Basically arguing that, look, workforce displacement is happening regardless of whether or not we agree on what AGI means or if it's going to arrive.
[00:58:22] Myth four is that headline unemployment numbers tell the story. They miss more than 2.3 million underemployed recent grads, which she argues will compound into structural economic drag through depressed lifetime earnings, delayed household formation and reduced consumption. Myth five is this idea of just send people to trade school.
[00:58:40] She notes that while the trades do have demand, the Bureau of Labor Statistics projects only about 38,000 net new trade jobs per year nationally against, again, those 2.3 million underemployed grads. And myth six, finally, is this idea she says that AGI will bring great abundance for everyone.[00:59:00]
[00:59:00] She argues that productivity gains accrue to capital owners by default, and that the Industrial Revolution generated abundance, but its distribution required decades of labor organizing, progressive taxation, and social insurance. So Paul, basically she's outlining this argument that there are these really comforting things people fall back on when they're looking ahead at what disruption AI could cause.
[00:59:24] And, you know, she is not, I would say, someone that's like anti-AI or super skeptical of it. She's the ex-Salesforce CEO of AI. Like, what did you make of this? I thought it was really well done.
[00:59:35] Paul Roetzer: I thought it was excellent. I shared it and said, like, everybody should read it. And I keep honestly trying to find the counterpoints.
Like, I'm very open-minded that this is gonna work out really well, and I want it to. But people keep leaning on: everybody's going to, like, work in the trades, they're gonna do manual labor, or they're gonna become entrepreneurs. And that seems to be the [01:00:00] best explanation I hear from most tech leaders. It's like, well, everybody can just be entrepreneurs.
[01:00:04] And it's like, yeah, that's really freaking hard. Like, being an entrepreneur is not for everybody. It takes, yeah, will and vision and perseverance and a desire for, you know, other people's livelihoods to depend on you. Like, it is not for everyone. But I feel like in the last month, the PR efforts are ramping up by the tech leaders.
[01:00:26] To convince society that everything's gonna be great and this future of abundance is right around the corner. And they're either glossing over or straight up denying the negative impact on jobs. There was a Jensen Huang interview last week that was all over my feed on X. Like, everybody was retweeting it as, see, this is what's happening,
[01:00:43] where he was saying, like, it's not gonna, you know, be bad for jobs, it's gonna be amazing for jobs. So then there was an Ezra Klein editorial in the New York Times, and the headline was "Why the AI Job Apocalypse Probably Won't Happen." I was like, oh, cool. Ezra's generally [01:01:00] a pretty reasonable, rational thinker.
[01:01:01] Like, yeah, let me go see his editorial. What's he saying? Like maybe he knows something. I don't know. And so I went and read this one and it started off with a poll in March that found 70% of Americans think that AI will lead to fewer job opportunities for human beings up from 56% a year ago. 14 percentage points is not an insignificant jump.
[01:01:20] 30% say they're worried for their own jobs. Now, interestingly, we are about to release our state of AI for business research next week. Yep. Mike? Yeah, yeah.
[01:01:29] Mike Kaput: Next week,
[01:01:29] Paul Roetzer: May 14th, we're gonna release our State of AI for Business research. We had almost 2,100 respondents, and we actually asked the same question: What do you believe the net effect of AI will be on jobs?
[01:01:39] Over the next three years. This is a bit of a teaser for some of the data: more jobs will be eliminated, 71%. So it almost mirrors this other poll, which is crazy. More jobs will be created, 13%. AI won't have a meaningful impact on jobs, 4%. And "I don't know" was 12%. But then interestingly, we also asked about the impact they thought it would have on their jobs.[01:02:00]
[01:02:00] Yeah. And when asked about their own, well, only 21% expressed concern. So it's like, they all think more jobs are going away, but it's not gonna be my job. Yeah. But then, back to Ezra Klein's editorial. He said, you know, it's worth being cautious. Tech companies might be unwinding a hiring binge and telling the stock market the tale likeliest to excite or appease investors.
[01:02:19] The AI leaders might understand neural nets better than they understand labor markets, or they might have bought too deeply into their own marketing materials. But then Ezra went and talked to economists, and he said the economists I found are quite skeptical that mass joblessness is on the horizon.
[01:02:33] Now this isn't surprising to me at all. I've said on the podcast many times, I've talked to leading economists over the last, like six years and literally been laughed at that AI would have an impact on jobs. Like they thought it was an absurd thing to be thinking about. Mm-hmm. So I'm not always a hundred percent sure that economists are the right people to be asking about this, but regardless, I'm very, very open to the perspective and like I wanna be wrong.
[01:02:55] So he specifically said, under the heading "What will be scarce": Alex Imas, [01:03:00] an economist at the University of Chicago, tries to clarify the mistake most AI discourse makes. In his view, the answer to any question about the future economics of advanced AI begins with identifying what becomes scarce, Imas writes.
[01:03:13] But something is always scarce. People are looking at the economy as it exists and asking which tasks AI can do. They should be asking which jobs people won't want AI doing, or which services AI will make us want more of. So the premise is like, Hey, it's okay. Like there's gonna be other things that we can go do as humans.
[01:03:31] So Imas's story suggests a place where human labor might move amid mass automation: toward more human roles. But it's also possible that human labor won't need to move that much at all. And then Ezra kind of editorializes: every enthusiastic AI adopter I know is working harder than ever because there's more they can do.
[01:03:51] Whether they're working smarter is arguable. Studies differ on whether AI is making people more productive or simply giving them and their bosses the illusion of productivity. While I [01:04:00] don't believe full automation of the economy or even mass unemployment is likely, I don't totally discount the possibility.
[01:04:05] So this is what I'm talking about. Like, there's gotta be a rational middle. It's like, both can be true. Let's see, he goes on and says: AI is a different kind of technology than what has come before. Perhaps its flexibility and conversational nature will make it a substitute where previous tools have proven to be complements.
[01:04:23] What's likelier though is that AI doesn't take all or most of the jobs, but it does take some, and that strangely is the possibility we're least prepared for. I think this is a super important point. He said, A world where AI displaces 8 million workers might be harder to handle than a world where it displaces 80 million workers.
[01:04:44] A mass unemployment event would force a wholesale restructuring of the economy. He paralleled this to COVID: when something affects everyone, there's immediate action. When it affects a segment, then it's their problem, or it's like, eh, it's [01:05:00] not gonna affect everybody else, and so you don't feel obligated to move with urgency.
[01:05:04] So then he ended it with, when I'm feeling optimistic about the world, AI might make possible. I imagine a world in which we are richer than we are today and are encouraged to live more fundamentally human lives, doing more fundamentally human things. When I'm feeling pessimistic, I imagine something like that.
[01:05:20] Same world, but the wealth will be hoarded and we will value a depth of human connection that we no longer know how to provide. So yeah, go read both sources. I think it's really good stuff. And again, we're doing our best to provide all the perspectives possible here of what could be. Maybe the tech bros are right and it's a glorious future where we don't lose millions of jobs and everybody becomes entrepreneurs, and everybody else, you know, becomes plumbers and electricians and they're happy with those choices.
[01:05:51] Or maybe that's not what happens. Like it's, you know, maybe it's something in the middle, but.
[01:05:57] Mike Kaput: I will say, it's at least encouraging to see [01:06:00] more diverse perspectives and more people talking about this. Yeah. It's not enough, not by a long shot, but it's way better than it was six months ago or a year ago.
[01:06:09] I think
[01:06:10] Paul Roetzer: it's at least a weekly conversation.
[01:06:12] Mike Kaput: Yeah.
[01:06:12] Paul Roetzer: So that's good.
[01:06:15] Why Shopify CEO Tobi Lutke Says "Saying The Thing Matters"
[01:06:15] Mike Kaput: All right, so next up we've got an update from Shopify CEO Tobi Lutke, who posted a follow-up this past week to something we covered almost a year ago: his AI-first memo that he released to his company. He wrote that the original note quote made a tremendous difference inside of Shopify since then, and that everyone has agents as exoskeletons for their own creativity now and knows how to use them.
[01:06:43] Most importantly, and this is why we wanted to talk about this, he said this great quote, which is: saying the thing matters. He added that his note about people using AI at Shopify being mandatory seemed mildly controversial a year ago and seems obvious in retrospect. That's always [01:07:00] the best sign that you got something right and early.
[01:07:02] So that original memo was published on X in April 2025, after it started leaking; it was an internal memo to start. It said, quote, reflexive AI usage is now a baseline expectation at Shopify, and said learning to use AI effectively was a fundamental expectation of everyone at Shopify.
[01:07:21] Lutke wrote that opting out was not an option and that stagnation was slow-motion failure. That memo also laid out specific operational rules: AI usage questions would be added to performance and peer reviews. Teams would be required to demonstrate why they could not get a task done with AI before asking for more headcount or resources.
[01:07:40] Prototypes were to be dominated by AI exploration, and the rules applied to everyone, including the CEO and the executive team. And as we covered at the time, that memo prompted similar AI-first declarations from CEOs at Box, Fiverr, Duolingo, and a bunch of others in the weeks and months that followed.
[01:07:57] So Paul, I just wanna come back to that [01:08:00] phrase where he's just, you know, saying the thing matters. We've said that so many times, but it really seems like it had an impact at Shopify. Going all in really early on this.
[01:08:09] Paul Roetzer: It doesn't surprise me at all. You know, after that happened, I was referencing this a lot on stage during my talks, and you could just see a lot of heads nodding.
[01:08:16] It just made sense to be direct about this. I had published an AI-forward CEO memo template. I think I created it last year for AI Academy courses, but I put it on LinkedIn; we'll put the link to that in the show notes. I would just download that. Like, if your CEO hasn't publicly said this, or if you are the CEO and you have not publicly stated your beliefs about AI and what it means to the company and what your expectations are, you need to do it.
[01:08:42] I can't even imagine a company making it through 2026 that hasn't had the CEO clearly state the vision for AI and the future of work at the company. Like, just do it. So make that a priority, or if you have influence in the C-suite, push for that to happen. I've been personally working, I kind of alluded to this a couple [01:09:00] times, on an AI maturity assessment for months, really years, but intensely over the last couple months.
[01:09:08] And I've centered on eight pillars of business AI transformation. And I'll, I'll share more about this in the next month or two. We'll probably release something soon. But the first pillar that I devised is vision, and I'll just give you a few of the statements within the assessment related to vision.
[01:09:24] Leadership deeply understands AI, including the full capabilities and potential of leading generative AI systems. Leadership has shared a clear vision for the future of work and how AI will impact our people and organization. Leadership has set clear expectations for employee AI literacy and capabilities.
[01:09:40] Leadership models AI best practices through their own usage. And leadership is cultivating an AI-literate workforce. These are the foundations for success. Now, there are others within the vision category, but this is so fundamental, and so often when we go in and meet with organizations, this has not happened.
Like, none of those are true. And again, I don't know [01:10:00] how you build an AI-forward company, whether you're just starting out or you're trying to emerge as an AI-forward company from a legacy business. Yeah. If those statements aren't true, then you've got major problems. Like, you have to start there before you do everything else.
So, yeah, it was nice to see him say it. Like, sometimes saying the thing matters. I think it's a hundred percent true.
[01:10:19] Mike Kaput: Why do you think so many CEOs or leaders at this stage haven't said the thing that matters?
[01:10:24] Paul Roetzer: I don't think they understand AI deeply enough to understand the urgency.
[01:10:27] Hmm.
[01:10:28] Paul Roetzer: Like, if you understand what it's capable of and the change it's going to drive in your own business and in your industry and for your people, you would've done something last year.
[01:10:37] Hmm.
[01:10:37] Paul Roetzer: So I feel like. Until you have that moment where you realize that everything is different and it's not gonna come back and it's accelerating you, you're not gonna move with urgency. and so that's what it needs, it just needs that sense of urgency where the CEO, and again, like I'm, I am fully convinced if the [01:11:00] CEO isn't leading the charge, it's not going to work.
[01:11:02] Like, the CEO has to be the ringleader here. They have to be fully in this. And it cannot just be lip service that AI's important and we're gonna go through this transformation. It's like, no, you gotta lead the charge.
[01:11:20] AI's Public Backlash Problem
[01:11:20] Mike Kaput: So for our next topic, we saw two major pieces this past week.
[01:11:24] One from the New Republic and one from the New York Times that kind of talked about the same growing pattern, which is a populist backlash against AI gathering momentum across the political spectrum. Tech journalist Jasmine Sun, quoted in the New Republic, defined this worldview as one in which AI is viewed not as a normal technology, but as an elite political project to be resisted.
A thing manufactured by out-of-touch billionaires and pushed onto an unwilling public. And we have some polling that kind of backs that up. The Stanford 2026 AI [01:12:00] Index showed 73% of AI experts are positive about AI's long-term effect on jobs versus 23% of the general public. Another poll found 55% of Americans see AI as a force for harm rather than good.
[01:12:15] A March Gallup poll showed Gen Z's share that feel excited about AI fell from 36% to 22% in a year. The share who feel angry rose from 22% to 31%. We've also talked about some violent backlash, like how in April someone threw a Molotov cocktail at Sam Altman's San Francisco house.
[01:12:39] Three days earlier, an unknown perpetrator fired 13 shots into the home of Indianapolis Democratic Councilman Ron Gibson, who had supported a local data center project, and the person left a note reading "No data centers." The Times actually profiled a bunch of regular Americans who have started organizing against AI.
[01:12:56] There's a Texas evangelical pastor pushing for AI [01:13:00] regulation, a Boise musician who started a local chapter of a group called Pause AI after AI tools made songs with copyrighted music. There's a group of Indiana farmers who are suing to stop a data center being built 300 yards from their homes. And that group, Pause AI, has actually expanded from five active city groups last year to 30 today.
[01:13:20] And then we have Bernie Sanders himself kind of leading the charge on the populist left, telling the Times that given AI and robotics are going to impact every man, woman, and child in this country, one might think there'd be a massive debate in the US Congress. What does it mean? Where do we go? How do we deal with it?
[01:13:36] There has been minimal, minimal discussion. He said, so Paul, these numbers and sentiments are getting pretty grim. It does seem like the backlash is not only real but growing
[01:13:48] Paul Roetzer: And it's not gonna help that people are gonna fund the extremes. Yeah. You know, they're gonna try and do this for political points.
[01:13:55] And that was always my concern: once AI, you know, became political, [01:14:00] you're gonna push extreme views, and that never works out well for society. No. So, yeah, this one concerns me a lot. You can see it every week in the resources we look at, and, you know, I don't need to go to Google Trends to see the negative sentiment growing.
You can feel it, you can see it in the headlines. I don't know a way around this one. This was sort of an inevitability to me years ago. And unfortunately, you know, I think we're just heading into a point in society where people are gonna have very extreme views about AI, and usually in society that means we stop listening to each other. And Congress is not doing their job.
Mm-hmm. We'll talk about the Ben Sasse interview for 60 Minutes toward the end here, but I thought he very eloquently pointed out that the Senate's job in the United States is to work on three to five major things and to be collaborative and solve things that affect society, and they are not doing their [01:15:00] job right now.
[01:15:01] And I think, you know, there's no doubt AI and jobs should be front and center, and right now it's just gonna be used as a political game to win votes. Yeah. And that frustrates me. Yeah.
[01:15:16] AI Use Case Spotlight
[01:15:16] Mike Kaput: All right, so a little more positive news. Next up we have our AI use case Spotlight segment where every week we give you a quick look under the hood at the real AI use cases we're exploring, building, or deploying in our own work at SmarterX.
[01:15:28] So Paul, I've got one quick one to share and then definitely want to hear from you what you've been working on.
[01:15:33] Paul Roetzer: Go for it.
[01:15:34] Mike Kaput: So this past week I was lucky enough to speak at Experience Inbound, which is a Wisconsin marketing event run by our friends at Stream Creative and Weidert, two great agencies in that area, if you're in the market for a HubSpot or a marketing agency partner.
[01:15:48] there I gave a talk called 40 AI Tools in 40 Minutes, which was really fun. but what was cool was after that I actually sat in on several of the sessions at the event. One of them in particular from [01:16:00] Brian Brinkman, who's a good friend of ours and a partner at Stream Creative. And Brian taught a whole session on vibe coding.
[01:16:05] So he actually spent the session literally building real apps and websites live on stage in Google AI Studio, which is awesome. So while Brian was talking, I opened my laptop and decided to actually do a little more vibe coding. I've done some in the past, but not a huge amount.
[01:16:23] So this was a good opportunity to get a little more up to speed. I just kind of randomly took material that was on hand, which was my 40 tools talk. So what I did is I literally took the PowerPoint, with a bunch of scripts and information and notes, and dropped that all into Google AI Studio and asked it to build me an app that helps people find AI tools based on the advice in the talk.
[01:16:44] So it used those 40 tools as the core database. And what was really cool, Paul is like in like a minute. While Brian was speaking, Google AI Studio spun up an app that looked really good and also just like came up with a UX I just did not think of at all, [01:17:00] 'cause I was very non-specific. I was like intentionally vague.
[01:17:03] I'd been kind of imagining it would do a simple directory, which it did, and you can click into any of the 40 tools, see all their info, their pricing, all this great stuff. But what was really cool is that the main interface was a chat interface, presumably powered by Gemini. It came up with having a chat box where I could type in, like, hey, I'm looking for an AI tool for surveys, and it would just go pull every tool from my talk that fits in some way and tell you why, and show you a vendor card with the tool name, the vendor, the URL, info from my talk, pricing, et cetera.
[01:17:35] It was just really cool to see it come up with that. And it also just kind of made me think a little bigger. You know, we give these talks, and the 40 tools talk is technically about tools, but really it's kind of a Trojan horse for a methodology for thinking about technology. It's like a framework, it's a mental model.
[01:17:52] And that's really interesting, almost like IP or kind of proprietary ways of thinking about things that [01:18:00] do really well as like the brain behind an app, not just like content on slides. So I'm kind of thinking about content in a little bit of a different way now that anyone can vibe, code anything.
[01:18:10] Paul Roetzer: Yeah, I love that example.
Brian's awesome. Yeah, it's a great event they've been putting on; I've spoken at it a couple times as well. It's really good. Yeah, it makes me think, years ago, Mike, you and I built that, like, AI assessment tool back in 2018, 2019, where we were trying to match AI vendors to use cases based on rating systems.
[01:18:33] Yeah. And it's like, oh my God, the hundreds of hours that we spent trying to hack that tool we paid for to do that sort of stuff. And now, yeah, one of the best things you can put into a prompt is, like, make it interactive, and just let it go, let it develop the user experience.
[01:18:48] Like, it's so amazing. It constantly just shocks me. I had a somewhat similar experience last week. I was in Denver for that talk, and I was at dinner Monday night, just sitting by [01:19:00] myself, eating, and I had Claude building a slide deck for me on the side, and it was just shocking. Like, I'm sitting there watching the reasoning.
[01:19:08] It's like, oh, I'm gonna fix this margin now. And, oh wait, I did this wrong, I'm gonna go back and redo it. And it was just one of those surreal moments where you're like, what? This is just alien technology. It's like working with a senior designer and watching them think. And I'm not that, like, I'm not a visual person, to be able to have this vision of what to create and then to go create it.
That's not my superpower. Yeah. But to have that superpower on my phone is so wild. And then the other thing I would mention: I shared this post that ended up getting quite a bit of engagement. I'd originally written it Saturday morning for the Exec AI newsletter that I publish every Sunday, and
I didn't have anything; I didn't know what to write Saturday morning. Like, I woke up, and it had been a very, very crazy week for me mentally and personally and stuff. And so I just, like, I [01:20:00] started, I literally just typed: I'm struggling to keep up with AI. It was just how I was feeling at the moment.
[01:20:04] Mike Kaput: Yeah.
[01:20:04] Paul Roetzer: And then I was like, all right, there, I said it. Like, now here's what's going on. And I kind of explained that, you know, despite all of our efforts and the podcast and the speaking and running an AI-native company and researching this area for 15 years, I'm overwhelmed. The agent stuff is just too much.
[01:20:19] There's too many options. We don't know what to build. and so I just felt like I'm like falling behind despite everything. And so I then put that editorial condensed version of it on LinkedIn Sunday morning, and it's like, I'm looking at it right now, it's got 120 comments. Yeah. Wow. And 8,000 impressions.
[01:20:36] So there it's, the comments are worth reading. You go kind of look at it. But my whole point was in this frustration in this like. Feeling like I was almost drowning in opportunities. I just made a decision to like do something. And so I decided on a flight back, I was like, I'm just gonna create SmarterX Labs and we're gonna start running what I was calling vibes, which are just builder sessions for non coders where we get together and just [01:21:00] rapidly prototype something to optimize a workflow, to solve a business problem, to accelerate growth.
[01:21:04] And so I just said, hey, we're just gonna start holding these. And so last Friday, you and me and Jeremy got together, we held one. And the whole point with that one was ChatGPT agents, like what are they, how do we use 'em, and what can we build in under an hour as a proof of concept.
[01:21:18] And then in that process we realized, oh, actually maybe Enterprise is better because we can use their Agent Builder and it's already connected to our data, and so on. But it's just that idea of, hey, listen, we hear you. It's overwhelming, and based on the comments on that LinkedIn thread, I am not alone here, a lot of people are feeling the same way.
[01:21:36] And two, I think my point was, just do something. Part of it was like therapy for myself. Yeah. It can be overwhelming, but just pick something and do it. Feel like you have control of the situation by taking an action. And for me, developing these vibes was the thing. And so I said in some of the following comments on LinkedIn, you know, my thought is we're gonna run these things internally through SmarterX [01:22:00] Labs, but I also wanna empower our teams and individuals to do 'em themselves.
[01:22:03] It's just like, give yourself an hour, build something to solve a problem, fix a workflow, whatever it is. And then we may, with our AI Academy, start running them for members is kind of my near-term vision. So yeah, that was kind of cool. I was just excited to pick something, and I was glad that the editorial resonated with people.
[01:22:20] You know, again, I wrote it in, I don't know, 30 minutes Saturday morning, just kind of throwing thoughts together. You never know how that stuff's gonna land. So I appreciate all the response it got.
[01:22:29] Mike Kaput: Yeah, I can't emphasize enough just how important it is to sit down with the actual problem.
[01:22:34] Like, typically you'd look at a problem like this, like, hey, how do we use agents? And you're thinking about it, you're reading about it, or you're trying to carve out time to do it, and it just doesn't get done the way you want it to get done. So sitting down for an hour and just saying, we're gonna bang our head against the wall here, we learned so much so quickly.
[01:22:52] Paul Roetzer: Yep.
[01:22:53] Mike Kaput: That we wouldn't, I don't think we would've gotten to otherwise
[01:22:56] Paul Roetzer: Agreed
[01:22:57] Mike Kaput: personally.
[01:22:59] AI Academy Spotlight
[01:22:59] Mike Kaput: So next up, each week we spotlight one of the courses in AI Academy and give you real, actionable takeaways from that course, whether or not you ever take it. We just wanna share the love of all the stuff we're creating in Academy.
[01:23:10] So, really quickly, Paul, I'm gonna go through AI for Financial Services. This is a newer course series we've released that is taught by our Director of Research here at SmarterX, Taylor Radey, and it opens with some really interesting data. So 40% of wealth managers, if you can believe it, say that within the next 12 months, AI will be able to deliver financial advice and planning
[01:23:34] sophisticated enough that they'll be competing for clients with AI. And that pressure is showing up all over the industry, 'cause across asset and wealth management especially, profitability is getting squeezed. A bunch of banks, the majority of them, have already launched generative AI applications, and tasks that used to anchor a financial professional's value, like tax implications, portfolio rebalancing, standard financial projections,
[01:23:59] all of this [01:24:00] is increasingly being generated by AI at near-zero cost. And so this course teaches a framework for navigating all this, called the three strategic shifts, that maps out basically exactly where the value of a financial services pro is moving and gives you a way to assess where you and your firm actually
[01:24:18] stand. So there's much more in the course, but the first of these is the data shift. We're moving from periodic reporting to continuous context. So with generative AI, we can actually finally read unstructured data, things like emails, contracts, receipts, et cetera, and create a live data and decision-making flow.
[01:24:36] Second is the automation shift. We're moving away from doing tasks and toward orchestrating agents, something we've talked about quite a bit on this podcast. So AI agents handle multi-step work and humans move to oversight, judgment, and handling exceptions. This is already happening at places like Morgan Stanley, which has an internal advisor assistant that is agentically
[01:24:58] advising clients and [01:25:00] freeing up advisors to do higher-value work. And the third is the value shift. We're moving from input-based fees to outcome-based value. The traditional model, very much like some other industries, is pricing services on hours and assets under management. As AI commoditizes the production of routine advice, the new economic basis is focused on paying for outcomes.
[01:25:21] So proving your value by showing tax savings achieved, volatility managed, goals reached, or plans actually implemented. The course has all sorts of actionable advice on how you can actually rate yourself, your team, your firm on each of these shifts, and take action to become an AI-forward financial services professional.
[01:25:43] So if you're interested in that full course series, head over to academy.SmarterX.ai. We'll also include a link to this specific course series in the show notes.
[01:25:53] Ben Sasse’s Parting Words on AI
[01:25:58] Mike Kaput: Alright, so Paul, another segment we wanted to focus on is what you had alluded to a few minutes ago. [01:26:00] Former Nebraska Senator Ben Sasse actually appeared on 60 Minutes this past week in an extended interview that unfortunately is framed as kind of his final words, because Sasse, who's 54, has stage four pancreatic cancer.
[01:26:13] It has metastasized to his lungs, vascular system, and liver, and he was basically given three to four months to live in mid-December. A clinical trial drug has extended his time, and so he used this interview to talk about the issues he thinks Congress is missing. When asked about that, Sasse went straight to AI and the disruption of work.
[01:26:32] He said that the digital revolution is both glorious and horrific at the same time, and warned that, quote, anything that can be reduced to a series of steps, which is most economic activity, is going to be routinized and become really, really cheap, really fast, and really ubiquitous. And he mentioned that we have never lived in a world where 22-year-olds couldn't assume the work they did, or would be able to do, until death or retirement, and [01:27:00] we're never going to have that world again.
[01:27:02] Now, he said that this issue should be dominating national politics, but it's being missed entirely. He said, quote, neither of these parties really have very good or big ideas about 2030 or 2050 at a national security level, at a future of work level, at an institution-building level. And he ended by saying, Congress is not wrestling with big or important questions right now.
[01:27:25] Like, the disruption of work, for good or for ill, should be front and center. Congress doesn't even know how to have that conversation. So Paul, this is a pretty powerful interview. It would've been good regardless, but especially given kind of the heavy context of him not having much time left, it sounds like.
[01:27:42] Paul Roetzer: Yeah, his framing of the role of Congress and AI, I thought, was brilliant. The main reason I wanted to highlight this, though, was actually the human side of it. And yeah, I just think it's an incredible interview. It's like 40 minutes long; 60 Minutes released the full interview. And I do think, regardless of political [01:28:00] affiliation or religious beliefs, you should watch it, because it actually helps keep things in perspective on what really matters.
[01:28:05] And there's a very personal side to my pursuit of AI that I'm not sure I've ever fully shared, in part because I actually still kind of struggle to talk about it, all this time later. But this seems like as good a time as any. So I started PR 20/20 in 2005. Yeah, I was 27.
[01:28:26] And my father-in-law died of cancer the following year. He was 53. Somehow it's been 20 years since he passed, but it was less than three months from his diagnosis to his death. My wife and I started dating in high school, and he was like a second father to me. Then, 18 months later, one of my best friends died in an accident.
[01:28:43] It was six days from his accident to his passing, and he never regained consciousness in that time. I was 29 at the time of his death. And so, while it was a very difficult time, there was this acute awareness of mortality that hit me [01:29:00] at a very early age, and time became very precious. So I went through these years, and in retrospect, you don't really realize the challenges of the mental load at the time, but you go through these years of being somewhere between the darkness and the light.
[01:29:13] And like, the happiest moments in life were also the, you know, the saddest moments. And then our daughter arrived five years later and everything sort of changed. Life took on new meaning. Coincidentally, her arrival in early 2012 coincided with my pursuit of understanding AI. In something I wrote for my MAICON 2023 keynote opening, I said I wanted to share things I was excited about with AI.
[01:29:37] And at that time I listed an explosion of entrepreneurship, new career paths, change agents emerging from everywhere, a renaissance in human creativity, and then more time. So I'll just read what I wrote back in 2023. I said, part of the reason I began pursuing AI in 2011 was because I saw it as a path to extend time.
[01:29:59] I'd always wondered [01:30:00] why time seemed to move faster as we got older. I realized, at least for me, that the busier I was and the longer hours I worked, the faster the days and the weeks seemed to fly by. When our first child was born in 2012, I began to truly appreciate every second of every day. I knew I couldn't get more than 24 hours out of a day, but I thought it might be possible to slow those 24 hours down.
[01:30:22] I didn't understand exactly what AI was back then in 2011, but it seemed to hold the potential to unlock productivity gains, which would allow us to redistribute the time saved and live more fulfilling lives. What I learned since then is that AI on its own won't extend time for me or anyone else. It will increase efficiency and productivity at a scale we have never seen in human history.
[01:30:42] But we have to make the choice to use the increases to benefit humanity. Otherwise, we'll just fill the time with more work and find new ways to maximize profits at the expense of people. We have one chance to get this right. AI can give us the greatest gift of all, more time, or it can be just another technological revolution that expands our [01:31:00] work, fills our hours, and leads us down the path of never-ending productivity gains for profits.
[01:31:05] So today, the reason I choose to remain optimistic about AI's potential is because of this. I wanna spend more time with my family and friends. I wanna enjoy my career journey, not long for the end of it. And I think AI can give us that. I don't know Ben Sasse, but I'm grateful to him for his willingness to share his story and his precious time with such grace and humility.
[01:31:27] And I pray for him and his family, and I hope everyone takes a break from the craziness of life and the AI world to listen to the full interview because I think it's a gift to all of us.
[01:31:36] Mike Kaput: Yeah. That's really, really cool to hear. I mean, yeah, it's definitely worth some minutes outta your day to hear the lessons that he shared.
[01:31:45] It's really
[01:31:45] Paul Roetzer: cool. Yeah, I definitely passed it along to family and friends. I just,
[01:31:47] Mike Kaput: yeah,
[01:31:47] Paul Roetzer: again, put religious beliefs and political affiliations aside. It's irrelevant to the human side of it. Yeah.
[01:31:56] AI Product and Funding Updates
[01:31:56] Mike Kaput: Alright Paul, so we're gonna wrap up here with a ton of AI [01:32:00] product and funding updates. I'm gonna just run through these quickly in the final minutes here, and if there's anything that jumps out to you, you let me know.
[01:32:07] But first up, former DeepMind reinforcement learning lead David Silver came out of stealth this past week with Ineffable Intelligence. The company is raising a $1.1 billion seed round at a $5.1 billion valuation. This is the largest seed round in European history. His pitch is basically that today's LLMs are stuck on the wrong path, and that Ineffable will instead build a super learner that learns purely through reinforcement learning, with no pre-training on human data and no imitation learning.
[01:32:37] OpenAI published a prompt guidance doc this past week for GPT-5.5, telling developers that shorter, outcome-first prompts work better than long, process-heavy instructions. They also released Cybersecurity in the Intelligence Age, a five-pillar action plan covering democratized cyber defense, government and industry coordination, security around frontier model weights, deployment [01:33:00] visibility and control, and consumer protection.
[01:33:04] OpenAI also published a research post called Strangely, where the goblins came from, explaining that recent models have developed this strange habit of mentioning goblins and other creatures in their metaphors, and they traced this back to a reinforcement learning signal for the nerdy personality that the model generalized way beyond its intended scope.
[01:33:26] Paul Roetzer: That was a wild analysis to read.
[01:33:28] Mike Kaput: It's really strange. Yeah. You read stuff like that and you're like, we know nothing about how these things work. You know, Anthropic also launched Claude for Creative Work this past week. This is a set of new connectors that let Claude work directly inside professional creative tools like Adobe Creative Cloud, Blender, Autodesk Fusion, Ableton for music production, SketchUp, and more. Google, at the same time, rolled out file generation inside the Gemini app for all users worldwide.
[01:33:59] So you can create [01:34:00] downloadable Google Docs, Sheets, Slides, Word docs, Excel files, and more, directly from a chat prompt. Microsoft brought Microsoft Agent 365 to general availability on May 1st. This is a new control plane priced at $15 per user per month that basically lets IT teams discover every AI agent running across their organization and monitor them through the same admin tools they already use for users and devices.
[01:34:29] The Information reported this past week that Microsoft is shifting more of its AI products to usage-based pricing as heavy Copilot adoption squeezes Azure cloud margins. Meta acquired Assured Robot Intelligence on May 1st for an undisclosed amount. This is a startup building foundation models for humanoid robots that can perform physical labor. At the same time, China's National Development and Reform Commission blocked Meta's
[01:34:56] separate $2 billion acquisition of Manus, the Chinese-founded [01:35:00] agentic AI startup that had relocated to Singapore. It ordered the parties to unwind the deal, where employees and capital had already transferred to Meta. ElevenLabs launched agent templates. These are pre-configured starting points for building voice agents across customer support, education, onboarding,
[01:35:18] go-to-market, and receptionist roles. Lovable, the AI app-building platform, launched a mobile app this past week that lets users build full apps and websites from their phone. Stripe used its annual Sessions 2026 conference to announce a sweeping set of agent-focused payment products, including the Agentic Commerce Suite for selling through AI agents and a machine payments protocol
[01:35:42] co-authored with stablecoin network Tempo for agent-to-agent transactions. Cloudflare also announced that AI agents can now autonomously create a Cloudflare account, start a paid subscription, register a domain, and get an API token to deploy code, all without a human [01:36:00] touching the dashboard. And they used a new protocol for this, co-designed with Stripe, that gives agents a default $100 per month spending limit.
[01:36:07] A couple final announcements here, or updates. Hightouch raised a $150 million Series D at a $2.75 billion valuation this past week. This is to expand its composable customer data platform into what it calls an agentic marketing platform, which is a system that combines customer data, brand-aware AI content generation, and 24/7 autonomous orchestration of marketing campaigns.
[01:36:34] Avoca AI hit a $1 billion valuation this past week after raising a total of more than $125 million for building voice agents that handle inbound calls, scheduling, and dispatch for HVAC, plumbing, roofing, and electrical companies. And finally, The Information also reported that HubSpot and Atlassian are leading a broader shift in AI agent pricing toward outcome-based fees.
[01:36:57] As we had talked about previously, HubSpot's Breeze [01:37:00] customer agent is actually moving to pricing per resolved conversation, and their prospecting agent is pricing per recommended lead. Atlassian is shifting to consumption-based billing as well. Alright, Paul, that is all we have got for the updates this week.
[01:37:18] That's a lot there.
[01:37:19] Paul Roetzer: One note, and I'll probably get into this in another episode, but the David Silver thing is, yeah, I think a really big deal. I haven't been able to unpack this fully yet, but him leaving DeepMind is very significant, in part because of his relationship with Demis Hassabis.
[01:37:34] Yeah.
[01:37:35] Paul Roetzer: so they met at Cambridge.
[01:37:37] They started a gaming company together in the late nineties called Elixir. He then joined DeepMind in 2013, not too long after it formed, and then he led the building of AlphaGo. So, like, David Silver is a legend, yeah, in the AI world, and deeply connected to Demis Hassabis. And yet, from everything I've been able to gather, Demis didn't [01:38:00] invest in
[01:38:01] this new company individually, at least not in public knowledge. Google did. Yeah. But it's just weird to me. He left to go focus on reinforcement learning, which was the core of AlphaGo and AlphaFold and all these other things. I don't know, there's just something else here, and I'm not sure what it is yet, but this is a major, major person in Demis's life and in DeepMind to leave.
[01:38:24] And there's something of note there. I just haven't solved it yet.
[01:38:28] Mike Kaput: Yeah. I'm sure we'll be coming back to that, especially since the scale of the seed round alone is crazy.
[01:38:33] Yeah.
[01:38:33] Mike Kaput: All right. One more final announcement here. We talked about it at the top of the episode, our weekly AI Pulse survey. Go to SmarterX.ai/pulse to take that this week.
[01:38:42] We've got a couple questions around: is your company actually formally setting AI usage as an expectation, kind of related to those AI CEO memos we talked about? And also, how has your personal sentiment about AI shifted, if at all, in the past six months? That'll be interesting to see [01:39:00] the answers to those, Paul.
[01:39:01] And as always, thanks for breaking everything down for us this week.
[01:39:04] Paul Roetzer: Yeah, thanks everyone for joining us. Have a great week. We don't have a, oh, we do have a second episode this week. Yeah, we're gonna have an AI Answers episode that drops on Thursday, so if you wanna double up this week, we'll have new content for you on Thursday.
[01:39:16] There were really good questions. I recorded that one, I think, on Friday morning. So it was a lot of really good questions. Check that out. Alright, thanks Mike.
[01:39:24] Mike Kaput: Thanks.
[01:39:26] Paul Roetzer: Thanks for listening to The Artificial Intelligence Show. Visit SmarterX.ai to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses and earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community.
[01:39:50] Until next time, stay curious and explore ai.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.
