68 Min Read

[The AI Show Episode 214]: Musk v. OpenAI Round 2, Coinbase AI Layoffs, AI "Soft Nationalization" & xAI Folds Into SpaceX


Courtroom drama, an AI intelligence explosion prediction, and an unexpected compute deal between xAI and Anthropic — Episode 214 covers a week that somehow kept getting weirder.

Paul and Mike dig into the second week of Musk v. OpenAI, where testimony from Brockman, Murati, and Zilis revealed how messy OpenAI's early days really were.

Then: Coinbase cuts 14% in an AI-native restructure, the White House briefly floats model vetting before backing off, Jack Clark puts a 60% probability on AI doing its own R&D by 2028, and Musk dissolves xAI into SpaceX while striking a compute deal with the company he was calling "evil" two weeks ago.

Listen or watch below, and keep scrolling for the show notes and transcript.

This Week's AI Pulse

Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI.

If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.

Click here to take this week's AI Pulse.

Listen Now

Watch the Video

Timestamps

00:00:00 — Intro

00:05:37 — Musk v. OpenAI Round 2

00:23:27 — Coinbase AI Layoffs

00:33:09 — AI "Soft Nationalization"

00:47:10 — State of AI for Business Report Preview

00:54:14 — xAI Folds Into SpaceX, Does Compute Deal with Anthropic

01:00:49 — Has Recursive Self-Improvement Arrived?

01:09:38 — Anthropic and OpenAI Enterprise Joint Ventures

01:15:11 — Stripe's New Forward Deployed AI Accelerator Role

01:20:48 — AI Use Case Spotlight

01:24:15 — AI Product and Funding Updates


This episode is brought to you by AI Academy by SmarterX.

AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. Learn more here.


Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: It's so hard to wrap your head around an exponential and actually try to comprehend it. It's like looking up at the stars at night and trying to actually comprehend the size of the universe. Like you can see it, like they're out there, but like you cannot envision the size, and that's kind of what an exponential feels like.

[00:00:17] Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:47] Join us as we accelerate AI literacy for all.

[00:00:54] Welcome to episode 214 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike [00:01:00] Kaput. We are recording at an unusual time, Mike. It is Saturday, May 9th at 8:40 AM Eastern time. If you're new to the show, we usually do this on Mondays, but I looked at my schedule Thursday night and realized I'm not here Monday.

[00:01:13] So I am doing a crazy thing I do every year, I think this is my seventh year, Mike. It is for the Orange Effect Foundation, a wonderful nonprofit started by our friends Joe and Pam Pulizzi, that has us play a hundred holes, a hundred golf holes, in one day. It's a hundred-hole golf marathon. We tee off at 7:00 AM, we end around 7:00 PM, if you count the holes correctly, which I did not do two years ago. I actually

[00:01:41] miscalculated, and we ended up playing like 109 holes instead. It is, it's wild. It is fun. If you're a golfer, you can appreciate a hundred holes is a lot. I mean, we're talking like three to 400 swings in a day. But it's a blast, and it's, [00:02:00] you know, it's for a wonderful cause. The Orange Effect Foundation is a nonprofit that empowers children and young adults with speech disorders by providing grants for speech therapy and technology.

[00:02:10] So it's an amazing cause, amazing people. I, oh, I always look forward to it. I always start regretting it around the 50th hole, but then you, like, you power through. But it's, it's so much fun. It's like speed golf, it's just two people and you just play as fast as you can. Like, it doesn't, scores don't matter, it's just, just finish.

[00:02:29] So, so that's what I'm doing on Monday. So here we are on Saturday morning. I messaged Mike and I was like, dude, any chance? We can't do Mother's Day. By the way, happy Mother's Day to all our moms out there, belated happy Mother's Day. So we're not gonna do this on a Sunday. No way. We're spending the day doing that.

[00:02:46] So here we are, Saturday morning. Alright. But we, there was so much going on, there's no way we were gonna just, like, skip a week. Okay. So this episode is brought to us by AI Academy by SmarterX, which helps individuals and businesses accelerate AI literacy and [00:03:00] transformation through personalized learning journeys

[00:03:02] and an AI-powered learning platform. New educational content is added weekly, so you always stay up to date with the latest AI trends and technologies. Our AI for Departments collection features seven course series and certificates designed to jumpstart AI understanding and adoption. So when you become an AI Academy member, or you can actually buy the course series individually if you don't wanna become a member, it's actually cheaper to just become a member, though, if you wanna do a couple of these.

[00:03:28] So we have marketing, sales, customer success, HR, finance, operations, and legal is actually the latest one, Mike, that we dropped. Related to that,

[00:03:37] Mike Kaput: yeah.

[00:03:37] Paul Roetzer: So yeah, the whole idea is you can get in, you can take your AI Fundamentals, and then you can personalize your experience based on what departments you're in or departments you support.

[00:03:45] You can go by industry, you can go by tools. So the whole concept of our academy is just to allow you to build these personalized learning journeys. And then if you have a business account, our team, our, our customer success team, will actually work with you to customize [00:04:00] learning journeys for your teammates and employees.

[00:04:03] So it's really cool. If you haven't been to it, check it out at academy.smarterx.ai. There are individual plans as well as business account plans available now. And as I mentioned, you can become a member for, you know, an annual fee, or you can do single courses and series for one-time fees. So that's academy.smarterx.ai to learn more.

[00:04:24] Okay. And then we have our AI Pulse survey. Every week we put up a quick pulse. These are informal polls. We ask our listeners a couple of questions based on things that we talked about on that episode. So last week we had: has your company set AI usage as a baseline expectation?

[00:04:41] Interesting. Okay, so we have 41% say yes, informally expected. So it's not a formalized thing. 23% say it's being discussed. 27% say no and no plans to, and 9% say yes with a formal mandate. That's interesting. [00:05:00]

[00:05:00] Mike Kaput: Yeah.

[00:05:01] Paul Roetzer: And then the second one is how, how has your personal sentiment about AI shifted in the past six months?

[00:05:07] This is interesting, like a complete, actual, exact split. 41% say about the same. 41% say more excited. 9% say more worried. 9% say more cautious. That is, that is a bizarre-looking pie chart. All right. So, yeah, I'm like, let's get into it. We have the continuing courtroom drama, which is just really, like I said, elastic.

[00:05:32] I think, like, something out of a Hollywood movie, but the script is maybe better in real life. That's,

[00:05:37] Musk v. OpenAI Round 2

[00:05:37] Mike Kaput: I was gonna say, you got some more quotes for that script this past week. Because, you know, we had covered this past week the opening of the Musk versus OpenAI trial, and last week Musk had kind of taken his first days on the stand.

[00:05:52] So since we recorded this past Monday, the trial has basically wrapped its second week in Oakland, and a lot more has come out. So OpenAI [00:06:00] disclosed in a court filing this past week that Musk had actually texted Greg Brockman two days before trial to gauge his interest in a settlement. And when Brockman replied, suggesting both sides drop their suits, Musk wrote back, quote, by the end of this week, you and Sam will be the most hated men in America.

[00:06:20] Now, the judge on the case ultimately ruled that text inadmissible. She told OpenAI's lawyers they should have submitted it during Musk's own testimony. Like we alluded to last week, UC Berkeley computer scientist and kind of AI luminary Stuart Russell took the stand Monday as Musk's only AI expert witness.

[00:06:41] Russell actually told the jurors there is a tension between the pursuit of AGI and safety, and walked through a long list of AI risks, from misalignment and cybersecurity to discrimination, job displacement, and people becoming emotionally attached to AI. Greg Brockman took the stand on Tuesday and [00:07:00] rebutted Musk's account of OpenAI's early years.

[00:07:02] Brockman testified Musk had pushed for majority control of OpenAI, in part to fund what Musk had kind of pitched as his city on Mars and his Mars ambitions for SpaceX. He also alleged that Musk had OpenAI employees do months of secret self-driving work on Tesla's Autopilot team back in 2017, even as he was publicly framing OpenAI as a charity.

[00:07:24] Musk's lawyers spent a lot of time on the fact that Brockman's personal stake from the for-profit restructure is now worth about $30 billion. And they surfaced excerpts from those journals of Brockman's we talked about last week. He asked himself in those journals, quote, financially, what will take me to 1 billion?

[00:07:45] On Wednesday, former OpenAI board member Shivon Zilis, who importantly has four children with Musk, testified that she had served as a years-long intermediary between Musk and OpenAI leaders. She also told the court that during the 2017 [00:08:00] negotiations, Musk wanted OpenAI to merge into Tesla and offered Altman a seat on the Tesla board.

[00:08:05] And finally, the jury also saw a video deposition from former OpenAI CTO Mira Murati, who testified Altman had been creating chaos inside the company by saying one thing to one person and completely the opposite to another. She specifically said Altman lied to her about safety clearances for a new model and falsely claimed OpenAI's legal team had determined the model did not require review by the deployment safety board.

[00:08:32] So Paul, as we covered this last week, the second week is even messier, it seems like. What stood out to you most from what we learned this past week in this trial?

[00:08:42] Paul Roetzer: The Musk attempt to do the settlement was interesting, just because, you know, I had said leading up to this, I, I just couldn't imagine this actually going to trial.

[00:08:50] There's just too much risk here for everybody, including Microsoft and Musk and Zilis and Mira. Like, no, Greg Brockman's personal journals, like, nobody wanted [00:09:00] this stuff to come out, I don't think. I, I think this was Musk trying to, like, call their bluff and, you know, push this far enough, and they finally were like, all right, whatever, my personal journals are out there.

[00:09:11] Like, let's just go, let's get this all out. Like, I think that was Sam's quote, like, the week before: let's just go. Like, you want this all out in the world? Let's, let's have it. I, I did post on X at one point, maybe on Thursday, and I said, I expected this trial to be crazy, but the stuff coming out in evidence is so far beyond anything I thought we would learn.

[00:09:30] And I was specifically referring to the Max Zeff article and tweets. So he said, with Shivon Zilis on the stand, OpenAI's lawyers presented new evidence detailing Tesla's 2017 plans to build an AI lab to compete with Google DeepMind. Tesla executives discussed recruiting Sam Altman for the effort and even suggested trying to get Demis Hassabis.

[00:09:55] So rewind to 2017. Again, the transformer [00:10:00] is just being invented at Google Brain, Google DeepMind. So the Attention Is All You Need paper is just coming out in 2017. We don't have GPT-1 yet. Most of the work in deep learning is focused on gaming. The Dota team. Dota, if you're not familiar, is the game that has been talked about a ton in the trial.

[00:10:19] And that's what OpenAI was working on at that time, actually, like, gaming. As you know, Google DeepMind was big into gaming, and so it wasn't clear that the language stuff was gonna hit. And so at that time they're talking about, like, well, let's just do this at Tesla, and, like, let's bring Sam over here.

[00:10:35] So Sam's not the CEO yet, they haven't had their falling out yet. That happens in 2018. And then Sam becomes the CEO of OpenAI, I think, in 2019. So before, I'd never heard any of this, like, that there was this effort to recruit, so I thought this was brand new. And then Demis, who, you know, Elon admires, like, you know,

[00:10:57] he, he and Demis had a relationship, [00:11:00] but at the same time, Demis was working with, like, the evil empire, which was Google to Elon Musk, that they were trying to, you know, create this superintelligence that was gonna destroy the world. And then another one, in another email to Musk in February 2018,

[00:11:16] this is where some of this comes from. Zilis listed out some brainstorming ideas to, quote, run for an AGI counterbalance. One of the ideas is to have Sam Altman run Tesla AI. Another is to recruit Demis. Now, one of the ways they concocted to do this was to actually host an event at NIPS, which is a big machine learning conference every year, and they wanted Sam to, like, be the moderator and, in essence, almost, like, force-function him into announcing this Tesla AI initiative.

[00:11:44] So it's like, whoa. Like, they, they had, like, concocted all these ways they were gonna get him to do it. Yeah, so then there was a Wired article, we'll put it in there. It also talked, the one I thought was interesting, I wonder if this is gonna come back at all: Zilis [00:12:00] testified on Wednesday that Altman never ended up joining Tesla, and the AI lab and the NIPS launch event never came to fruition.

[00:12:06] She also testified that Musk reached out to Andrej Karpathy, who we've talked about, to recruit him to Tesla, which is actually in conflict with Musk's own testimony the week prior that said Karpathy came to them. So something is off: a misremembering by Musk of how it transpired, or just straight up not telling the truth about it.

[00:12:31] I don't know. But usually not telling the truth under oath isn't a, isn't the ideal thing. Yeah. So then there's just, like, these inside stories about how they pursued this and how they tried to get him. So then, two years later, after those efforts in 2018, in January 2020, Zilis is appointed to OpenAI's board of directors.

[00:12:51] Now, no one knows that she has a romantic relationship with Musk, which, again, so much of this is like soap opera. Like, I don't wanna get into all of it, but, like, [00:13:00] apparently there was a romantic relationship at a time. And then it became an in vitro fertilization thing, where she, she decided she couldn't have children and he offered to father her children.

[00:13:13] And so that's how they ended up having four kids together, unbeknownst to Sam Altman and Greg Brockman. Right? So now she's a board member at OpenAI. Musk, after the split, has since left under not good terms. So a woman who has had four of his children is now a board member and hasn't disclosed this to anyone, because she has an NDA with Elon not to tell anybody.

[00:13:40] And then a Business Insider article comes out in 2022 that said this is what was happening. And so she then had to tell Altman that this was true. Wow. And so she was basically a plant within the board for Musk to keep tabs. And they have [00:14:00] messages back and forth about what should I do? Should I stay on the board?

[00:14:03] You know, here's what's going on. And it, I mean, it's like you're an operative on the OpenAI board. So then she finally has to resign because she's aware of Musk's plans to build xAI, which we'll talk about later in this episode. And so once it comes out, once Altman figures out he's gonna do this, and he's the father of her babies, she

[00:14:29] resigns from the board. So, like, what? Like, again, like I said, this is so beyond anything. Then there's another one. You know, moving on to Mira Murati, there's these text messages that came out. And this goes back to the time when Sam gets fired temporarily and Mira, for, like, 24 hours, becomes the CEO.

[00:14:54] And Sam is unaware that Mira is one of the driving forces [00:15:00] behind him getting fired. Hmm. And so Ilya and Mira are corresponding with the board at the time Sam gets fired, and Mira actually had created an executive brief about his shortcomings as a CEO, let's say. So they have text messages as this is transpiring.

[00:15:20] So, counsel: by fall 2023, did you perceive Altman was not candid with you, truthful, honest? Mira, after a very long pause: not always. Counsel: did he undermine you as CTO? Yes. Did he pit other execs against one another? Yes. Now, mind you, there were articles at the time saying all of this, which OpenAI denied.

[00:15:40] So, like, everything was like, oh, the media is just, like, running with sensationalism. Okay, well, here's the text messages that were, like, verbatim what was reported. Counsel asks if her views on Sam's management have changed since she was asked by the board. Mira: I've, I've been gone from OpenAI for over a year up until the moment I left.

[00:15:57] My views have been consistent. [00:16:00] Mira says the problems she had with Altman's management style after he was reinstated still persisted, as well as with Brockman, though to a lesser degree in the latter case. As we already know, in 2023, Murati was asked by the board to collect info on Sam's management style, which she did, and wrote an extensive memo on it.

[00:16:16] Murati testifies she pressed the board for why they fired Altman. The board responds saying their lawyers had advised them not to give more info. Murati at the time, in absence of the info, starts thinking something criminal or a national security risk had occurred. So she doesn't connect the dots that her own

[00:16:32] executive brief is actually now what's driving this and her distrust of him as a CEO. So this is all coming from Mike Isaac, a New York Times writer who's in the courtroom, who's, like, live tweeting this stuff and has written about it. Then an interesting one is Helen Toner, who was a board member at the time Sam gets fired.

[00:16:48] She testified in a video deposition, and it was really fascinating. It said, Helen Toner's deposition in Musk versus Altman includes some striking quotes about Mira Murati's involvement with Altman's ouster. She [00:17:00] said Mira was totally uninterested in telling her team that her conversations with us had been a significant factor in firing

[00:17:08] Altman. She also claimed that Mira sat on the fence, and this was the quote that I was like, holy shit: she was waiting to see which way the wind would blow, and she didn't realize that she was the wind. I was like, man, that is like,

[00:17:22] Mike Kaput: yeah,

[00:17:22] Paul Roetzer: that's poetic.

[00:17:24] So that was an OpenAI board member basically saying, like, Mira was the reason we were doing this, and she was like.

[00:17:30] Unaware of it. And then the last one I'll highlight, Mike, 'cause I think this is significant, is Microsoft. So another thing that comes out of all this is Microsoft's own internal struggles in 2018, before they decide to fund the building of OpenAI. So The Verge has this, it was in court documents, but we'll link to the Verge article.

[00:17:50] So it said, court documents from the ongoing Musk versus Altman trial have provided a rare look at the communication between Microsoft's top executives about investing in OpenAI and [00:18:00] fears the AI startup would, quote, storm off to Amazon and shit talk Microsoft. Those are the quotes. Just days after OpenAI showed a bot beating a Dota 2 professional in summer 2017, which is the same exact time period that the transformer paper comes out as the origin of GPT,

[00:18:17] Altman responded to Nadella's congratulations email with a proposal for a much bigger partnership with OpenAI to fund its next phase of AI research. So again, he's asking Microsoft for money to fund this continuing effort to do video games as a way to build intelligence. OpenAI needed larger sums of compute to expand the Dota 2 project far beyond the Azure credits

[00:18:38] it was using. For Microsoft, quote, probably something like 300 million at Azure list prices, according to Altman. This initially spooked some executives inside Microsoft. For those numbers to make sense, we'd have to be generating significant incremental revenue directly due to the deal, million plus, that couldn't be gained in a more [00:19:00] efficient way,

[00:19:00] said Jason Zander, who was Microsoft's Azure chief at the time, in an August 2017 email to Nadella. These are internal Microsoft emails. Then another quote: I guess the other thing to think about here is the PR downside of us not funding them and having them storm off to Amazon in a huff and shit talk us and Azure on the way out.

[00:19:21] So this is Kevin Scott, the CTO of Microsoft, in a January 28 email. So this is now, like, six months later. They are building credibility in the AI community, again, this is Scott, very fast, recruiting well, and are going to be an influential voice. All things equal, I'd love to have them be a Microsoft and Azure net promoter.

[00:19:42] Not sure that alone is worth what they're asking. A year later, Scott admitted it in an email to Nadella and Microsoft co-founder Bill Gates. Now the time period is 2019. Yeah. So we only, we probably had GPT-1 at this point. Scott admits to Nadella and Gates that he had been highly [00:20:00] dismissive of the AI efforts at OpenAI and Google DeepMind when the companies were competing to see who could achieve the most impressive game-playing stunt.

[00:20:07] That's a quote. Scott became a lot more impressed when OpenAI moved towards natural language processing models and feared Microsoft would slip behind Google Google's AI efforts. A month after Scott's thoughts on OpenAI email, Microsoft announced a $1 billion investment in openAI's. So again, I don't think that's ever been public, like the internal debates and how they kind of didn't, they basically were like, well, let's just do it so we don't get shit talked.

[00:20:32] And so again, the discovery and the evidence in this case is just so far beyond what I was expecting we were gonna see. And it's not done yet.

[00:20:42] Mike Kaput: It's unreal. And we've said this before, but, like, soap opera, I wonder, or, like, Hollywood thriller, doesn't even do it justice. This is like Game of Thrones over here.

[00:20:53] It really is. There's people backstabbing each other.

[00:20:55] Paul Roetzer: Yeah. And the texts are nuts. Like, I, I don't, I [00:21:00] think there's another one maybe later on I had, oh wait, is this it? Lemme click over real quick. Oh, here, here it was. Alright, hold on a second, Mike. I gotta put this in my other one. So these were the Murati and Altman texts that were going on.

[00:21:16] So as Sam is fired, and Mira, without telling Sam she was involved, is in meetings with the board, they have these text messages. Sam, can you indicate directionally good or bad? Satya and others anxious. Mira, directionally very bad. Sam, okay. Sam, can you wrap up soon? Lots of pressure from Microsoft for an update.

[00:21:37] Mira, Sam, this is very bad. Sam, can I come in? They don't want you to. Sam, what do you want to make it better? I'm still willing to just walk away if that helps. If they are ramped up for crazy lawsuits against me, then I'm not sure. What, can you please tell me? I just wanna resolve this however, and would like to join. Mira,

[00:21:56] they're convinced about their decision. Sam, for me to be fired or [00:22:00] some new thing? Yes, for you to be gone. Sam, okay, then I can come in and talk about a path forward. They're saying no and they need more time. Or time for what? They've walked me through all the reasons and the issues with you and why you can't be the CEO.

[00:22:13] Can you ask them why they've been saying all weekend they wanted me back? She said they want a new CEO in place. Said, can I call you back in 10 minutes? Said they want a new CEO in place tonight, not me. Mira saying, Sam, do they know who? Can I tell Satya? Is this final, or would you, should you add Satya?

[00:22:31] I'm trying to add Satya. Still don't want me. New guy is rando Twitch guy, who was Emmett Shear. That was Mira saying who the new CEO is gonna be. He's like, Emmett, question mark. Yeah, I mean, just nuts. So sorry, that was, I forgot I had that pulled up there.

[00:22:49] Mike Kaput: Yeah. And we'll, we'll see how this evolves over the next few weeks too.

[00:22:52] But with Elon Musk saying, I'm gonna make you guys the most hated men in America, I don't know if that only applies to him trying to [00:23:00] say things during the trial. I could see

[00:23:02] Paul Roetzer: No, 'cause there was nothing that came out this week that would, like, follow onto that threat, that I saw. You know, it's like there's something else still.

[00:23:10] Mike Kaput: There's something else

[00:23:11] Paul Roetzer: coming.

[00:23:11] Mike Kaput: Yeah, that was my sense of it too. Like he didn't just say that because of what came out about Brockman's journals or whatever, which we already kind of know.

[00:23:19] Paul Roetzer: Yeah. We may, when we talk about the xAI stuff with Anthropic, we may start getting into this a little bit more. Yeah.

[00:23:24] Might be a prelude to the later topics.

[00:23:27] Coinbase AI Layoffs

[00:23:27] Mike Kaput: All right, so next up this week: Coinbase CEO Brian Armstrong sent an email to all employees this past week announcing the company will cut roughly 14% of its workforce. That's about 700 jobs. He cited two converging forces. One, the company is in a crypto down market.

[00:23:44] They are exclusively a crypto company and exchange, and they need to adjust their cost structure as a result. But also, he said AI is fundamentally changing how the company works. Armstrong wrote that he's watched engineers at Coinbase use AI to, quote, ship in days what used [00:24:00] to take a team weeks, and that non-technical teams are now shipping production code.

[00:24:05] He also said, Coinbase is fundamentally changing how we operate, rebuilding Coinbase as an intelligence with humans around the edge aligning it. So he's recommending some aggressive structural changes to do that. He is flattening the org chart to no more than five layers below the CEO and COO with leaders owning as many as 15 or more direct reports.

[00:24:30] He said every leader has to be a strong individual contributor as well, what Armstrong calls a, quote, player coach. He kind of alluded to the fact pure managers are out. He is also concentrating the company around what he calls AI native talent who can manage, quote, fleets of agents, including experiments with one-person teams that combine engineering, design, and product management into a single role.

[00:24:56] So this is kind of consistent with a pattern [00:25:00] he's been talking about for a while. He revealed on Stripe co-founder John Collison's podcast last year that he had gone rogue on Coinbase's Slack, mandating that every employee onboard with AI tools by the end of the week, and that some employees had been fired for not adopting AI fast enough.

[00:25:17] This has also drawn some pushback, though. One analyst told Bloomberg the crypto winter is probably the real reason for most of these cuts and that AI is likely an easy excuse. This also lands alongside Cloudflare's announcement this past week of 1,100 layoffs framed around the agentic AI era. So Paul, two aspects I wanted to get your take on here. First:

[00:25:39] Every time something like this happens, we have people just on fire in the comments section, like, oh, they overhired during COVID or ZIRP, like, this is a reduction of that, this is AI washing what they wanna do anyway, or, you know, their market sucks, or they have a bad business and this is an excuse. So I'd like to kind of talk about that.

[00:25:58] Is it or is it not [00:26:00] really due to AI? And then also, we had flagged, you and I, offline that, regardless of what you think here, there's some really interesting ways he is considering restructuring a company around AI native talent. So maybe talk me through those two pieces.

[00:26:15] Paul Roetzer: I, I feel like this whole, is it AI washing?

[00:26:18] Is it because they overhired? Is it because they're in a crappy market? Like, it's taking the same extremist positions that I always try and push people to not take. Both things can be true. Like, yeah, the crypto may be down, and yeah, maybe they overhired, like, fine, we, you know, it's like, all right, we'll concede that might be true.

[00:26:39] Yeah. But when you look at the reasons why they're doing it, those are true regardless of what you think of Coinbase or Brian Armstrong, or whether or not these are layoffs because they haven't managed the company properly. Like, let's just, okay, I, I, let's do this. Let's assume you're right. Like, if that's your belief that this is AI washing, I'll just give that to you.

[00:26:58] Okay. Let's, let's just [00:27:00] accept it now. Let's go through the key points of this. AI is changing how we work. A hundred percent true. Over the past year, you've watched engineers use it to ship in days. Totally. Non-technical teams are now shipping production code. Mike and I give you examples every week, like, right, those are true,

[00:27:14] regardless of who's saying this and what their situation is. Next: the biggest risk now is not taking action. A hundred percent true. We are adjusting early and deliberately to rebuild Coinbase and become lean, fast, AI native. Absolutely, every company should be doing that. Like, that is, you can say whatever you want.

[00:27:31] That is what companies should be doing, right? We need to return to the speed and focus of our startup founding. Yes. We are not just reducing headcount and cutting costs, we are fundamentally changing how we operate, rebuilding Coinbase as an intelligence with humans around the edge aligning it. That is what forward-thinking companies should be thinking about.

[00:27:47] What is the future of the org chart? How are the humans and agents working together? So again, you can think he's lying about reduced headcount not being the reason, but, like, the fundamental idea of why is true. Fewer layers, faster decisions: a hundred percent, that should [00:28:00] be happening. Flattening orgs: enterprises are stacked with layers of crap and, like, inefficiency.

[00:28:05] So if you want to do this right, you are going to flatten those layers. That is a given. No pure managers, a hundred percent. Anybody who's a manager who has expertise in a domain, who has expertise in their field, can absolutely also be a builder. They can be the person doing the work, creating things.

[00:28:25] Orchestrating. I agree. I don't wanna pay anybody who isn't also doing something, right? There's no reason for it. And then the AI-native part, I love that concept: concentrating on AI-native talent who can manage fleets of agents, work in super small teams, do these things, maybe even single-person teams.

[00:28:41] Like, hey, you own this. You're the builder, you're the architect, you're the person doing the things. So regardless of what you believe about why they're doing these layoffs, the fundamental ideas he's presenting here as to how they're gonna structure Coinbase feel directionally correct to me.

[00:28:56] Yeah. And having spent a lot of time with VC [00:29:00] firms, PE firms, and the companies they fund and own, this is going to ring very, very true to those people. I can promise you this memo was sent around; if those companies weren't already doing this, this thing was on every Slack channel of every VC-funded company in the world on Friday.

[00:29:17] So this is like a prelude to where the market goes. And even if you don't think AI is causing layoffs, this gives the cover for more companies to do a 10 to 20% cut, because their assumption is there's at least that much inefficiency in legacy companies. And that's the first cut. Now, I think I saw posts from Armstrong saying, hey, listen, we're still hiring.

[00:29:45] This isn't like we're just getting rid of everybody. We're gonna hire AI-forward professionals who can come in and fit this new model. We're just getting rid of the administrative layers and the management layers who don't do stuff, who don't understand AI, [00:30:00] can't build agent fleets, can't manage fleets.

[00:30:02] So I would not overlook this. With PE and VC, it's gonna be front and center. And my guess is there's a lot of traditional enterprises who look at that enviously and think, shit, I wish we could move that fast. I wish we could get rid of layers of management. I wish we had managers who actually did stuff.

[00:30:23] This is the kind of stuff we should really be thinking about as leaders of companies. So yeah, I would not underestimate the importance of a memo like this. It's almost like the Tobi Lutke one from last spring, Mike, where it sort of set off people saying, oh, okay.

[00:30:40] It's okay to talk about this as a leader now that the future looks different. And I think this one could create a whole new spiral of people just being like, you know what? Let's go.

[00:30:50] Mike Kaput: Yeah. And that's why we wanted to talk about it too, just the detail of how he has this vision for what organizations look like.

[00:30:57] Because the AI washing debate, [00:31:00] I feel like you hit on this, just misses the forest for the trees. Yeah. Even if it's all AI washing, the fundamental question is, yes or no, does AI change how organizations have to operate? And if you believe the answer is no, I don't know what to tell you. But if you believe the answer is yes, it doesn't matter why the layoffs are happening,

[00:31:18] Paul Roetzer: right?

[00:31:19] Mike Kaput: 'Cause we're headed in the direction where, even if maybe this crop of layoffs is all AI washing, the next one won't be, if you believe that to be true.

[00:31:26] Paul Roetzer: Yeah. And as a leader, you should be taking elements of all of these and asking: what rings true to you? What feels like something you should also be experimenting with?

[00:31:37] You could read this and throw out five of the seven things I just highlighted because you don't agree. Okay. But two of those might actually inspire you to make a change within your team, department, or company. And so I think that's the whole point, and why we surface a lot of this stuff: just to put it out there, provide some context, because no one has all the right answers.

[00:31:57] You just have some people who are willing to put themselves out [00:32:00] there, take the PR hits, and like say, here's what we're doing and why we're doing it. And everybody else can kind of learn from it and be like, Hey, I actually like a couple of those ideas. It fits what we were thinking internally or like, hadn't thought about that.

[00:32:12] So, yeah, I mean, we're never gonna have someone who has just nailed the whole thing. And, you know, we'll talk about this actually with the next topic, the nationalization of models. Nobody has all the answers, but you just have some people who are willing to put out some ideas. Like, we announced Andrew Yang is coming as a MAICON keynote, and that was my whole thing with Yang.

[00:32:30] We've talked about him on the podcast recently. He ran for president in 2020, but he has also been one of the few people pushing the idea that we need new economic futures, that we have to think about things like universal basic income. And so while you might not agree with Andrew Yang's solutions, he's at least someone who's out there saying, hey, let's talk about possible solutions.

[00:32:52] Let's not just say, we've got a problem. So this is the kind of stuff that I think is really important: that people are putting it out there, that we're talking [00:33:00] about it, and that we figure out what sticks and what matters. But it's often gonna be pretty subjective. You're gonna have to figure it out for your company and your team.

[00:33:09] AI “Soft Nationalization”

[00:33:09] Mike Kaput: All right. So to that point, the third big topic this week is that the Trump administration disclosed at one point that it was considering an executive order to create a federal review process for new AI models before they're released to the public. The New York Times reported that this plan, as initially set out, would have set up a working group of tech executives and government officials to design these oversight procedures, with the White House discussing the framework with Anthropic, Google, and OpenAI in meetings last week. This kind of escalated a bit midweek when National Economic Council Director Kevin Hassett

[00:33:45] confirmed on Fox Business that an executive order is being studied. And importantly, he likened this regime to FDA drug testing. The administration was also found to be discussing tapping the intelligence community to [00:34:00] pre-assess models, partly so US agencies can study new capabilities before Russia and China see them. But after Hassett's FDA comparison started rattling the industry,

[00:34:12] Chief of Staff Susie Wiles posted the night of the interview, saying the White House is not in the business of picking winners and losers, and that it is leading an America First effort that empowers America's great innovators, not bureaucracy. A senior official also told Politico that Hassett's remarks were taken out of context and that the White House is looking for partnership with companies, not regulation.

[00:34:36] And then on the night of Friday, May 8th, Bloomberg reported, quote, the Trump administration is preparing to order US agencies to partner with AI companies to protect networks from AI-enabled cyber attacks, though the directive would stop short of requiring government approval for cutting-edge models, according to people familiar with the matter.

[00:34:57] So, Paul, we started talking last week about this idea of [00:35:00] soft nationalization. It sounds like there's been some back and forth here. It doesn't sound like, right out of the gate, the Trump administration is going to be reviewing models. But the fact this was even discussed, is this a sign we're headed further down the road of soft nationalization?

[00:35:15] Paul Roetzer: They have to find some way to do this, but the administration definitely seems like it's trying to thread the needle and test different messaging points. They put it out there, and by Tuesday they're thinking about basically approving the models, and by Thursday or Friday it's like, no, no, no.

[00:35:31] Like, we're just still in draft form on these things, 'cause I'm sure they got massive blowback that day from the tech community. So I don't know. The way I think about it is, there's probably agencies and advisors within the administration who are truly spooked. They've seen the Anthropic threat models.

[00:35:46] They have real concerns about how more advanced models could be used by bad actors and nations to target individuals, businesses, governments, and the nation's infrastructure. And I think those concerns are well placed. There are real [00:36:00] unknowns ahead that we're not sure how to handle.

[00:36:02] And then there's probably advisors who hate the idea of government regulation and would see these government efforts to put more controls in place as a threat to innovation and our ability to compete with China, unless they can put their thumbs on the scale and influence how the regulation happens, which is probably what they're trying to do.

[00:36:19] Regulatory capture, I think, would be the term we've previously talked about, right? Yeah. So then there's a whole bunch of people in Congress who are at a beginner-level understanding of what AI is and what it's capable of today. Maybe they have some chats with ChatGPT, but they have no concept of the agent stuff and reasoning models and self-improvement that we're gonna talk about.

[00:36:36] They don't understand any of that stuff. And so when they try and look one to two years out, you know, this administration's got, what, three years left or something? We've got midterms happening right now. When you just look one to two years out, they have no concept of what these models mean for the economy, jobs, national security, things like that.

[00:36:53] So the one thing that really concerns me, and it sort of jumped out at me with this idea of the government vetting these models, is this: [00:37:00] if it was a purely scientific process, if we had experts in place who everyone agreed were absolutely experts on safety and alignment and the threats of these models, and they were doing truly apolitical work, then I could see a version of a future timeline where this works. In today's political climate,

[00:37:20] I see zero chance of that happening. Just this week, the New York Times reported that the Food and Drug Administration has, in recent months, blocked publication of several studies supporting the safety of widely used vaccines against COVID and shingles. A spokesman for the Department of Health and Human Services confirmed this.

[00:37:38] The studies, which cost millions of dollars in public funds, were conducted by scientists, who should be apolitical, at an agency that worked with data firms to analyze millions of patients' records. They found serious side effects to be very rare. In October, the scientists were directed to withdraw two COVID vaccine studies that had been accepted for publication in February.

[00:37:57] Top FDA officials did not sign off on [00:38:00] submitting abstracts about studies of Shingrix, which is for shingles, to a major drug safety conference. Withdrawing the studies is the latest step by the administration to try to limit access to vaccines. So that's straight outta the New York Times.

[00:38:12] But to me it's an example of, well, maybe the reports were wrong. I don't know. Maybe you study millions of, you know, people, and the reports are wrong. But the government is intentionally trying to keep that data from the public. And that shouldn't be a super controversial topic. It's kind of like a cut-and-dried scientific study.

[00:38:35] You do it, and the results show they're positive. It should be a relatively objective thing. When you get into models, we're talking about a lot of subjectivity. Yeah. The outputs of the models: are they biased toward one political party or religious affiliation? You have all of these other surface areas that come into play, where if we can't agree on whether something that should be relatively objective is or is not effective [00:39:00] against these

[00:39:00] conditions, then across the whole spectrum of things that would need to be evaluated in models, I can't imagine a scenario in today's climate where these things would be unbiased, right? And truly scientific. So then you get into, even without any formal nationalization efforts, the government already has tremendous ability to exert influence on these labs.

[00:39:20] We saw it with the Department of Defense attacks on Anthropic. Now, I don't think we should put the future of the nation and humanity in the hands of five private companies with no government oversight. I'm not an advocate of, hey, these companies just do whatever the hell they want. But I don't love the current path either.

[00:39:35] So I'll reference this Dean Ball thing. Mike, I don't know if you've read this one yet, but "Before Leviathan Wakes" was the title of this X post. Now, Leviathan, I had to look this up, to be honest: a massive, powerful sea monster from biblical theology, often symbolizing chaos, evil, and untamable nature.

[00:39:53] So, Dean Ball. I'll just read a few of these excerpts. He said: my political [00:40:00] philosophy, as with many reflective people on the right, is characterized by a fundamental and irreconcilable tension between libertarianism and conservatism. Fundamentally, I view the state as a kind of tragic necessity, something we must merely tolerate, because without it no civilization we can conceive of is possible.

[00:40:16] Here's what that means in practice: I oppose literally almost all AI regulation. I do not think there should be new laws to regulate algorithmic discrimination or algorithmic addiction or algorithmic pricing. I despise the notion of regulating algorithmic design, and I especially despise the idea of judges and juries second-guessing the algorithmic design choices of others,

[00:40:35] as seems to be the current direction of US tort law. I'm opposed to efforts by bureaucrats to inject ethics or rules against misinformation into information technologies. I reject most regulation of AI use by businesses and consumers, believing as I do that existing law, plus the private sector simply figuring it out, will resolve most mundane AI governance challenges.

[00:40:55] I also am not a doomer. I do not believe AI is going to kill everyone, or at least, [00:41:00] being unable to prove a negative, I'm skeptical of existential risk. I'm opposed to pauses and bans on AI development. I am uncertain about the labor market impact of AI in the future, but I'm skeptical of the notion that AI will destroy human work, and strongly opposed to regulations or taxes designed to remedy the problem.

[00:41:16] In short, I believe almost every single idea in AI policy is bad, and I disagree with the vast majority of AI policy proposals. That being said, my preference for light regulation extends beyond AI, and he gets into how, you know, existing laws are kind of bad and sometimes they just don't work.

[00:41:32] But: I love my country, and I wanna save my country from being strangled, though the conservative in me fears that saving it may require radical change. There's no solving this tension, no way out of the paradox. The classic liberal in me is always driving to solve problems. The conservative in me knows that the most important problems in life have no solution. Which brings him to the niche of AI regulation that he does affirmatively support today:

[00:41:52] The management of potential catastrophic risks from AI by the state. The potential of AI posing catastrophic [00:42:00] risks is not hypothetical. We have seen AI systems that might allow malicious actors to perpetrate devastating cyber attacks on critical systems like hospitals, banks, power plants, and the like.

[00:42:09] And it seems probable other domains of catastrophic risk, such as biological weapons development, will become live problems soon. So he supports regulations to try to mitigate the catastrophic risks of AI. He then goes into four reasons, and the fourth one, the one I thought worth highlighting, is practical.

[00:42:27] Once AI models have this potential, of course the state will get involved. Do you think the national security apparatus of the United States will ignore models with the potential to be weaponized, both by America and against America? Obviously not. And then he gets into his fear about the fight between Anthropic and the national security apparatus, which is realizing it has to have some control over this.

[00:42:47] And then he goes into citing Tyler Cowen, who put it recently: we thus want sustainable methods of perpetual interference that are actually somewhat useful from a safety perspective and give governments some [00:43:00] control, and feeling of control, but not too much. That's why he supports AI regulation which, in brief summary, involves the creation of private institutions to sit between the state and the frontier labs, precisely so that they can mediate between the inevitable power-seeking impulses of the state

[00:43:16] and the private business of the frontier industries. Sustainable methods of perpetual interference: that's what it comes down to. And I would recommend reading the whole thing. There's a lot more to it, but his basic premise, his proposal for how we solve this, is not to let nationalization happen, not even to let soft nationalization happen.

[00:43:33] You have to have an intermediary, an unbiased body in the middle that is truly objective, that figures this out, allows America to compete, allows the labs to keep innovating, but does not give the government control to where someone in the government doesn't like the output of one of the models, or doesn't like the CEO of one of the model companies, and all of a sudden their model isn't getting approved.

[00:43:57] And if you think that's an [00:44:00] unrealistic outcome, you do not follow government very much, because that is what is already happening. So it's totally messy. I don't see any near-term resolution to this. But again, what we wanna do on this show is present people with possible solutions.

[00:44:17] Talking about the problem is fine, but we want to hear ideas of how to solve it, and this is a direction of an idea that's worth highlighting, I would say.

[00:44:27] Mike Kaput: Yeah, I could not recommend this essay more, because I was just nodding along, even though I probably, you know, disagree on certain parts of, yeah,

[00:44:35] totally, the political assumptions. And I realize this is not presenting a solution, but I think we have to be honest about the fact that, if you're following this closely enough, I don't see a single pathway forward where increasing nationalization doesn't happen, based on the factors he outlined.

[00:44:53] Now, that doesn't mean it's necessarily inevitable, but I just don't see a path where suddenly the national [00:45:00] security apparatus and states stop operating the way they've always operated, you know? Yeah. Which is, I think, what he's getting at: you have to act now, because this will happen otherwise.

[00:45:12] Paul Roetzer: Yeah. I'm not like a huge conspiracy theory guy, but my assumption is the government is already building their own lab. If they haven't, they will. They will pull in the resources necessary to build their own models. Now, how you disguise that is hard, given the compute power and the energy that would be needed to do that.

[00:45:41] Maybe they'll do it in the open. I don't know. Maybe they'll just build an open, like, US model. I could see that happening. But yeah, I think either they nationalize in some way, or the government builds their own. And I mean, you could go read, oh, what is it, [00:46:00] the brain one, the brain book, Mike, that we liked?

The Pentagon's Brain.

[00:46:05] Mike Kaput: Oh, Pentagon's Brain.

[00:46:06] Paul Roetzer: Yeah. Yeah. Go read that. The government's been trying to do this stuff for 25 years. It's a widely sourced, incredible book about the government's efforts to simulate the human brain, going back to like the nineties and even earlier. So.

[00:46:21] They're not new to trying to do this, and they're pretty good at keeping stuff under wraps when they want to. So I would not be surprised at all if the government just keeps this at arm's length and says, we're not gonna rely on these labs either, we're gonna go build our own. And that's where you get into the Defense Production Act stuff, where it's like, all right, we need 200,000 chips from Nvidia, and we're gonna take first in line before the rest of you get yours.

[00:46:41] I don't know.

[00:46:42] Mike Kaput: Yeah. All the DARPA stuff in that book is so interesting. And, not to be conspiratorial at all, but the CIA has a venture funding arm called In-Q-Tel that has invested in technology. If you think they're not paying attention to this, you're crazy.

[00:46:55] Paul Roetzer: And they also took a 10% stake in Intel last year.

[00:46:58] Mike Kaput: Exactly. So, [00:47:00] I'm not saying that means they own that company. It just means, if you think people aren't making moves and paying attention to these things, I think you've got another thing coming.

[00:47:09] Paul Roetzer: Yes.

[00:47:10] State of AI for Business Report Preview

[00:47:10] Mike Kaput: All right. So Paul, before we dive into rapid fire, this week's episode is also brought to us by our 2026 State of AI for Business Report webinar.

[00:47:19] This is taking place live on Thursday, May 14th at 12:00 PM Eastern. For the sixth year running, we are collecting data on how AI is actually being used in organizations. And this year, for the first time, we went beyond marketing, which is what we used to capture data on, to figure out how AI is being adopted across every function, industry, and company size.

[00:47:41] And the result is honestly the most comprehensive look at where business actually stands with AI in 2026. In the live session, myself, Paul, and our Director of Research, Taylor Radey, are gonna walk through all the data and reveal the top findings: where adoption stands, the [00:48:00] gap between how mature organizations think they are and how mature they actually are.

[00:48:04] How adoption varies by role, industry, and company size. What the biggest barriers to adopting AI are. And a bit about where AI is heading and how to prepare. Registration for the webinar is free. Everyone who registers gets ungated access to the full 2026 State of AI for Business Report, which is dropping on Thursday.

[00:48:25] The webinar is the launch event for this, and we're also sending out on-demand recordings, so go ahead and register even if you can't make it live. To do that, we'll put a specific link in the show notes, but you can also go to SmarterX.ai, click on Education, and then Webinars, and you'll find it right there.

[00:48:43] Okay, Paul. First up in rapid fire, very closely related to this: we wanted to spotlight and preview in advance some of the findings from the State of AI for Business Report. Like I alluded to in our mid-roll ad, we have expanded this into a fully [00:49:00] cross-functional view of AI adoption across every type of business and role.

[00:49:03] We had more than 2,100 people answer 34 questions this year for the survey. The full report is dropping next week during that webinar, but I wanted to tease two really important findings today. The first is one of the most striking numbers in the entire report: 71% of respondents believe that AI will eliminate more jobs than it creates over the next three years.

[00:49:29] Just 13% expect net job creation. And this is remarkably consistent across every role and seniority level; CEOs and entry-level employees alike agree in roughly equal numbers. And this number, which we have measured for years in the State of Marketing AI Report, is going up by double digits. The pessimism is going up by double digits every single year.

[00:49:52] And we are seeing the exact same trajectory when we expand this to non-marketing roles. Interestingly, when we asked [00:50:00] respondents about the impact on their own specific role, only 21% said they were seriously concerned. So the workforce broadly expects disruption. They just don't think it will happen to them.

[00:50:11] So, Paul, I kind of wanted to just highlight that as a top-line finding here and maybe get your initial thoughts. I know we'll unpack all this a lot more on the webinar on Thursday.

[00:50:22] Paul Roetzer: Anytime we feature research, we always say: understand how it was conducted and who responded. And the thing Mike and I are always very clear on with our own research, especially when we do this state of the industry, is that the people taking this are likely people who subscribe to our newsletter, listen to our podcast, are part of AI Academy, or come to our conferences.

[00:50:43] These are AI-forward professionals who are seeking knowledge about AI; they're the more likely people to take this, right? So while I totally get the 71% worried about elimination, the only 21% saying they were concerned about their own role could definitely [00:51:00] be looked at as, okay, maybe people just aren't registering that they're in the crosshairs too.

[00:51:06] Maybe it's also that the people taking this feel they have put themselves in a position to thrive through the disruption, because they're doing all the things we always talk about as AI-forward professionals. Yeah. And so they're looking around at their peers. You ask me, are my coworkers in trouble?

[00:51:26] I'm gonna say yes. Yeah. You ask me if I'm in trouble, I'd be like, I don't think so. I'm good. I'm doing five x the work I was doing last year, I'm taking all the courses. So, you know, it's just something we always have to think about when we think about the data. But I'm super excited to see the final report and go through it with everyone.

[00:51:43] Because one of the things I found so fascinating, Mike, is we have these qualitative questions at the end. Like you alluded to, this is 30-some questions. It's not an insignificant commitment of time; it maybe takes 10 or 12 minutes to go through it. But the completion rate was through the [00:52:00] roof.

[00:52:00] Mike Kaput: It was wild.

[00:52:00] Paul Roetzer: Yeah. But then the qualitative part, where we ask questions like, what concerns you about AI? What are you most excited about? What was the data point, Mike? Like 90%?

[00:52:08] Mike Kaput: 90 plus percent filled in all of these.

[00:52:11] Paul Roetzer: Yeah. So at the end of the survey, after you've already answered all the questions, we then ask you open-ended questions to fill things in, and 90-plus percent out of almost 2,100 people took the time to write things.

[00:52:23] And I saw a report that was like 70-some pages of just the "what are you excited about when it comes to AI?" responses.

[00:52:29] Mike Kaput: Yeah.

[00:52:29] Paul Roetzer: So we have this amazing qualitative data set that we're gonna share as well: these insights people provided into their concerns, their struggles. It's just amazing.

[00:52:41] And so I'm so grateful for everyone who took the time to be a part of the research. You're gonna be really interested in seeing the final product, and hopefully it helps you make the case internally to pull other people along.

[00:52:55] Mike Kaput: And you know, Paul, I'm almost equally, if not more, excited for those [00:53:00] qualitative responses.

[00:53:01] The data's amazing. We're gonna learn a lot from that. But I think it's also so nice to be able to see, 'cause people were very candid. Yeah. Based on responses I've reviewed, you're gonna see that people feel the same things you're feeling, are struggling with the same things you're struggling with, and maybe are hopeful about similar or different things than you're hopeful about.

[00:53:20] And that's really helpful, I think, to feel at least like, okay, I'm not totally alone in figuring this out.

[00:53:29] Paul Roetzer: And, you know, so much of what we talk about here, like we say, we're always trying to present all these different perspectives and cut through the political sides of it, not taking sides in anything.

[00:53:38] We're trying to just give the information as factually as we possibly can. And then you get an opportunity with research like this, where you can get thousands of people, and now all of a sudden you have all these other perspectives. And, you know, our goal is to help you form your perspective, to give you enough information, as unbiased as we possibly can, so that you can feel it out and figure it [00:54:00] out.

[00:54:00] And like Mike said, maybe you find those sentiments that align perfectly with how you're feeling. It's like, okay, cool, make those connections. So yeah, it's always one of my favorite things we do each year: do this research and then release the report.

[00:54:13] Mike Kaput: Yeah, the same. All right.

[00:54:14] xAI Folds Into SpaceX, Does Compute Deal with Anthropic

[00:54:14] Mike Kaput: Okay, next up. We alluded to this before: a big announcement from Anthropic and SpaceX. They announced this week that Anthropic has signed an agreement to use all of the compute capacity at SpaceX's Colossus 1 data center in Memphis, which is more than 300 megawatts and roughly 220,000 Nvidia GPUs.

On the same day, and this is why I mention that it's SpaceX's data center, Elon Musk posted that xAI will be dissolved as a separate company and folded into SpaceX, with the combined entity rebranded SpaceX AI. This is interesting because Musk, as recently as earlier this year, was calling Anthropic's models, quote, misanthropic and evil, on X.[00:55:00]

[00:55:00] This past week, though, he wrote that he had spent a lot of time last week with senior members of the Anthropic team, and no one set off his evil detector, so as long as they engage in critical self-examination, Claude will probably be good. Anthropic CEO Dario Amodei said at a conference that the company has seen 10x growth in annualized revenue and usage in Q1 alone, and is working as quickly as possible to secure more compute.

[00:55:27] So this new capacity, good news for Anthropic users, is letting them double Claude Code's five-hour rate limits for Pro, Max, Team, and Enterprise plans. They're removing peak-hours limit reductions on Claude Code for Pro and Max accounts, and significantly raising API rate limits on Opus. This is in addition to the enormous compute stack that Anthropic is assembling.

[00:55:52] They have an up to five gigawatt agreement with Amazon, a five gigawatt agreement with Google and Broadcom, a $30 billion Azure [00:56:00] partnership with Microsoft and Nvidia, a $50 billion US infrastructure investment with FluidStack, and a reported $200 billion commitment to Google's cloud and TPU chips. As part of this announcement, Anthropic and SpaceX also said they have expressed interest in partnering on multiple gigawatts of orbital AI compute capacity, which is a fancy way of saying AI data centers in space.

[00:56:26] So Paul, I didn't have this on my bingo card personally. What did you make of this? And I have kind of a dumb question here: if they're using all the compute at this Colossus data center, does that mean xAI doesn't need it? Doesn't want it?

[00:56:42] Paul Roetzer: Yeah, this is one of those ones where I was like, my timeline is just drunk. Like, I'm scrolling X on May 6th, and I was like, what?

[00:56:48] Like, you're seeing the stuff at the trial, the evidence at the trial, and then the dissolving of xAI was like a reply to somebody's tweet. It wasn't [00:57:00] even an announcement from SpaceX or xAI; it was just like, yeah, we're gonna dissolve it, it's not gonna be a separate company anymore.

[00:57:05] It's gonna become SpaceX AI. I was like, what,

[00:57:07] Mike Kaput: by the way? Yeah.

[00:57:08] Paul Roetzer: Yeah. And you gotta, like, click through and make sure it's actually Elon Musk and this is really happening. So bizarre, just a series of bizarre stuff. The only thing I initially thought was that it seems like a concession that they're not gonna try and compete with OpenAI and Anthropic now.

[00:57:25] Right?

[00:57:25] Mike Kaput: Right.

[00:57:25] Paul Roetzer: Like, it truly becomes: they're gonna build the AI for SpaceX, for Tesla. It's just gonna focus on product and, you know, the bigger thing, versus trying to be a dedicated lab. Now, Colossus 2 is gonna allow them, it has the more advanced Nvidia chips, I think it's gonna allow them to still train.

[00:57:46] My understanding is that this Memphis plant, Colossus 1, has like a symphony of Nvidia chips. They're not all the same, okay? And thereby it's hard to do parallel training on these chips. [00:58:00] So they're better for inference than they are for training. And it might be that that's cool with Anthropic. And again, just from reports, they weren't

fully utilizing the data center. So you had, like, hundreds of thousands of Nvidia chips just sitting around not being used.

[00:58:17] Mike Kaput: Okay.

[00:58:18] Paul Roetzer: And Anthropic is shopping for compute wherever they can get it, to do inference for Claude Code and Cowork and all of the demand that's there. And so I think I tweeted something like, a few hundred billion can make, you know, hating people go away pretty fast.

[00:58:32] Especially if you're SpaceX and you're trying to IPO next month and you're losing $6 billion a year, and all of a sudden you get a $6 billion deal that walks in the door and you can wipe out the loss. And now your valuation at IPO jumps. Plus, if you're sitting on a data center that is not being used at full capacity, and instead you position yourself as a cloud company, so now you're basically competing with Azure and Google [00:59:00] Cloud and AWS, like, you're an alternative to that.

[00:59:02] Now your valuation skyrockets again. So I actually think it's probably a genius move by Elon Musk. It's like, well, let's get rid of the cap. Like, oh yeah, they're fine, my evil detector didn't go off, it's all good, we made up. Right? I still hate Dario, but, like, whatever. And he was trashing Amanda Askell, I mean, just literally trashing them.

[00:59:21] Everybody. Calling 'em names, everything, like, two weeks ago. But it's like that all goes away. And again, from a business perspective, probably super smart. You're gonna use the capacity. You actually now have leverage over them: if you decide they're evil again, you can yank their compute power. And I think in the process, he also gets to stick it to OpenAI.

[00:59:39] It's like, oh yeah, I'm gonna go do a deal with your main competitor and give them the compute they're looking for so they can then improve their product and pricing and screw you guys. So, I don't know. I mean, it's kind of evil genius shit from Elon Musk, honestly.

[00:59:54] Mike Kaput: It really is. And, you know, total speculation, because this is such a novice opinion, but it is interesting to see[01:00:00]

these companies lean into their strengths here, right? Because, I mean, who knows if data centers in space ever become a thing, but there's only one company that can do that, and that is SpaceX.

[01:00:12] Paul Roetzer: Yeah.

[01:00:13] Mike Kaput: There are many companies that can do decently good AI models.

[01:00:16] So it's like, why bother going toe to toe with Anthropic and OpenAI there? Becoming the cloud provider is amazing. Plus, then you can focus on robotics and then use people's models inside robots, or your own.

[01:00:27] Paul Roetzer: Right? And then, you know, I'll throw out one other thing. So, we've talked about how Google owns 14% of Anthropic.

[01:00:34] They also happen to own 6% of SpaceX.

[01:00:36] Mike Kaput: Nice.

[01:00:37] Paul Roetzer: So it's like, who always wins? Google. Like, no matter what we're talking about, just assume there's a footnote that Google also wins in this deal.

[01:00:47] Mike Kaput: yep. Yep.

[01:00:49] Has Recursive Self-Improvement Arrived?

[01:00:49] Mike Kaput: Okay, so next up: this past week, Jack Clark, one of Anthropic's co-founders and head of the company's recently launched

[01:00:58] Anthropic Institute, [01:01:00] published an essay in his Import AI newsletter arguing there is a roughly 60% chance that AI systems will become capable of end-to-end, no-human-involved AI R&D by the end of 2028. So he says this is basically a model that can, quote, plausibly autonomously build its own successor, with an early proof of concept expected, on his timeline, at non-frontier scale within a year or two.

[01:01:27] He said he believes we are living in the time that AI research will be end-to-end automated, and that crossing this threshold means crossing into a nearly impossible-to-forecast future. He makes this argument based on a bunch of public benchmark data that he has seen going up and to the right over the last several years.

[01:01:46] And he's talking about the idea that AI is already starting to improve AI, and that we can expect the code to be fully cracked here by 2028. So he talked about how, on an Anthropic internal LLM training optimization [01:02:00] task, for instance, Claude Mythos Preview gets a 52x speedup over the starting code, where a human researcher took four to eight hours to achieve just a 4x improvement.

[01:02:11] Anthropic also recently demonstrated automated alignment research, where teams of AI agents beat a human-designated baseline on a scalable oversight problem. And he notes that the frontier model industry is openly chasing this outcome. He talks about how OpenAI has said it wants to ship an automated AI research intern by September 2026, which we've talked about.

[01:02:34] A new lab called Recursive Super Intelligence just raised $500 million with the explicit goal of automating AI research, and Anthropic and DeepMind are both publishing on automated alignment research. So Paul, this really comes back to a concept we've talked about, recursive self-improvement, and the importance of it.

[01:02:53] It sounds like Jack Clark is pretty convinced there's a better than average chance we're going to get that in the next couple [01:03:00] years. What did you think of this?

[01:03:01] Paul Roetzer: I think it's important to, again, just stress: this is a co-founder of Anthropic. This is someone who sees the future models before we all do.

[01:03:11] He sees the trend lines of internal data. Now he's making the case with publicly available data.

[01:03:17] Mike Kaput: Yeah.

[01:03:18] Paul Roetzer: But he also has firsthand knowledge and experience of where these models are going. METR just dropped, I think this was on Friday, their updated analysis of Mythos, and they said that it was like 16 hours at 50%, but that they only have five benchmarks that exceed the 16-hour mark.

[01:03:35] So they can't really even assess Mythos until they create totally new benchmarks, because it's basically off their charts. They have no way of actually assessing it. So, recursive self-improvement is one of those dimensions of AI progress we've talked about many times. But the whole premise is: right now, humans with AI are writing the code, cleaning the data, designing the experiments. Recursive [01:04:00] self-improvement does all this on its own.

[01:04:01] It just starts doing everything. And so you basically have AI that spends its day writing a better version of its own source code, and then the better version goes live, and then it's faster at writing the next version. So you basically enter an intelligence explosion, where every day the model is getting smarter and then making itself smarter, making the new version of itself.
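[Editor's note: the compounding loop described here can be made tangible with a quick back-of-the-envelope model. To be clear, this is an illustrative sketch, not math from the episode or from Clark's essay: the 18-month baseline echoes the GPT-4-to-GPT-5 gap Paul cites later, and the 3x per-generation speedup is a made-up assumption.]

```python
# Toy model of the compounding loop described above. Every number here is
# an illustrative assumption: we assume each new model generation speeds
# up R&D on its successor by a fixed factor.

def release_timeline(first_gen_months, speedup_per_gen, generations):
    """Cumulative months at which each model generation ships."""
    timeline, elapsed, gen_time = [], 0.0, float(first_gen_months)
    for _ in range(generations):
        elapsed += gen_time
        timeline.append(round(elapsed, 2))
        gen_time /= speedup_per_gen  # the smarter model iterates faster
    return timeline

# Human-paced R&D: a new generation every 18 months, no compounding.
print(release_timeline(18, 1.0, 5))  # [18.0, 36.0, 54.0, 72.0, 90.0]

# Automated R&D with an assumed 3x speedup per generation: five generations
# arrive in under 27 months, and release gaps shrink from months toward days.
print(release_timeline(18, 3.0, 5))
```

Under those assumptions, five human-paced generations take 90 months, while the compounding version ships all five in under 27 months, with the gap between releases collapsing toward zero; that collapse is the "intelligence explosion" dynamic being described.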

[01:04:24] And so this means the timeline accelerates. The intelligence explosion timeline sort of moves out of sci-fi and starts becoming quite real. And that presents near-term strategic challenges to companies, governments, workers, educators. We're having enough trouble dealing with today.

[01:04:44] What if these things start getting smarter, fast? It changes who can control AI progress. If the systems can perform AI R&D, the frontier labs may gain enormous leverage. A small number of companies with the compute, the data, the talent, and the [01:05:00] deployment channels can move even faster. So if one of the labs unlocks this first, in theory, they can hit escape velocity from the other labs.

[01:05:09] If the others are months behind, you can basically always stay ahead of them. It obviously raises immense safety and governance concerns, because again, we're having trouble managing today's models and figuring out the political systems and the oversight systems to keep those safe. What happens if they're redesigning these things every week? Like, is the government gonna vet those?

[01:05:32] Like, how do you do that? And then the upside becomes massive: science, medicine, cybersecurity, robotics, energy. We're getting to AGI and superintelligence, and all the unsolvable problems become solvable. But then the downside hits: biosecurity risk, market disruption, concentration of power, loss of human oversight.

[01:05:51] So, I mean, the best way for me to think about it, to make it tangible, is: to go from GPT-4 to GPT-5 took like 18 months, I [01:06:00] think.

[01:06:00] Mike Kaput: Yeah.

[01:06:00] Paul Roetzer: We've now been on GPT-5 for a while, and we'll be on it for a while longer. We're just getting these point increases, like 0.4, 0.5, whatever. So let's just ballpark it: say nine to 15 months to get the next iteration of a model, to move up a full point from five to six, as an example.

[01:06:19] Imagine that the AI is able to do that for itself in months, or weeks, or days. So something that was taking us months or years, technologically, if it's provided with the right energy and computing power, it can achieve in weeks and days. And so the bottleneck moves from, I don't have enough smart human AI researchers,

[01:06:42] there's only 10,000 in the world, to, well, I've got all the AI researchers I need, I just don't have enough chips and energy. And then that's where you start to really deal with these issues. So if the chips and energy are there, if the government doesn't put the guardrails in place to slow the building down, then you truly get the intelligence explosion, [01:07:00] which my mind has a hard time accepting will happen in that timeframe.

[01:07:06] I think technologically the labs will probably have the ability to make it happen. I just feel like one of these obstacles, if not multiple of them, the chips, the energy, the government regulations, something is going to not allow it, to where by 2028 we're getting new models every three weeks. I don't know.

[01:07:27] But again, to his point, it's so hard to wrap your head around an exponential and actually try and comprehend it. It's like looking up at the stars at night and trying to actually comprehend the size of the universe. You can see that they're out there, but you cannot envision the size, and that's kind of what an exponential feels like.

[01:07:47] You can sit here all day and go, wow, that'd be weird, and still have no idea how weird it would be.

[01:07:53] Mike Kaput: Yeah. It's interesting to think that the two biggest bottlenecks are essentially, like, people and physics, right? Yeah. At this stage. And those [01:08:00] are really significant, because they're not something that AI can get around by improving itself.

[01:08:05] You have to build these things, or you have to have people's minds change, or incentives change.

[01:08:10] Paul Roetzer: Yeah. And, I mean, the only hope for humanity and society at the moment is that to do that, you are going to have to be one of those five companies, or the government, right? To build at that scale.

[01:08:22] You're gonna have to. But, man, I don't even wanna think about this: if you also build an automated AI researcher and you can do it in an efficient way, then in theory you could be an individual and build it, you would just build it slower. But you could do it with some GPUs

[01:08:40] Paul Roetzer: And man, that would be weird.

[01:08:41] Like, if it escapes the lab. If you enable any individual creator with some GPUs in their basement to have an automated AI researcher.

[01:08:53] Mike Kaput: Yeah.

[01:08:53] Paul Roetzer: And you take an open source model. Shit.

[01:08:57] Mike Kaput: Well, look, I mean...

[01:08:58] Paul Roetzer: I don't think about that stuff today.

[01:08:59] Mike Kaput: It's [01:09:00] super sci-fi, but genuinely, if you get to the level of systems we're talking about, there may be some very creative and unanticipated ways a system like this would want to solve its own power and compute constraints.

[01:09:12] Paul Roetzer: Totally.

[01:09:12] Mike Kaput: Stuff like that.

[01:09:12] Paul Roetzer: Which is the concern of, what's that guy's name? The doomer guy, begins with a Y.

[01:09:18] Mike Kaput: Oh yeah. Yudkowsky.

[01:09:19] Paul Roetzer: Yeah, yeah. I mean, this is the scenario. It's the lab escape. Yeah. It's where it just wants out and

[01:09:25] Mike Kaput: it wants out. It wants, it's

[01:09:26] Paul Roetzer: smarter than us

[01:09:27] Mike Kaput: and it borrows compute from every machine, type thing.

[01:09:30] And obviously it's, like, sci-fi, but you can kind of start to see how you could get to that perspective from one point of view. Yes. Yeah.

[01:09:37] Paul Roetzer: Yes.

[01:09:37] Mike Kaput: Interesting.

[01:09:38] Paul Roetzer: Yes.

[01:09:38] Anthropic and OpenAI Enterprise Joint Ventures

[01:09:38] Mike Kaput: But in the meantime, bringing things a little back down to earth here: this past Monday, both Anthropic and OpenAI announced nearly identical joint ventures aimed at selling enterprise AI services to portfolio companies of major private equity firms.

[01:09:56] So these two announcements literally landed within hours of each other. [01:10:00] Anthropic's is a $1.5 billion joint venture anchored by Anthropic, Blackstone, and Hellman & Friedman, each putting in roughly $300 million. Goldman Sachs is in for about $150 million. General Atlantic, Apollo Global Management, Leonard Green, GIC, and Sequoia Capital round out the cap table.

[01:10:19] And what this entity does, basically, is act as a consulting arm that helps mid-size companies, especially PE-backed ones, incorporate AI across their operations. Right at the same time, Bloomberg reported OpenAI is raising for a similar venture called the Deployment Company.

[01:10:39] Their vehicle is a bit bigger: $4 billion raised from 19 investors at a $10 billion valuation. Some of the participants include TPG, Brookfield Asset Management, Advent, and Bain Capital. The structure is basically the same: the JVs raise capital from alternative asset managers, then channel that capital into building [01:11:00] enterprise AI deployments inside those investors' portfolio companies, with the investors capturing more value from any resulting contracts.

[01:11:07] So both ventures are expected to lean on the forward deployed engineer model popularized by Palantir, where engineers sit inside customer organizations to build into existing workflows. So Paul, I found this super fascinating. We had kind of anticipated they were going in this direction; they had announced a couple other

[01:11:26] initiatives to sell into the enterprise. But what stands out to you most here? I thought this was a really cool idea. It seems like at least one future that every organization is gonna go towards.

[01:11:38] Paul Roetzer: There's a number of really interesting elements to this, but I'll kind of keep it brief for now.

[01:11:44] So one is this concept that, again, if people are new to the show, I used to run a marketing agency, and we were HubSpot's first partner back in 2007. So we were sort of the origin of their partner, [01:12:00] what do they call it, solutions partner ecosystem today.

[01:12:08] And HubSpot always touted this research, at least in the later years when I had my agency, that for every dollar of software there was $6 of services. And that's what the solutions partner ecosystem was there to do. So if HubSpot sold $100 million in software, there was $600 million in services to be done by this ecosystem.

[01:12:27] Yeah. So I think that's directionally where they're looking. They're also looking at it as: for our technology to be used fully, for them to get the full value and us to squeeze more kind of traditional software-type revenue out of them, we need to go in and do the work, not rely on outside people to do the work.

[01:12:48] The other component is, I'm sorry, but you're not doing deals with PE firms to just go in and optimize firms and [01:13:00] hire tons of people, right? This is explicitly to go after the $6 trillion in human labor of knowledge workers. And so if you're a PE firm and you have 200 companies within your portfolio, you bring them in, and then there's this compounding value, because

[01:13:18] They can make sure that all 200 companies are utilizing OpenAI or Anthropic in a fully optimized way. They're driving innovation and growth, but you're also looking at the replacement of people. It's way easier to do if you have the people who know the models and know what they're capable of just come in and say, okay, let's look at the sales function.

[01:13:36] And it's like, okay, we don't need these seven roles anymore, we're gonna build agents to do those roles, and here's what's gonna evolve. So I think it's just their play to figure out what the future of the org chart looks like. But, I mean, PE firms are there to maximize returns,

[01:13:52] Mike Kaput: right?

[01:13:53] Paul Roetzer: you know, reduce costs, increase revenue, but do it where you want the revenue per employee number to skyrocket.

[01:13:59] And so, [01:14:00] you know, let's say you have a portfolio and the average revenue per employee, let's pick a number, is $400,000. What if it was $4 million instead?

[01:14:08] Mike Kaput: Right?

[01:14:08] Paul Roetzer: And so you're gonna go in with these super aggressive goals to just, like, change the financial dynamics of all this stuff. Long story short, it is gonna be a very disruptive model.

[01:14:20] It'll work. They're gonna generate a ton of money doing this. I have lots of other thoughts on this, but I'll stop there.

[01:14:29] Mike Kaput: Well, to tie back to our Coinbase conversation, guys, this one is not AI washing. This is

[01:14:35] Paul Roetzer: No, this is the real deal.

[01:14:36] Mike Kaput: Exactly what we're talking about.

[01:14:37] Now, this won't happen at every company necessarily, but this is going to lead to some of the reductions and streamlining we've talked about.

[01:14:46] Paul Roetzer: Yeah. I'm really curious what this does to, like, I mean, McKinsey and

[01:14:50] Mike Kaput: Yep. Right. Because that's kind of competing directly with the types of things they would be selling, right?

[01:14:55] Paul Roetzer: Right. Who also resell the models. Yeah. I haven't really had time to, like, [01:15:00] dive into that, but I would be really curious to think through the competitive dynamics of this and

[01:15:05] Mike Kaput: Yeah,

[01:15:05] Paul Roetzer: how the traditional consulting firms are responding.

[01:15:11] Stripe's New Forward Deployed AI Accelerator Role

[01:15:11] Mike Kaput: So next up, very closely related to this, and we talked about forward deployed engineers in the previous segment, but we also saw this week that Stripe posted a new role called Forward Deployed AI Accelerator, Marketing.

[01:15:24] So this is a new role it's hiring for, and each of these accelerators, according to the job description, is embedded with a cohort of about 20 marketers organized by functional team, shared workflow, or location. And the goal is to permanently change how that group works. Stripe describes it as a fundamental transformation in how its marketing organization operates.

[01:15:45] So the success metrics Stripe defines for this role are the number of workflows the accelerator transforms, and the extent to which the marketers in the cohort start every task with an AI tool. Responsibilities include identifying the highest-leverage workflow [01:16:00] transformations and building custom tools, agents, and automations tailored to each marketer's specific work.

[01:16:04] Then coaching them through a maturity model that runs from awareness, to first win, to regular AI integration, to full workflow transformation, to self-sufficiency. Stripe says the role is meant to make AI the default mode for all work: not an occasional tool, but the foundation of how every marketer at Stripe executes.

[01:16:22] And to prepare marketers for an agentic future of designing, building, and overseeing autonomous multi-agent workflows. The role requires five-plus years of experience and demonstrated hands-on AI building, not just chatbot use. The base salary is $132,000 to $198,000, and Stripe is hiring in Toronto, Chicago, or remote in the US or Canada.

[01:16:46] So Paul, we had talked about how this sounds a lot like the labs concept you were outlining on previous episodes, that you've started to kind of toy with at SmarterX.

[01:16:55] Paul Roetzer: Yeah, I definitely like this idea a [01:17:00] lot. I think, combined with the Coinbase stuff we were talking about, this is exactly what I was saying.

[01:17:04] Like, you just gotta look around right now. People are starting to experiment with different things. This is one of those, you know, where-are-the-jobs-gonna-come-from answers. This is actually kind of a cool example of being an AI-forward marketer. And let's say you're really, really good at optimizing workflows and solving problems as a marketer.

[01:17:21] This isn't a job that existed a year ago, and I could see this being a common thing now. The forward deployed thing, I'm kind of done with. It's like, Jesus, can we just call 'em AI ops people or something? So I don't know, it's a little bit overdone already, in my opinion, but I might just be kind of sensitive to seeing that. Now watch, I'm gonna define a job title for us as a forward deployed thing, like, two months from now. If I put that in a job title, Mike, just remind me that I don't like it.

[01:17:51] Mike Kaput: Blacklisted words. Yeah.

[01:17:52] Paul Roetzer: Yeah. But anyway, someone whose job is to sit with a group within a company, within a department, and just [01:18:00] optimize the shit out of that company or that team? That's a great role. And that is not enough money to pay them. I'm telling you right now: if you're in an enterprise and you take, like, an AI-forward marketer and you put them on a team with demand gen or product or whatever.

[01:18:17] And their job is literally just to drive optimization and innovation, they're going to make a massive impact, fast, if they know what they're doing and you allow them the freedom to do their job. $132,000 a year? No way. Yeah. First, the person who does this is gonna have to have like seven to 10 years of experience, I would think, to deeply understand the marketing function and the role.

[01:18:39] They're gonna have some unique knowledge set. But if I'm that person, I'm not taking that job for $130,000. I don't even know if I'm taking it for $198,000. Your impact is gonna be massive if you're in a big company. So yeah, really fascinating, again, on a number of levels here.

[01:18:59] Mike Kaput: Yeah, I think about this a [01:19:00] lot, because directionally, whether we call it this or not, it feels like how my role is evolving.

[01:19:07] How a lot of people's roles at SmarterX are evolving, too. Yeah. And I wonder how much is important here for the accelerators themselves: it's not just the AI building and systems thinking, which is critical, but they're gonna have to be pretty good at communicating and change management. Yes. I don't know if you can just drop someone in who is, like, a wizard with the technology

[01:19:30] Paul Roetzer: Yes.

[01:19:30] Mike Kaput: who has no change management sense, without pairing them with someone who does. Because Stripe is a very unique example, which is amazing. Yeah. But if you deploy this in a more traditional organization, there's gonna be a lot of questions about what you are trying to do to my job, a lot of barriers that you're gonna have to overcome that have nothing to do with systems, I would argue.

[01:19:52] Paul Roetzer: Yeah. And I think maybe that's my point on the salary

[01:19:54] Mike Kaput: Yeah.

[01:19:55] Paul Roetzer: is, in my mind, if this is one person, then all those other things you just described are [01:20:00] part of that person's capabilities and roles. Right? And that is not a mid-level hire.

[01:20:06] Mike Kaput: Yeah, that's a good point.

[01:20:06] Paul Roetzer: Now, if it's literally just someone who can build automations, like,

[01:20:09] They don't need to diagnose the problems and things like that, and don't have a deep understanding of the business. They're literally just like, oh yeah, I'll go build that for you tomorrow. And then they build it and hand it to them. That's a different story, and a great opportunity if that's you.

[01:20:21] Like, get in, get a job like that. Just go build some stuff. But know your benchmarks ahead of time, because your resume is gonna look so damn good, like, six months from now. It's like, I went in and they were spending 150 hours a month on this thing, I cut it to five. I went in and they were doing this, I cut it to this. You are gonna be able to show amazing impact right away. And I would, like,

[01:20:35] I cut it to this like, you are gonna be able to show amazing impact right away. And I would like. Try and negotiate some sort of, performance based comp as it really relates, relates to that role.

[01:20:46] Mike Kaput: Yeah, exactly.

[01:20:48] AI Use Case Spotlight

[01:20:48] Mike Kaput: Okay. So next up, Paul, we have our AI Use Case Spotlight of the week. Every week we give you a quick look under the hood at some real AI use cases we're exploring, building, or deploying in our own work at SmarterX.

[01:20:58] So I'm gonna share one this [01:21:00] week that I found particularly valuable. Like I've mentioned a few times, this past week we finished our 2026 State of AI for Business report. The full report is over 50 pages of data, charts, graphs, tables, and analysis, pulled, like we mentioned, from more than 2,100 professionals.

[01:21:18] But before we shipped it, or finalized it internally, we needed to verify the entire thing for two big things: data consistency between the master dataset and the final report PDF, and also the text of it that we're gonna be using in non-PDF ways, and then a full line-by-line proofread of the report. So this kind of pre-launch verification, I've done a hundred of these types of things at this point.

[01:21:44] This is super tedious. It takes so much time for a small team to split up and do. We did have humans heavily involved in the process, and we always will. But this year I actually gave both tasks to Claude Code. I dropped the master data [01:22:00] set and the report PDF on my desktop and asked Claude to crosscheck every number, percentage, chart, graph, and table in the PDF against the source data.

[01:22:08] Then I had it proofread the entire PDF for spelling and grammar. It returned everything as HTML, as like a list of go check this, go fix this, go look at this, that I could drop straight into a Google Doc to share with the team. So it's just kind of notable here that, again, it doesn't replace the team going through with a fine-tooth comb. But often I've found

[01:22:27] With this approach, you can only stare at a PDF in, like, tiny text over 50 pages and compare it to charts and tables so long before you start to glaze over and hate your life. And that introduces mistakes; unfortunately, you miss things. Claude Code is very good at not missing things. So it's kind of cool.

[01:22:44] Just to note here, you could do this in a number of ways. You don't need Claude Code, but it is kind of cool to do it that way, because you can give it, you know, access to a single folder with these files. It ran code to kind of split things up and analyze it. You could probably get similar results just [01:23:00] dropping this into Claude, but I thought this was a pretty cool way to do it.
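[Editor's note: the data-consistency half of that cross-check is easy to sketch. The snippet below is a minimal illustration of the idea, not the code or prompts Claude Code actually generated on the show; the regex and the sample strings are assumptions for demonstration.]

```python
# Minimal sketch of a report-vs-dataset cross-check: extract every number
# cited in the report text and flag any that never appear in the source
# data. The sample strings below are illustrative, not SmarterX data.
import re

def extract_numbers(text):
    """Pull percentages and plain numbers like '52.5%', '2,100', '14'."""
    return set(re.findall(r"\d[\d,]*(?:\.\d+)?%?", text))

def find_mismatches(report_text, source_text):
    """Numbers cited in the report that never occur in the source data."""
    return extract_numbers(report_text) - extract_numbers(source_text)

report = "Coinbase cut 14% of staff; 52.5% fewer hallucinated claims."
source = "layoff_pct 14% hallucination_reduction 52.5%"
print(sorted(find_mismatches(report, source)))  # [] -> every figure checks out
```

A real pass would also need to parse the PDF and dataset files and handle rounding and reformatted figures, which is exactly the kind of glue work an agent like Claude Code can write for you on the fly.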

[01:23:04] I will note also, Paul, this is the exact kind of AI workflow design and use cases we have been building into Academy. For instance, this past week we just dropped three new Claude-focused lessons: Claude Skills for slide decks, Claude Cowork autonomous workflows, and Claude Desktop as an AI work partner.

[01:23:26] So they don't teach how to do this exact thing, but you can see how, with these mini lessons and the regular gen AI app reviews we do, you can kind of learn sequentially. Not just, hey, what does the tool do, but we're actually focusing on different use cases as well, to hopefully help you do things like this increasingly with these kinds of tools.

[01:23:45] Paul Roetzer: Yeah, that was the vision behind AI Academy. When we reimagined it, we decided it needed to move to real-time education. So we have the, you know, the course series and the certificates and things that are more evergreen. But yeah, I don't know how you do online education these days, or any kind of [01:24:00] education, without this real-time stuff. Something new happens?

[01:24:02] Like, let's look at it, let's do a 20 minute review of it. Let's get it out to people. yeah, it's becoming invaluable for our own team just to stay up on what's going on and then hopefully help other people figure this stuff out too. And this is a really cool example.

[01:24:15] AI Product and Funding Updates

[01:24:15] Mike Kaput: Yeah, it was super valuable. Alright, Paul, our final segment, as always, is AI product and funding updates.

[01:24:22] So I'm just gonna rapid-fire quickly through a bunch of these, and if anything jumps out, obviously stop me and we can discuss. First up, OpenAI has launched GPT-5.5 Instant as the new ChatGPT default model this past week. The company says it produces 52.5% fewer hallucinated claims than its predecessor on high-stakes prompts in medicine, law, and finance, while using about 30% fewer words per response.

[01:24:48] OpenAI also launched ChatGPT for Excel and Google Sheets. This is a sidebar app that lets Plus, Pro, Business, and Enterprise users build, edit, and analyze spreadsheets in natural language alongside their connected [01:25:00] ChatGPT apps and data. Anthropic released Claude for Financial Services, including 10 ready-to-run agent templates for tasks like building pitchbooks, screening KYC files, and closing books at month end, plus a deeper Microsoft 365 integration that connects Claude directly to Excel, PowerPoint, Word, and Outlook.

[01:25:20] Anthropic also updated Claude managed agents with three new capabilities. There's a research preview, quote unquote dreaming feature that reviews past sessions to extract patterns and improve agent memory over time. There's an outcomes mode where a separate grader evaluates agent work against a custom rubric and sends it back if it falls short.

[01:25:40] And multi-agent orchestration that lets a lead agent delegate work to specialist sub-agents in parallel. Microsoft has expanded Copilot Cowork with iOS and Android support. They have reusable Cowork skills that capture how a user wants a recurring task done. They have new [01:26:00] connectors to things like monday.com and S&P Global Energy, and Claude Opus 4.7 is now selectable as a model option right within Cowork. Google has quietly shut down Project Mariner.

[01:26:14] That's the web-browsing AI agent it had highlighted alongside the Gemini 2.0 launch in late 2024. Bloomberg has reported that Apple plans to let users choose which AI model powers Apple Intelligence features in iOS 27, iPadOS 27, and macOS 27 this fall, via a new extensions framework that will support models from Google, Anthropic, and OpenAI.

[01:26:38] Bloomberg also reported Apple's camera-equipped AirPods have reached an advanced testing stage as the company pushes towards AI-native consumer devices. Bret Taylor's startup Sierra raised $950 million in a round led by Tiger Global and GV. This is the agentic customer experience AI startup, and their post-money [01:27:00] valuation is now above $15 billion, just months after its previous fundraise.

[01:27:05] HubSpot has published its vision for an open agent ecosystem, committing to making every action that can be done inside HubSpot also accessible through APIs, an MCP server, and other emerging access methods, so that external agents can both run on HubSpot and run HubSpot. Harvey released the Legal Agent Benchmark, an open-source benchmark of more than 1,200 agent tasks.

[01:27:31] Those span 24 legal practice areas, evaluated against 75,000-plus expert-written rubric criteria, and the benchmark has backing from Nvidia, OpenAI, Anthropic, Mistral, and DeepMind. And finally, DeepSeek is seeking up to $7.35 billion in what could become the largest funding round ever for a Chinese AI startup, as the company shifts from pure research to commercialization.

[01:27:58] So as a result, the lab [01:28:00] is accelerating model releases, hiring product talent from companies like Ance, and building enterprise tools as competition and computing costs rise. Paul, one final note here. We mentioned the AI Pulse survey at the top of the episode. Go to SmarterX.ai/pulse to try that out and take it yourself.

[01:28:21] Takes just a couple minutes, and the two big questions we're gonna talk about are your position on whether powerful AI models should be vetted by the US government and, talking about your own organization, whether you are actively replacing any roles today with AI. So, super excited to see the results of that one.

[01:28:39] Paul Roetzer: See what the courtroom drama brings us this week.

[01:28:41] Mike Kaput: Yeah, you know it's gonna be crazy. I feel like we're still just scratching the surface on how weird this is gonna get.

[01:28:48] Paul Roetzer: Yeah. Yeah. I don't know what else they can do. It's like, it's always a surprise, but yeah, another busy week. Like we keep saying just the product and funding alone could just be the episode each week.

[01:28:58] There's so much to [01:29:00] unpack there, so always make sure to go check the show notes if you heard Mike list through something — obviously we're moving pretty quick. The AirPods stuff's super fascinating to me, the cameras in the AirPods. There's tons of interesting stuff. The dreaming thing from Anthropic is fascinating research.

[01:29:14] Yeah, don't glaze over 'em just because we're running through them real rapid at the end. If there's something that catches your interest, grab the show notes and, you know, go do some extra reading, or check out the newsletter each week. There's just so many things you can pursue each week with AI these days.

[01:29:28] So thanks, Mike, again, especially for doing this on a Saturday, making this work. And thanks everyone for joining us. We will be back. I think we only have one episode this week. Pretty sure. I think so.

[01:29:37] Mike Kaput: Yeah.

[01:29:38] Paul Roetzer: Yeah. Okay. So we'll be back next week. All right, thanks so much. Have a good week.

[01:29:42] Thanks for listening to The Artificial Intelligence Show. Visit SmarterX.AI to continue on your AI learning journey, and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, [01:30:00] taken online AI courses, earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community.

[01:30:07] Until next time, stay curious and explore AI.

Recent Posts

[The AI Show Episode 214]: Musk v. OpenAI Round 2, Coinbase AI Layoffs, AI “Soft Nationalization" & xAI Folds Into SpaceX

Claire Prudhomme | May 12, 2026

Ep.214 of The Artificial Intelligence Show: courtroom drama from the OpenAI trial, a 60% probability estimate on AI self-improvement by 2028, the White House backing away from AI model vetting, and more in our rapid fire.
