The Artificial Intelligence Show Blog

[The AI Show Episode 200]: Anthropic vs. the Pentagon, OpenAI's $110B Round, Interview with Claude Code’s Creator & Block’s AI-Driven Layoffs

Written by Claire Prudhomme | Mar 3, 2026 1:15:00 PM

Episode 200 arrived in the midst of a very dramatic week in AI and we recorded it live with AI Academy Mastery members for the first time!

This week, Paul and Mike discuss an eventful 72 hours beginning with the Trump administration’s ultimatum to Anthropic: remove all guardrails on Claude for military use or face blacklisting. Anthropic said no. By Friday, the Pentagon had designated them a national security supply chain risk, and hours later, OpenAI quietly stepped in to take the contract.

Beyond the Anthropic saga: OpenAI closes a record-shattering $110B funding round, Claude Code creator Boris Cherney says coding is "effectively solved" and explains what that means for every other knowledge worker, and a research essay on AI's economic impact goes viral.

Listen or watch below, and scroll down for the show notes and transcript.

This Week's AI Pulse

Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI.

If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.

Click here to take this week's AI Pulse.

Listen Now

Watch the Video

Timestamps

00:00:00 — Intro

00:03:30 — AI Pulse Survey Results

00:07:08 — Anthropic vs. US Government

00:44:55 — OpenAI Funding, Amazon Partnership, Response to DoD

00:51:19 — Interview with the Head of Claude Code

00:59:38 — Block AI Job Cuts

01:04:41 — AI Doomer Essay from Citrini

01:10:54 — Politics of Data Centers

01:15:15 — Anthropic Fluency Index

01:18:29 — Anthropic Distillation Attacks

01:21:46 — NVIDIA Earnings

01:24:19 — AI Product and Funding Updates

 

This episode is brought to you by AI Academy by SmarterX.

AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. Learn more here.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Mike Kaput: All these comments from the government, despite the bluster, despite the back and forth, despite the threats, they are just admitting they can't live without Claude.

[00:00:09] And frankly I agree with them on that. 'cause you'd have to pry Claude from my cold dead hands if you wanted to take it away.

[00:00:15] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:45] Join us as we accelerate AI literacy for all.

[00:00:52] Welcome to episode 200 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are coming to you [00:01:00] live on, well, we're recording this live on Monday, March 2nd. It is 9:30 AM Eastern time. I will often timestamp the beginning of these; this week it may be more important than others, because our first main topic today is going to be all about Anthropic and their battle with the Department of War.

[00:01:18] Over the use of Claude for military applications and monitoring of US citizens. So it is a wild topic. It has been a fast-moving, fluid topic all throughout the weekend. It's very possible that as we're recording this, some things are gonna be happening. The unique, very unique thing today about episode 200 is this is the first time we are recording one of these with a live audience.

[00:01:43] So we have invited our AI Academy Mastery members to join us for this recording. So normally it's just me and Mike hanging out doing our thing. today we are doing this through a Zoom webinar. So we actually have mastery members joining us. There is a chat I'm seeing out the corner of my [00:02:00] eye going, we are gonna take questions from mastery members at the end.

[00:02:04] So we are grateful to all of our AI Mastery members and everybody who's joining us today from around the world. This is kind of a cool way to celebrate. We were trying to think, what could we do for episode 200 that would be fun and unique, and we came up with the idea of, let's just, like, invite some people to be here while we're doing it.

[00:02:20] So Mike and I are gonna do our usual thing. We just happen to have some friends with us today to listen in, and then we're gonna take some questions from them afterwards. So, yeah. Anything else, Mike, on that end before I run us through the presented-by and then dive into this? No.

[00:02:36] Mike Kaput: Okay. Just business as usual.

[00:02:38] Paul Roetzer: Yeah, business as usual. Alright. So today's episode is brought to us by AI Academy by SmarterX, which is where our mastery members are coming from. AI Academy helps individuals and businesses accelerate their AI literacy and transformation through personalized learning journeys and an AI-powered learning platform.

[00:02:55] There are 13 professional certificate course series available on demand now, with [00:03:00] more being added each month. And as a matter of fact, we just added two more. We added AI for Financial Services, for industries, and AI for Finance, for departments. So we have an AI for Departments collection and an AI for Industries collection.

[00:03:14] And so those two new course series with certificates just went live in the last two weeks. So those are great. Go check them out at Academy.Smarterx.ai, where you can learn all about Academy and the AI Mastery Membership Program. Okay. If you're new to the podcast,

[00:03:30] AI Pulse Survey

[00:03:30] Paul Roetzer: we start each week with a recap of the previous week's AI pulse survey.

[00:03:35] So we ask these questions, you can go to Smarterx.ai/pulse and participate in the survey each week. These are informal polls of our listeners of how they're feeling about topics we talk about on the podcast. So, Mike, we asked last week: Microsoft AI CEO Mustafa Suleyman says most white collar tasks will be fully automated by AI within 12 to 18 months.

[00:03:58] How realistic do you [00:04:00] find that timeline? So it looks like 53% said partially realistic: some tasks will be, but not most. It's interesting. 33% said too aggressive: meaningful automation is three to five years out. And then 9% said very realistic: I'm already seeing it in my work. And I think, Mike, what I said at the time we talked about that was, I think the tech will be able to do that.

[00:04:26] I just don't think human friction and organizational resistance to change will allow it to happen. That was kind of my general take on that. And then the second was: where do you land on AI-generated video using real people's likeness in the video it generates? 58% said the tech is impressive, but using real people without consent crosses a line.

[00:04:47] 22% said it's inevitable and the law needs to catch up. And then 20% said it's mostly a misinformation risk. That's what concerns me. Hmm. I don't see anybody that said it's creative experimentation [00:05:00] and not a big deal. Looks like,

[00:05:01] yeah.

[00:05:03] Alright, so when we wrap up today, we'll give you this week's pulse, but again, you can go to Smarterx.ai/pulse and participate in the surveys.

[00:05:13] Alright. So, um, yeah. What a 72 hours, Mike?

[00:05:18] Mike Kaput: No kidding.

[00:05:19] Paul Roetzer: So when we were preparing for episode 200 on Wednesday, things started going sideways with the US government and Anthropic, and Mike's gonna break down sort of the chain of events over the three days leading into the weekend. And then we'll talk about kind of where we've landed.

[00:05:34] As of Monday. We went into the weekend thinking that main topics one, two, and three were all going to be related to Anthropic versus the US government. And then we decided Sunday to sort of consolidate this. So I will just give you a warning. This is going to be a bit more extensive than the average main topic, I would say, Mike, is that fair to say?

[00:05:58] Mike Kaput: I would guess.

[00:05:59] Paul Roetzer: Okay. [00:06:00] So we're gonna do our best to provide context here. I will say that on Saturday or Sunday, I put a post on LinkedIn that I think is relevant here. And I was basically saying, like, why on the podcast we choose to try and be as objective and unbiased as possible, especially when it comes to things related to politics and religion.

[00:06:23] There are times on the show where we have to wade into those areas. We have to touch on things that can be politically challenging to talk about in a neutral way. and so I would say this is one of those, we're gonna do our best to just sort of present the information, as best we can describe it in a factual way, and then allow people to sort of form their own perspectives and points of view.

[00:06:46] So if you're new to the podcast, that is how we approach this kind of thing, and I just wanted to say that upfront because some of these things are hard to talk about in this neutral way. And I'll do my best, to, to [00:07:00] share, that perspective. So, Mike, I'll, I'll turn it over to you and you can kind of walk us through what happened and then I will do my best to unpack it all.

[00:07:08] Anthropic vs. US Government

[00:07:08] Mike Kaput: Sounds good, Paul. So this past week, the Trump administration blacklisted Anthropic from all federal government work, and that kind of capped off this extraordinarily bitter showdown over the future of AI in warfare. So up until this week, interestingly enough, Anthropic was one of the more deeply embedded AI companies in US defense, and that's thanks to a $200 million contract that was awarded back in July of 2025.

[00:07:34] And as a result of that, Claude was actually the only frontier model approved for the military's classified networks. And it was deployed in some of those through a partnership with Palantir. But as part of its contract, Anthropic had these two non-negotiable safety conditions. One, Claude could not be used for the mass domestic surveillance of Americans, and two, it could not be used to power fully autonomous weapons.

Now the [00:08:00] Pentagon, led by defense secretary, or I guess war secretary at this point, Pete Hegseth, who had recently declared at SpaceX's headquarters that military AI, quote, will not be woke, decided these guardrails were unacceptable. And reportedly a flash point might've come after the US military raid that captured Venezuelan president Nicolas Maduro.

[00:08:20] The Pentagon claimed Anthropic raised concerns about Claude's use in that operation, though Anthropic CEO Dario Amodei flatly denied doing so. But regardless of whether it's a miscommunication or not, there was a play by play that started right around Tuesday and into Wednesday, and that carried through the week and into the weekend.

[00:08:40] That got very hairy very quickly. So I'm gonna kind of walk through very quickly what the timeline was of the major events. Then we'll dive deeper into this. So on Tuesday, apparently Hegseth called Amodei into a tense and, according to the reporting, not warm and fuzzy meeting at the Pentagon and delivered an ultimatum that [00:09:00] Anthropic must allow the military unfettered access to Claude for all lawful purposes, and gave Amodei until 5:01 PM Eastern on Friday to comply. If he refused,

[00:09:11] the Pentagon threatened to invoke the Defense Production Act to force compliance, or they threatened to formally designate the company a supply chain risk. So on Wednesday overnight, the Pentagon apparently sent Anthropic its kind of best and final offer of how this could all work, given Anthropic's concerns about those two red lines that they don't want to cross.

[00:09:32] Anthropic reviewed the contract. They felt the new language was not good enough. It was a bit of a facade, and the concessions were paired with all sorts of escape hatches and legalese that would allow, perhaps in their view, the military to disregard the safety guardrails they found to be so important.

[00:09:48] Now, Thursday, this starts to really explode into the public view. Amodei releases a statement declaring that Anthropic cannot, in good conscience, agree to their request. This causes all sorts of [00:10:00] political backlash. The Pentagon's technology chief, Emile Michael, took to X calling him a liar with a God complex.

[00:10:06] Meanwhile, the tech industry started to mobilize. Hundreds of employees from Google and OpenAI signed an open letter urging their executives to also stand in solidarity with the red lines Anthropic had drawn. And then deadline day Friday hits, and a kind of play by play during the day plays out.

[00:10:24] There were some kind of crazy consequences for what's going on here. So in the morning, behind the scenes, apparently Hegseth's team offered a major concession, agreeing to remove loophole phrases from the contract. In the afternoon, the deal apparently fell apart completely when Anthropic learned the Pentagon still wanted to use AI to analyze bulk data collected from Americans. That crossed that line on mass surveillance.

[00:10:47] Furthermore, Anthropic rejected a proposed compromise to simply keep Claude in the cloud to distance it from what they would call edge-based autonomous weapons. Desperate to prevent a collapse, [00:11:00] top bipartisan Senate defense leaders sent a private letter begging the Pentagon to extend the deadline.

[00:11:05] About an hour before the 5:01 PM deadline, President Trump took to Truth Social calling Anthropic left-wing nut jobs and ordering all federal agencies to immediately cease using Anthropic technology, initiating a six-month phase-out period. 5:01 PM happens. The deadline passes. Amodei has not caved to any of the demands, and in the evening, Hegseth officially designated Anthropic a supply chain risk to national security.

[00:11:30] It is worth noting here: this is an extraordinary blacklisting tool that is historically reserved for foreign adversaries. It bans any defense contractor from doing commercial business with the company. Anthropic immediately vowed to challenge this in court. And Friday night, and we'll talk through this as well, in a big final twist, hours after this happens to Anthropic, Sam Altman announces that OpenAI has officially reached an agreement with the Pentagon to deploy its models on classified [00:12:00] networks.

[00:12:01] And the catch is that the Pentagon actually apparently agreed to OpenAI's terms, which included keeping the deployment strictly in the cloud and enforcing the exact same prohibitions on mass surveillance and autonomous weapons that Anthropic had just faced resistance for defending. So Paul, lots to talk about here.

[00:12:19] Can you maybe walk us through in depth more of what is going on here, what parts of this are worth paying attention to, and importantly kind of what you think might happen next.

[00:12:29] Paul Roetzer: Yeah, as I was saying when we were getting started, this literally is an entire episode. We could do two hours just on this topic.

[00:12:35] There's so much to unpack, and it is not done. I feel like we are probably in the early stages of all of this. A lot happened in three days, but I feel like there's a lot more to happen. To rewind back to even just before everything started going crazy on Wednesday, Time Magazine actually had an article, and I think this is relevant 'cause my assumption is all this was already happening behind the [00:13:00] scenes and this was not accidental that this came out.

[00:13:02] So Time Magazine, we'll put the link in the show notes. As the story says, Anthropic, the wildly successful AI company that has cast itself as the most safety conscious of the top research labs, is dropping the central pledge of its flagship safety policy. The company decided to radically overhaul the responsible scaling policy that we've talked about on the show.

[00:13:23] Many times. That decision included scrapping the promise to not release AI models if Anthropic can't guarantee proper risk mitigations in advance. This is a quote: "We felt that it wouldn't actually help anyone for us to stop training AI models,"

[00:13:42] Anthropic's Chief Science Officer, Jared Kaplan, told Time in an exclusive interview. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments if competitors are blazing ahead." Instead, it commits to matching or surpassing the safety efforts of [00:14:00] competitors, and it promises to delay Anthropic's AI development

[00:14:04] if leaders both consider Anthropic to be the leader in the AI race and think the risks of catastrophe to be significant. So again, just relevant context that, ironically, 24 hours before all this happened, an article comes out saying Anthropic is shifting its safety policies that are the foundation of the company.

[00:14:22] Mm. For whatever that's worth. Okay, so then expanding on some of the items you touched on, Mike. The first is these Anthropic statements. So basically Dario and Anthropic get this word from the government that they basically have like three days to concede to these requests. So Anthropic released a statement on Thursday, and all this has happened publicly.

[00:14:45] So we're like, oh, what are they gonna do? What's gonna happen Friday? So they release the statement on Thursday the 26th, and in that statement, that's attributed to Dario, it says: The Department of War has stated they will only contract with AI companies who agree to, quote, [00:15:00] any lawful use and remove safeguards.

[00:15:03] In the cases mentioned above, they have threatened to remove us from their systems if we maintain these safeguards. They have also threatened to designate us a supply chain risk, a label reserved for US adversaries, never before applied to an American company, and to invoke the Defense Production Act to force the safeguards' removal.

[00:15:21] These latter two threats are inherently contradictory. One labels us a security risk, and the other labels Claude as essential to national security. The statement continued: Regardless, these threats do not change our position. We cannot in good conscience agree to their request. Then, you know, everything kind of goes crazy 24 hours later.

[00:15:40] And then the tweet, because Anthropic is like, we still haven't gotten any official word from the government. All we have is a tweet from Hegseth. So on Friday at 5:14 PM, this is following the Trump Truth Social thing, it says: This week Anthropic delivered a masterclass in arrogance and betrayal, as well as the textbook [00:16:00] case of how not to do business with the United States government or the Pentagon.

[00:16:03] Our position has never wavered and will never waver. The Department of War must have full unrestricted access to Anthropic's models for every lawful, in all caps, purpose in defense of the Republic. And then it goes on and on. And it says: In conjunction with the president's directive for the federal government to cease all use of Anthropic's technology,

[00:16:23] I'm directing the Department of War to designate Anthropic a supply chain risk to national security, effective immediately. No contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

[00:16:48] Dude, I could do an hour on that paragraph alone.

[00:16:51] Mike Kaput: No kidding.

[00:16:52] Paul Roetzer: I'm gonna come back around to the complexities of that paragraph. Each sentence on its own [00:17:00] contradicts itself in like five different ways, but we'll come back to that. So then Anthropic responds with another official statement: we held to our exceptions for two reasons.

[00:17:10] So this is now that evening, and I'm at a hockey game with my buddies and I'm like, gee, man. Okay. We held to our exceptions for two reasons. First, we do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger American war fighters and civilians.

[00:17:32] Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights. Designating Anthropic as a supply chain risk would be an unprecedented action, one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments as the first frontier AI company to deploy models in the US government's classified networks, and, as of this moment, still the only one capable of doing this.

[00:17:58] By the way, Anthropic has [00:18:00] supported American war fighters since June 2024 and has every intention of continuing to do so. We believe this designation would be both legally unsound and set a dangerous precedent for any American company that negotiates with the government. Dario then did an interview Sunday morning with CBS where he kept using the terms retaliatory and punitive, which obviously is legal advice

[00:18:21] he's being given, that you have to set this up as, that's what these are. In the statement, they continued: No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.

[00:18:37] Okay, so I'm gonna stop there for a second. I'm gonna go into an Atlantic article, which I found to be really solid about the real sticking point here, but anything I'm missing there, Mike, or that you wanna like double click on.

[00:18:48] Mike Kaput: No, I think that sums up really well, kind of where we're at as of right now in the timeline and what has happened so far before we kind of get into, some of the other lab [00:19:00] responses and government responses.

[00:19:02] Paul Roetzer: Okay. Alright. So then The Atlantic has this really good article. I think this was from Sunday morning, if I remember correctly. So they have inside sources, obviously, when you're reading this. They said, according to a source familiar with the negotiations, on Friday morning Anthropic received word that Hegseth's team would make a major concession.

[00:19:20] The Pentagon had kept trying to leave itself little escape hatches in the agreements that it proposed to Anthropic. It would pledge not to use Anthropic's AI for mass domestic surveillance or for fully autonomous killing machines, but then qualify those pledges with loophole phrases, that's their word, like "as appropriate," suggesting that the terms were subject to change based on the administration's interpretation of a given situation.

[00:19:47] On Friday afternoon, Anthropic learned the Pentagon still wanted to use the company's AI. This is real key. This is the sticking point. They've been doing this since 2022, by the way. This is not like, hey, we might do this. They're doing this. Okay. So Anthropic learned that the Pentagon still wanted to use the company's AI to analyze bulk data collected from Americans that could include information such as the questions you ask your favorite chatbot,

[00:20:13] Your Google search history, your GPS tracked movements and your credit card transactions, all of which could be cross-referenced with other details about your life. Anthropic's leadership told Hegseth's team that was a bridge too far and the deal fell apart. So the reason they want that kind of data is let's say there's like a protest and they wanna know which Americans were there.

[00:20:32] They would use Claude to analyze all this data and basically match you and figure out if you were there or not. Mm. That's the kind of thing they're talking about. It went on to say: my source, whom I am granting anonymity because they're not authorized to talk about the negotiations, also shed further light on the disagreement between Anthropic and the Pentagon over autonomous weapons, machines that can select and engage targets without a human making

[00:20:53] the final call. This is real key. The US military has been developing these systems [00:21:00] for years and has budgeted $13.4 billion for them in fiscal year 2026 alone. They run the gamut from individual drones to whole swarms that can be used in the air and the sea. Now this is really important, and this gets back to them dropping the thing from the responsible scaling policy.

[00:21:20] This is The Atlantic: Anthropic has not argued that such weapons should not exist. To the contrary, the company has offered to work directly with the Pentagon to improve their reliability. But for now, Anthropic's leaders believe that their AI hasn't reached that threshold. They worry that the models could lead the machines to fire indiscriminately or inaccurately, or otherwise endanger civilians or even American troops themselves.

[00:21:47] At one point during the negotiation, it was suggested that the impasse over autonomous weapons could be resolved if the Pentagon would simply promise to keep the company's AI in the cloud and out of the weapons themselves. This gets into like some [00:22:00] more technical detail. Mike, you touched on this a little bit.

[00:22:02] Yeah, I'm not gonna go down that path right now, but just understand that's a really important distinction that maybe we'll touch on at a later date. And then the article continues: Anthropic's leaders might have hoped that other AI companies would hold a similar line related to the surveillance and autonomous weapons.

[00:22:17] Earlier in the week, they had reason to believe that OpenAI might. CEO Sam Altman had said that, like Anthropic, OpenAI would also refuse to allow its models to be used in autonomous weapon systems. Now, as I said at the start, we try our best to present all sides of debates. So I'm now going to present you something, and I'm going to connect a couple dots here, that I think is extremely important to understand.

[00:22:43] So this counterpoint is gonna come from Palmer Luckey, who is the founder of Oculus, who sold that to Facebook and made his, you know, hundreds of millions or billions doing so. He is also the co-founder of Anduril Industries. Anduril Industries is an American defense technology company [00:23:00] specializing in the development of, wait for it, advanced autonomous systems.

[00:23:05] The company, Anduril, raised $2.5 billion at a $30.5 billion valuation, led by, and guess who, Mike: Founders Fund and Peter Thiel.

[00:23:16] Mike Kaput: Yep.

[00:23:16] Paul Roetzer: So this is the name, this is the person that you want to keep track of: Founders Fund and Peter Thiel. Peter Thiel is part of the PayPal Mafia. That is where Elon Musk made his first couple hundred million.

[00:23:28] It's where a lot of these guys, David Sacks, like all these guys, they're all connected to PayPal, and then Thiel has the Founders Fund. So Thiel is, I'll get to it in a second, Thiel is a very important character in all this. So here is Palmer's tweet. He said: Do you believe in democracy? Should our military be regulated by our elected leaders or corporate executives?

[00:23:50] Seemingly innocuous terms from the latter, like "you cannot target innocent civilians," are actually moral minefields that leverage differences of cultural [00:24:00] tradition into massive control. At the end of the day, you have to believe that the American experiment is still ongoing,

[00:24:08] that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corporations and their shadow advisors. "I still believe," Palmer says. And that is why, quote, "bro, just agree the AI won't be involved in autonomous weapons and mass surveillance,

[00:24:32] why can't you agree, it's so simple, please, bro" is an untenable position that the United States cannot possibly accept. So that's Palmer's position, and Palmer's very outspoken. I actually really like listening to Palmer. I think sometimes his perspectives are very different than mine, but as I said in my post on LinkedIn, that's okay.

[00:24:54] Like, we should listen. Sometimes there's things in there where it's like, actually, [00:25:00] in theory, this is a really interesting perspective. And I'm gonna kind of divert here for a second, Mike. This is where I have to be careful. You can say that Anthropic is making a moral, ethical stand here.

[00:25:23] But the precedent it sets is that one company, and in essence one CEO, gets to tell the elected officials of our democracy what they can and can't do, what is and is not legal.


[00:25:38] Paul Roetzer: And that, I think, actually gets to the heart of what is going on here: there's this slippery slope. It's like, okay, if we make these two concessions, well, what else does this corporate bro from Silicon Valley get to tell us,

[00:25:52] the elected officials of the government, about what we can and cannot do? So this is the counterpoint. I've looked [00:26:00] at lots of things, read lots of things in the last like 72 hours, and I thought he did the best job of just laying out this, like, listen, you can take the moral high ground, but at the end of the day, we are electing people to make these kinds of decisions.

[00:26:12] Now, you might not like what this current administration does with that power, but our democracy depends on these people. If we don't like what they do with this, then we elect them out of office, is basically what he's saying. So I'll stop there for a second, Mike, and see if you have any thoughts on this before

[00:26:29] I start getting down the slippery slope here.

[00:26:31] Mike Kaput: Yeah, I mean, I feel like every angle is a slippery slope here, but yeah, it is interesting. I like all the context you provided, because I think it's very easy to get into a really quick, like, gut-reaction, black-and-white thing about this, right?

[00:26:49] Because obviously something like autonomous weapons sounds super scary, but like you mentioned, it is worth noting in that Atlantic article, this stuff is not exactly new. It's already been [00:27:00] done. That doesn't make it good or bad. Not trying to value-judge that, right? But it's not necessarily like, oh, we are coming up with this awful, sinister, AI-powered thing.

[00:27:10] Now, maybe mass surveillance sounds a little different, but I would say it's helpful, I think, to have the nuance on this before we get into where it goes and what it means. And I think it is very interesting to hear Palmer Luckey, who is the Peter Thiel acolyte, raging against billionaires

[00:27:30] having an impact on government. That's a unique position, I'd say. So it's gonna be really interesting to see people speaking out of many sides of their mouths, and how this makes for some strange bedfellows as well.

[00:27:44] Paul Roetzer: So then one of the things we often try and do on this show is like, okay, what are the other perspectives?

[00:27:47] So Ilya Sutskever, who we've talked about many times, who does not tweet often.

[00:27:51] Mike Kaput: Yeah.

[00:27:51] Paul Roetzer: He showed up and he said it's extremely good that Anthropic has not backed down, and it's significant that OpenAI has taken a similar stance. This was before [00:28:00] Sam jumped in and took the deal. He said, in the future there will be much more challenging situations of this nature, and it'll be crucial for the relevant leaders to rise to the occasion, for fierce competitors to put their differences aside.

[00:28:13] Good to see that happen today. So again, that was before OpenAI stepped in and took the contract,

[00:28:19] Mike Kaput: right? Right.

[00:28:19] Paul Roetzer: Now Google, I don't know what's going on there. We have yet to see, and again, if someone sees something while we're on this, flag it in the chat for us, I have not seen a tweet from Sundar or Demis about any of this.

[00:28:37] They've been radio silent since last Wednesday. The only major leader at Google that I saw tweet anything was Jeff Dean, and this was on the 25th. He's the chief scientist of Google DeepMind and Google Research. So when it came out that the government was basically presenting this ultimatum on Wednesday, Jeff tweeted: mass surveillance violates the Fourth Amendment and has a chilling effect on [00:29:00] freedom of expression.

[00:29:00] Surveillance systems are prone to misuse for political or discriminatory purposes. Somebody then said, well, what about autonomous weapons? He replied: in 2018, I signed this letter, and my position hasn't changed. We'll put the link in the show notes, but the Future of Life Institute published a Pledge on Lethal Autonomous Weapons in June 2018.

[00:29:18] In that pledge, it says: AI is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI. In this light, we, the undersigned, agree that the decision to take a human life should never be delegated to a machine.

[00:29:41] So, June 2018, some 5,200 AI leaders signed that. Okay, fun side note. So keep in mind, the government wants to get rid of Claude. As of right now, you can't work with them; they're on the supply chain risk list. Okay, so who steps in? [00:30:00] There's five labs that could possibly fill the gap, unless they're gonna use DeepSeek from China, which isn't gonna happen.

[00:30:07] But ironically, DeepSeek is treated more favorably now than our own American-based Anthropic. So you have OpenAI, you have Anthropic, Google, xAI. Who am I missing, Mike?

[00:30:20] Mike Kaput: I guess, do we even include Meta in here these

[00:30:23] Paul Roetzer: days? Meta, there we go. Meta. Yeah. Okay. Those are the five companies that are building AI models

[00:30:28] powerful enough, in theory, to do what the government wants.

[00:30:30] Mike Kaput: Yeah.

[00:30:31] Paul Roetzer: Well, who's obviously gonna step in? Elon Musk. That was a given. Ironically, I don't know who seeded this article, but nothing is a coincidence at this point. The Wall Street Journal comes out with an article, I think it was on Thursday.

[00:30:46] It says officials at multiple federal agencies have raised concerns about the safety and reliability of Elon Musk's xAI artificial intelligence tools in recent months. Grok 4 does not meet the safety and alignment expectations [00:31:00] required for general federal use, within GSA, the General Services Administration, and an experimental federal AI platform, said a January 15th executive summary.

[00:31:12] The warnings preceded the Pentagon's decision this week to put xAI at the center of some of the nation's most sensitive and secretive operations by agreeing to allow its chatbot Grok to be used in classified settings, which is what Claude is currently used for. So Elon's trying to slide in and get Grok in there.

[00:31:29] Anthropic was the only developer approved for classified use before the deal between xAI and the military. But ironically, despite all this, it says in recent weeks GSA officials were told to put xAI's logo on a tool called USAi, which is essentially a sandbox for federal employees to experiment with different AI models.

[00:31:49] So while the logo is there, the tool actually isn't available. So one possibility, and the most obvious one, was Elon Musk and xAI. But apparently someone at the government felt like [00:32:00] leaking this to the Wall Street Journal, saying that tool is in no way safe for our systems, even though we're being told to put it into them.

[00:32:06] Okay. Then one other piece of context before we get into OpenAI sliding in and taking the contract. I think I mentioned Dean Ball last week on the episode, maybe a podcast episode with him, so it's ironic timing. Dean Ball is a former senior policy advisor on AI for the Trump administration and lead drafter of the administration's AI Action Plan, which I remember at the time saying, like, wow, someone who knows their stuff actually wrote this.

[00:32:32] The Action Plan is pretty good. Yeah. So he provided context on the significance of this moment in a series of X posts on Friday night. So right after all this happened, at 5:25 Friday night: the United States federal government is now, by an extremely wide margin, the most aggressive regulator of AI in the world.

[00:32:49] Congratulations, everyone. Then at 6:08: think about the power Hegseth is asserting here. He's claiming that the Department of Defense, or War, can force all contractors to stop [00:33:00] doing business of any kind with arbitrary companies. In other words, every operating system vendor, every manufacturer of hardware, every hyperscaler, every type of firm the Department of Defense contracts with.

[00:33:10] All their services and products can be denied to any economic actor at the will of the Secretary of War. This is obviously a psychotic power grab. It is almost surely illegal, but the message it sends is that the United States government is a completely unreliable partner for any kind of business. The damage done to our business environment is profound.

[00:33:30] No amount of deregulatory vibes sent by this administration matters compared to this arson. Again, this is a former Trump advisor, so, you know, all perspectives. The United States government just essentially announced its intention to impose Iran-level, and this is before we bombed Iran, by the way, Iran-level sanctions,

[00:33:48] or China-level entity listing, on an American company. This is, by a profoundly wide margin, the most damaging policy I have ever seen the US government try to take, and it probably will not [00:34:00] succeed. Okay, real quick, I'll stop there before we get into Sam, and then we'll move on. Like I said, this is a big topic.

[00:34:06] There's lots going on here. Any other quick notes you might have?

[00:34:09] Mike Kaput: Yeah, I mean, this is probably a whole topic in and of itself, but I feel like as we're doing the play-by-play and we're asking, oh, hey, how is this affecting the AI industry? I think Dean hits the nail on the head: this is about far more than AI.

[00:34:23] This is about the US government essentially imposing a command economy on AI tools and technology. And it's pretty wild to see that kind of move used so, carelessly isn't quite the right word, but just so suddenly. It seems like it came outta nowhere, especially, my gosh, we've only talked about this administration in terms of the deregulation piece, and then suddenly it's an about-face like this, and

[00:34:49] Paul Roetzer: they regulate their most innovative company.

[00:34:52] Mike Kaput: A hundred percent. Yeah.

[00:34:54] Paul Roetzer: Okay, so then Sam, 9:56 PM Friday night, slides in [00:35:00] with a tweet and says, tonight we reached an agreement with the Department of War to deploy our models in their classified network. I'm not gonna read this whole thing. Basically, they step in, they do the thing. They claim the language is different and gives them comfort to do this, and he actually claims that by them doing this, it might actually help Anthropic, and that they have advocated with the Department of War to not impose these terms on Anthropic.

[00:35:23] And so he basically says, we think by doing this, it's for the best of humanity and the best of the industry. That doesn't go well for them. Obviously, there's almost immediate backlash for doing this. Their own employees are like, what are we doing? There's the letter you mentioned, Mike.

[00:35:39] People are signing that, like, don't divide us, we're in this with Anthropic. So Sam feels the need to then do an ask-me-anything session Saturday night on X, and we'll put a link to this. There's things he touched on, like why the rushed deal, like, why did you have to sign this so fast? Concerns about the precedent set with Anthropic, the Department of War [00:36:00] blacklisting them.

[00:36:00] He gets into that. He talks about red lines and lawful use, talks about the designation of supply chain risk, and how he will continue to advocate that they should not do this to Anthropic, even though they're competitors. And then he summarizes it with, like, three key takeaways. He said, one, there's more open debate than I thought there would be,

[00:36:17] at least in this part of Twitter, about whether we should prefer a democratically elected government or unelected private companies to have more power. That goes exactly to the point Palmer Luckey was making. He said, I guess this is something people disagree on, but this seems like an important area to discuss.

[00:36:32] So he's saying he is in favor of the government. They are the elected people, and they're the ones that should decide how to use the technology. Two, I think the question behind a lot of the questions, but one I haven't seen quite articulated, is what happens if the government tries to nationalize OpenAI or other AI efforts.

[00:36:48] He said, I obviously don't know. I have thought about it, of course. It has seemed to me for a long time that it might be better if building AGI were a government project. So he's actually, in a passive way, kind of saying, [00:37:00] like, it's really not a bad idea that the government should probably be doing this, but it doesn't seem super likely on the current trajectory.

[00:37:05] That said, I do think a close partnership between governments and the companies building this technology is super important. And third, he said, people take their safety, in the national security sense, more for granted than I realized, which I think is a good thing on balance, but I don't think it shows enough respect for the tremendous work it takes for that to happen.

[00:37:24] This is a side note, again. You may not think this is as significant as I do, but in 2025, and we've talked about this on the show, OpenAI President Greg Brockman and his wife Anna became mega-donors to the Trump administration. In fact, the largest single donors, giving $50 million to Leading the Future, a bipartisan super PAC focused on combating state-level AI regulation, and $25 million to MAGA Inc.,

[00:37:48] a Trump super PAC. Take that for what it's worth. So, I will say it's very surprising to me that OpenAI and the government could arrive at terms on a contract so quickly. [00:38:00] This is the government. Nothing in the government contractually gets done in 12 hours. So I don't even understand how this is even possible.

[00:38:09] But I wanna wrap up here, Mike, so we have time for all this other stuff. What I tried to answer is, what happens next? So here's a couple of thoughts. One, the open letter from OpenAI, Google, and other employees supporting Anthropic. So we talked about this, this not divided.org. We'll put the link in the show notes.

[00:38:26] As of this morning, 645 Google employees and only 94 OpenAI employees, which I thought was interesting, have signed this letter, basically saying they support that Anthropic did not make these concessions. I do think that OpenAI is in a race to save their talent. I think they're gonna lose staff quickly over this.

[00:38:45] I think this is the greatest talent-recruiting coup that Anthropic could have ever pulled off. If you care about safety, they already were the place to be. They're probably getting flooded with resumes from top researchers, like, I'm [00:39:00] out, I'm coming over to Anthropic, let's go. Another note: if the Democrats take back the House and the Senate in the midterms this year, this is gonna get insanely messy.

[00:39:12] Right? So something to keep an eye on. The other thing that I think is hilarious, and this goes back to the statement made by Hegseth and how self-contradictory it was: they're so much of a supply chain risk that they're mandating everyone else stop working with them, but we're still gonna use them in the bombing of Iran over the weekend.

[00:39:33] Yeah, Claude was fundamental to that, and we're gonna keep 'em around for six months. So they're so dangerous that we're actually gonna allow them to stay in our systems for the next six months.

[00:39:46] Mike Kaput: Yeah,

[00:39:46] Paul Roetzer: The six-month wind-down, I think, is laughable. Like, I don't think this ever happens. I think they find a deal.

[00:39:52] What this administration does is take extreme positions as negotiating [00:40:00] ploys. You can go read the book that they literally wrote on how to do this. They take the most extreme position, they do the most extreme thing, say the most extreme thing, all to get you to meet somewhere in the middle, or closer to their end game.

[00:40:13] So I don't think there's any way that this actually happens. I don't think legally they can designate them this way, and I don't think they will. I think they will find a deal. I think they're probably still talking through the weekend, and somehow they find a way to make this work, 'cause the government needs Anthropic.

[00:40:30] Yeah. That is very obvious. Anthropic wins hearts and minds. They jumped to number one in the App Store over the weekend from, I don't know, like the two hundreds. So if you go look right now, Claude is number one in the App Store, ahead of ChatGPT. If the Hegseth post is to be taken literally, it would be catastrophic for Anthropic.

[00:40:49] They would literally bankrupt them. And a massive mess for the whole industry. Go back to what I just said: Anthropic raised $30 billion in February, co-led by Peter [00:41:00] Thiel's Founders Fund. So the dude who put JD Vance in office as the vice president, the guy who put Palmer Luckey where he is, like the puppeteer of all of this, co-led every round Anthropic has raised.

[00:41:16] He was the first Silicon Valley guy to support Trump, back in 2016. So Anthropic is positioned as, what do they call them, the left-wing nut jobs or whatever it is.

[00:41:26] Mike Kaput: Oh, right, right.

[00:41:27] Paul Roetzer: Thiel is the guy that's funding them. Yeah. Like, none of this makes any sense. So when you zoom out, there's just no way this is how this plays out. Because the other one: who is the chairman and co-founder of Palantir, which is how Claude is being used, right?

[00:41:45] Through Palantir. Peter Thiel is the chairman and co-founder of Palantir, right?

[00:41:50] Mike Kaput: Like,

[00:41:50] Paul Roetzer: So Google owns 14% of Anthropic. Amazon owns 15 to 21% of Anthropic. Microsoft owns some single-digit percent. There's just no [00:42:00] way that this is how this plays out. So I'm gonna stop there, Mike. If it does proceed, and if they do actually, you know, officially designate these things beyond just a tweet, the legal case on this will run for years.

[00:42:15] I think it's one of the craziest things I've ever seen, not just in AI, like one of the crazier news stories I've ever followed, and it's only like three days old, so,

[00:42:24] Mike Kaput: no kidding.

[00:42:25] Paul Roetzer: I wanna make sure we have time for all the other topics today. But this is why I said we were gonna just do this as three main topics.

[00:42:31] Yeah. Like, it's so much to unpack, but hopefully that gives some perspective. But Peter Thiel, Palantir, Founders Fund: follow the money. Like anything in business and government, follow the money. I don't see any way that Thiel isn't talking to Vance, and Vance isn't talking to Trump and Hegseth, and all this stuff just, somehow you make it go away.

[00:42:56] Mike Kaput: Yeah,

[00:42:56] Paul Roetzer: It's convenient that there was a bombing of another country over [00:43:00] the weekend, and you kind of forget about some of these things, or they steal the headlines by the time you get to Monday. I don't know, the whole thing's wild.

[00:43:08] Mike Kaput: Yeah. The only other thing I'll say is, all these comments from the government, despite the bluster, despite the back and forth, despite the threats, they are just admitting they can't live without Claude.

[00:43:19] And frankly I agree with them on that. 'cause you'd have to pry Claude from my cold dead hands if you wanted to take it away. So that alone, that fact may confirm everything you just said where like, good luck when you try to take this away from people. Right.

[00:43:34] Paul Roetzer: And think about the enterprises. So like, yeah, this is the greatest marketing thing they've ever done.

[00:43:38] Someone in the office this morning, when you came in, was like, oh, they didn't even need the Super Bowl ad. So if you're a massive enterprise in a highly regulated industry, and you now know that the only company the government trusted with classified settings for two years was Anthropic.

[00:43:53] Mike Kaput: Right? Right.

[00:43:54] Paul Roetzer: Who are you gonna trust? It sure as hell is not xAI, who can't even get Grok approved to do this [00:44:00] stuff. And OpenAI apparently wasn't already there. Like, how were they not already there?

[00:44:04] Mike Kaput: Right,

[00:44:04] Paul Roetzer: Right. And I have to admit, I have no idea where Google comes out in all of this. I just assumed Google already had Gemini approved for classified use.

[00:44:13] I don't know. Like, I'm really... that's the most bizarre part of this. Like, I'm very anxious to see Google's official positioning on all this, and the silence is deafening, I would say, from Google's perspective. It's very weird.

[00:44:26] Mike Kaput: Didn't Google already have some issues a couple years ago where employees were, like, revolting against defense contracts and stuff?

[00:44:32] I forget all the details, but I think we talked about it. So they might be really gun-shy on that, I guess.

[00:44:36] Paul Roetzer: Yeah, and Eric Schmidt, the former CEO and chairman, is very aggressively involved in the building out of military capabilities with AI, so,

[00:44:44] Mike Kaput: yeah.

[00:44:44] Paul Roetzer: Yeah, it's a, it's a tricky topic.

[00:44:46] Mike Kaput: Well, I suspect that this will not be the last time we talk about it.

[00:44:49] We'll probably have a bunch more to discuss on next week's episode as well.

[00:44:53] Paul Roetzer: Seriously. All right.

[00:44:55] OpenAI Funding, Amazon Partnership, Response to DoD

[00:44:55] Mike Kaput: Alright, so let's get into some other main topics this week. Very [00:45:00] closely related, though: we're kind of talking about some other updates related to OpenAI. Obviously they're highly involved in the Anthropic story with their decision to accept a deal with the Pentagon.

[00:45:10] But they also had a pretty big week in some other ways. They closed a $110 billion funding round this week, the largest private financing in history: $50 billion from Amazon, $30 billion from Nvidia, and $30 billion from SoftBank. The round values OpenAI at an $840 billion post-money valuation and remains open to additional investors.

[00:45:34] So this Amazon investment is really interesting; it's kind of the centerpiece. Of the $50 billion they're putting in, $15 billion is upfront and $35 billion is contingent on OpenAI meeting certain milestones. The two companies expanded an existing cloud agreement by a hundred billion dollars over eight years, and AWS will become the exclusive third-party cloud distribution provider for something called OpenAI Frontier.

[00:45:58] We talked about this on a [00:46:00] previous episode. This is the enterprise agent platform that OpenAI launched in early February. It basically helps enterprises deploy AI agents as configurable AI coworkers. On top of all this, OpenAI also committed to consuming two gigawatts of capacity on Amazon's custom training chips.

[00:46:20] And one final note about that OpenAI Frontier product: the enterprise push is extending further, because they also launched Frontier Alliances, a program where they're partnering with McKinsey, BCG, Accenture, and Capgemini to deploy this platform at scale. So Paul, a busy and controversial week for OpenAI.

[00:46:39] Lots to unpack here. That largest private financing round in history, the deal with Amazon, and especially pushing Frontier into enterprises through these partnerships also merits, you know, some consideration here.

[00:46:53] Paul Roetzer: Yeah, I mean, talk about a wild day for Sam Altman. He tweets in the morning, we raised $110 billion, which, if I'm not mistaken, the previous largest private round [00:47:00] was what, like their own 30 billion round, or 40 billion or something?

[00:47:03] I think

[00:47:03] Mike Kaput: it was. Yeah. Yeah.

[00:47:04] Paul Roetzer: So, like two and a half times the largest ever. And then by 10 o'clock that night, you're tweeting, we just stepped in and took this contract from the government.

[00:47:11] Mike Kaput: Yep.

[00:47:12] Paul Roetzer: And just for context, the largest IPO ever was like $25 billion. So I mean, this is an unprecedented amount of money.

[00:47:22] In the Scaling AI for Everyone blog post where they announced this funding, a couple of interesting notes. They said there are 9 million paying business users for ChatGPT for Work right now. I think that's the first time I've seen that number in a while. They've moved from 800 million weekly active users to 900 million on ChatGPT, with 50 million consumer subscribers.

[00:47:45] So that's where they're at now; the 50 million, I think, is the paid number within there. And then they say the valuation for this round increases the value of the OpenAI Foundation's stake, that's the nonprofit, to over $180 billion, further strengthening what is already one of the [00:48:00] most well-resourced nonprofits in history.

[00:48:02] Wow. And then the Frontier stuff I thought was interesting, because we've talked a lot about how one of the big challenges right now isn't the technology itself and what it's capable of within the enterprise; it's enterprise adoption and change management. And OpenAI just straight up said, like, this is the problem.

[00:48:19] So there were some quotes in TechCrunch from the AI Impact Summit: one of the interesting things, and some of the inspiration for the work we've been doing lately around OpenAI Frontier, is we have not really seen enterprise AI penetrate enterprise business process.

[00:48:35] Mike Kaput: Hmm.

[00:48:35] Paul Roetzer: So, in essence they're trying to think about like, how do we get this into the enterprises, which leads to this Alliance partners thing you talked about, Mike.

[00:48:44] Yep. So they said the limiting factor for seeing value from AI in enterprises isn't model intelligence; it's how agents are built and run in their organizations. Real impact with AI also requires leadership alignment, workflow redesign, integration across systems and data, as well as the kind of change [00:49:00] management that drives adoption.

[00:49:02] So that's where they announced these four components, and they basically say they're gonna work with OpenAI's forward-deployed engineers, or FDEs, which we talked about on a recent episode, Mike, about how they're staffing up to place these people. In essence, imagine taking an engineer who can actually build stuff, custom agents; they go work in-office with these major brands and basically find ways to automate work.

[00:49:24] And so what they're now saying is they're gonna do this in partnership with BCG, McKinsey, Accenture, and Capgemini, and they highlighted the capabilities of each of these in the post. They said McKinsey and BCG each bring deep experience to help leaders decide how to start redesigning their operating model, embed AI, and drive adoption, while Accenture and Capgemini both advise on strategy and then help wire Frontier into the systems and data enterprises actually run on, securely and reliably.

[00:49:53] So that's the premise of this alliance: through these companies that already have the relationships with all the [00:50:00] enterprises and are trusted by them, let's bring them in, match 'em with our forward-deployed engineers, and let's rapidly push adoption in the enterprise.

[00:50:08] So OpenAI needs adoption in the enterprise. It's hard. They've accepted that, and now they're trying to put these pieces together to enable it.

[00:50:17] Mike Kaput: Yeah. Two quick final notes there. Fun fact: the forward-deployed engineer role actually originated with Palantir. They embedded people at the military to deploy their technology.

[00:50:30] They've been doing it for a decade or so. But also, it just kind of struck me, you know, we will talk about this with a number of other topics coming up, but we talk on the podcast about this idea that the total addressable market these companies are going after is not software. It's not SaaS. It's employment, it's salaries, right?

[00:50:50] This is the start of that. This is how agents get into the enterprises as coworkers. This is not to be alarmist, but that is how this [00:51:00] happens.

[00:51:00] Paul Roetzer: You can get alarmist if you want, because the next two topics are gonna say it straight up.

[00:51:02] Mike Kaput: Yeah, I was gonna say, save it for that. But I don't even mention that in a super negative way outta the gate.

[00:51:08] It's more like you are seeing the actual steps being put into place for these things to become your coworkers at the enterprise level.

[00:51:16] Paul Roetzer: Yep. And it's happening faster than most people wanna admit.

[00:51:19] Interview with the Head of Claude Code

[00:51:19] Mike Kaput: Yes, it is. All right. So let's kind of get into some of that, because our third big topic this week is about an interview with the creator of Claude Code.

[00:51:28] So Boris Cherney, who leads Claude Code at Anthropic, sat down for an interview on Lenny's Podcast this week, and he made this really interesting claim, basically saying coding is effectively solved. He said he has not edited a single line of code by hand since November 2025; every line has been written entirely by AI.

[00:51:48] Interestingly, Cherney built Claude Code as a side project at Anthropic in September 2024, and within five days of releasing it internally, half of Anthropic's engineering team was [00:52:00] using it. The product is now generating a billion dollars in annual run-rate revenue, and 4% of all public GitHub commits are authored by Claude Code.

[00:52:09] Cherney predicts that we'll hit 20% by the end of 2026. As a result of using Claude Code, Cherney said on the podcast, engineering output at Anthropic has increased 200% per engineer. And there's also a real shift at the organizational level, where on his team everyone, product managers, designers, finance people, everyone codes.

[00:52:31] And he predicts that by the end of the year, everyone is going to be a product manager, everyone will code, and the title of software engineer will actually start to go away. He actually said it's going to be painful for a lot of people. So Paul, I'll let you kind of run with your takeaways on this episode.

[00:52:46] So I know you had a ton of them. I'm just curious, though, obviously neither of us are software engineers, but, you know, I'm wondering: do you buy that claim that coding is basically solved? I know there's some bigger implications there for knowledge work at large.

[00:52:58] Paul Roetzer: Yeah, that's why I wanted to highlight [00:53:00] this one as a main topic.

[00:53:02] I tweeted after I listened to this episode that, like, some Lenny's Podcast episodes you listen to twice, and this is one of them. And I have listened to it twice now. I think it was the first time I heard Boris talk; I'm pretty sure this is the first interview of his that I've listened to. But I've been following him closely for probably six months on X, and he's pretty active, so you learn a lot from him.

[00:53:28] But to hear his story, kind of how he created Claude Code almost as, like, a hack, he was just messing around, I thought it was fascinating. But I think the real key, and the reason we wanted to highlight this, and I would recommend people listen to it, is: replace code with whatever you do.

[00:53:43] When you're listening to this podcast, let's say you do marketing or consulting or whatever it is, just every time he says code, replace it with the things you do, the tasks that you do for your living. Because that's what he's implying: whatever just happened with coding is now gonna come to the rest of knowledge work.

[00:53:59] And he says [00:54:00] that basically.

[00:54:01] Paul Roetzer: So first, I think it's interesting, when this interview dropped, and relevant to everything else we're doing, he said the thing that drew me to Anthropic was the mission. It's all about safety. And when you talk to people at Anthropic, everyone you find in the hallway, that's all they wanna talk about: safety.

[00:54:20] So one of the key things he talks about is the way that Anthropic approaches training these models and what they're building. He said, specifically related to the building of Claude Code: we wanted to ship some kind of coding product at Anthropic for a long time. We were building the models in this way that kind of fit our mental model of the way that we build safe AGI, which is interesting, 'cause usually people at Anthropic don't use the term AGI.

[00:54:43] Where the model starts to be really good at coding, then it gets really good at tool use, then it gets really good at computer use, and that's the trajectory. I'm gonna repeat this again, because if you take nothing else away from this episode, this is the thing you have to understand.

[00:54:57] Coding is, it's good at writing code. Tool [00:55:00] use means it doesn't just rely on the language model itself to do the outputs, to write your briefs, to come up with creative ideas for your campaigns, things like that. Tool use is access to other things, like the internet, like search: the ability to go use other tools to improve the language model.

[00:55:17] Yeah. And then computer use is seeing everything on your screen digitally and then being able to act in a digital way. That's computer use: agents that can do things on the screen just like you and I would. So that was the real key. He talks about innovation and how there's no roadmap for innovation. You have to give people space.

[00:55:34] And even if like 80% of the ideas are bad, that's okay. Like, you never know when the good thing's gonna come. He didn't think Claude Code was gonna work. He didn't really even think it was that big of a thing when he first did it. You touched on how much of his code is written by the AI: a hundred percent.

[00:55:46] He said he hadn't written a line of code since like November. But I thought there was an interesting note where he said, like, in February it was like 20%, then by May it was like 30%, and then by November it was a hundred percent. So this is not like a multi-year thing. [00:56:00] This was like within eight months it went from 20% of his work to a hundred percent of his work.

[00:56:05] And, but they're still hiring more people. But these people are like 4x more productive than they used to be. And then when it got into what's next for Claude, he did say Claude is starting to come up with ideas, which I thought was really interesting. 'Cause if we think about OpenAI's levels of intelligence: chatbots are level one, reasoning models level two, agents level three, innovators level four. Innovators come up with ideas.

[00:56:26] And I'm increasingly seeing people within the labs talking about the fact that. These models are functioning as innovation partners. They're actually starting to come up with and solve problems. And then he gets into, the idea that this is coming to all knowledge work and that that's where they're going.

[00:56:42] Couple other quick notes I thought were interesting. He talks about token budgets. So like when a coder is starting to work at a lab, it's like, what's my token budget? Meaning how much access to intelligence do I have? How much can I spend using the AI myself to do my job? And I think that might become true for knowledge workers, like a [00:57:00] marketer, a sales professional.

[00:57:00] It's like, hey, okay, my salary is 150,000 a year. What's my AI budget? Like, how much AI can I use to help me? So if you're telling me I have to keep my headcount flat, how many AI agents do I get, and what's my budget on those AI agents? That may sound weird. That is already happening in coding.

[00:57:19] And I could see, in other industries by the end of '26, that becoming a regular conversation in negotiations. It's like, what is my agent budget? So I would definitely go listen to this podcast. Again, it can be kind of technical from a coding perspective, but pretend like he's just talking about your profession and replace coding with whatever you do.

[00:57:39] I think it starts to give people a really good understanding of where this is all going, and his analogy of the printing press I thought was really well stated. It might be the best way to look at this. So, good interview. Lenny does an amazing job. It's a great podcast. I would definitely go check out that episode.

[00:57:57] Mike Kaput: All right, Paul, before we dive into rapid fire [00:58:00] this week, just one more quick announcement: this episode is also brought to you by our State of AI for Business Report. We are currently running a short survey to inform our 2026 State of AI for Business Report. This is an expansion of our popular State of Marketing AI Report that we've done for the last five years.

[00:58:19] So this year we're actually going beyond marketing specific research to uncover how AI is being adopted and utilized across the organization. So to do that, we're aiming to hopefully survey thousands of business professionals across every industry and every function. We would love for you to be one of them.

[00:58:37] If you go to SmarterX.ai/survey, the survey literally only takes about five to seven minutes to complete. In return for completing it, we'll send you a copy of the full report when it drops, and you're also entered for a chance to win or extend a 12-month SmarterX AI Mastery membership. So that link again is SmarterX.ai slash [00:59:00] survey.

[00:59:00] We would love if you could take that for us if you have not already. We'd love to hear from you.

[00:59:05] Paul Roetzer: And I was just gonna note, so again, if you're listening to this podcast, we have a live audience with us today, as I mentioned up front, our Mastery members. And so I just glanced at the chat, and I'm glancing at my X feed. As of right now, I can't see any updates on the Anthropic thing, but I do see that both Claude and ChatGPT were having.

[00:59:23] Yeah. Issues this morning. Yes. Not working. Could be completely coincidental.

[00:59:28] Mike Kaput: Mm,

[00:59:29] Paul Roetzer: maybe not. I don't know.

[00:59:30] Mike Kaput: Maybe not.

[00:59:30] Paul Roetzer: Just noted.

[00:59:30] Mike Kaput: Right? The conspiracies are probably already being hatched online.

[00:59:34] Paul Roetzer: This is like a haven for conspiracy theories. Oh gosh. With everything going on right now.

[00:59:38] Block AI Job Cuts

[00:59:38] Mike Kaput: Alright, so let's dive into some rapid fire this week. First up, Jack Dorsey announced this week that his fintech company Block is cutting roughly 4,000 employees, which is nearly half its global workforce. And he explicitly named AI as the reason. Block will shrink from over 10,000 workers to just under 6,000.

[00:59:57] Dorsey was pretty direct about the [01:00:00] decision in a statement on X. He said, we're not making this decision because we're in trouble. Basically he was saying a significantly smaller team using the tools we're building can do more and do it better. Rather than cut gradually, Dorsey said he chose to move all at once.

[01:00:14] He was basically highlighting this idea that repeated rounds of cuts are destructive to morale, to focus, to the trust that customers and shareholders put in the company. And the market initially rewarded this decision: Block stock surged more than 24% in after-hours trading. However, some analysts have pushed back, noting that Block employed just under 4,000 people before the pandemic.

[01:00:40] They ballooned during a hiring spree to the very recent 10,000-plus workers. And analysts raise these questions about whether AI is the real driver here or a convenient frame for reversing overhiring. So Paul, I'd love to get your take on this. There are quite a few polarizing opinions on this one.

[01:00:57] Some people think he's doing what they call [01:01:00] AI washing, or basically using AI as an excuse to let people go to correct for that kind of overhiring. He's denied that, I believe, publicly in posts on X. Others, however, say this is the canary in the coal mine that signals a coming wave of AI-driven layoffs.

[01:01:14] Where do you fall on this?

[01:01:16] Paul Roetzer: So he did address this idea that it's AI washing, and he tweeted: yes, we overhired during COVID because I incorrectly built two separate company structures, Square and Cash App, rather than one, which we corrected mid-2024. But this misses all the complexity we took on through lending, banking, and BNPL.

[01:01:34] I don't know what that means. I'd have to look that one up. And that we're now targeting 2 million plus gross profit per person. Do you know what it is?

[01:01:41] Mike Kaput: It's buy now, pay later. It's like Affirm and all those companies that let you pay in installments. Okay, cool. Exactly. Yeah.

[01:01:48] Paul Roetzer: Okay. So they're now targeting 2 million plus gross profit per person, versus our pre-COVID efficiency, which stayed flat at around 500,000 from 2019 to 2024.

[01:01:59] We have [01:02:00] and do run an efficient company better than most. So he is basically saying that is not the case: yeah, we already corrected for overhiring. Okay. I'll just note Harry Stebbings, one of my favorite podcast hosts, he's got 20VC, great podcast. He tweeted, and this was yesterday morning:

[01:02:17] I have spoken to three founders in the last 48 hours, all of them with 500 to 1,000 employees. Each of them is planning a minimum 20% headcount reduction. Hmm. Said with great concern: this is about to get very real for labor markets. As I've said on the show many times, I have talked to companies who have told me point blank they are gearing up for 10 to 20% layoffs at any given moment.

[01:02:41] They have contingencies in place of who those people are going to be. This is real. Like, I do think that one person doing it sort of starts to give cover to the others, sort of just like last year in spring, we had Tobias Lutke start talking about the reflexive need for people to infuse AI [01:03:00] into everything they were doing.

[01:03:01] And then it led to other, you know, CEOs saying the same. I think you're gonna start to see this kind of thing. It'll start in tech, that's where it always starts with this kind of stuff, but I have talked to a lot of non-tech companies that are also planning for a 10 to 20% reduction of their workforce in 2026.

[01:03:18] Mike Kaput: Yeah, and I think it's interesting to see kind of the discussion and debate around these kinds of announcements. Because again, I know this gets headlines and people start getting really polarized around it, but the fundamental point to me at the end of the day is: when you hear someone like Boris, right?

[01:03:32] From Claude Code, saying developers, or whoever, are now 200% more productive, and we see that in other forms of knowledge work. The fundamental question is: is that possible or is it not? And I think the answer for us is a resounding yes, that's possible. And if that's possible, that doesn't happen in a vacuum; that has ripple effects. That's why you have leaders considering these decisions, because if you can be that much more productive, the math kind of does [01:04:00] itself from certain financial metric perspectives.

[01:04:03] Paul Roetzer: And I will see some people try and contradict this: well, then why is Anthropic hiring? Well, because they're growing at 10x per year in revenue. If you were growing at a thousand percent per year, you would be hiring more people too,

[01:04:15] Mike Kaput: right?

[01:04:16] Paul Roetzer: Most companies aren't growing even double digit percentage.

[01:04:20] So if you are growing at single digits, and you can be wildly efficient and productive with fewer people, you're gonna do it. Yeah. And that's the reality. You can't look at these labs and say, oh, well, they're hiring salespeople. Of course they are.

[01:04:34] Mike Kaput: It's like hypergrowth companies.

[01:04:35] Paul Roetzer: Yeah.

[01:04:36] Mike Kaput: Yeah.

[01:04:36] Paul Roetzer: They can be 4x more productive and still need more people.

[01:04:41] AI Doomer Essay from Citrini

[01:04:41] Mike Kaput: All right, so next up: there is a research essay published this past week on Substack that actually made it to Wall Street trading desks and triggered a very real sell-off. The essay is called The 2028 Global Intelligence Crisis.

[01:04:53] It's written by James Van Geelan of Citrini Research and Alap Shah, a former Citadel analyst. And it basically [01:05:00] models what could happen if AI displaces white-collar workers at the pace current capabilities suggest. The thesis centers on what these authors call a human intelligence displacement spiral.

[01:05:11] Basically, a negative feedback loop that has no natural brake. And they are pretty upfront that they're just hypothesizing; they're modeling one scenario. But in this scenario, as AI agents replace software engineers, financial advisors, and middle management, companies start laying off workers to expand margins. They reinvest the savings into more compute and accelerate further displacement.

[01:05:34] The essay then introduces two core concepts. One is ghost GDP, where AI-generated output benefits compute owners but never circulates through the consumer economy. The other is the death of friction, where AI agents optimize away the inefficiencies that entire business models depend on. So this hypothetical scenario, unfortunately, gets pretty grim.

[01:05:56] It projects this doomsday economic scenario that results from these [01:06:00] factors, including a national unemployment rate reaching over 10%, an S&P 500 crash of 38% from its peak, and a deflationary spiral, all by 2028. And the essay racked up tens of millions of views on X and prompted Citadel Securities to publish a formal rebuttal.

[01:06:18] The authors themselves clarified: hey, this is just a scenario, not a hard and fast prediction. And Paul, whether someone disagrees or agrees with this essay, it moved markets, and I'm curious why you think that's the case. It just seems wild to me that a random, interesting analysis to be sure, but just a random essay, can now tank the market.

[01:06:38] Paul Roetzer: Yeah. It's the most practical, feasible analysis I've seen of what could happen. Hmm. So if you think back to Situational Awareness and all the pause that that series of papers created, this is better in the sense that it's much more approachable and understandable to the average [01:07:00] business leader

[01:07:01] Mike Kaput: Yeah.

[01:07:01] Paul Roetzer: Or worker. So I'll read a few excerpts. I would highly recommend going and reading this whole thing. So again, we are not saying this has an 80% probability of coming to reality. Their whole point was to lay out a possible outcome. And I will tell you, reading this thing, there was lots of it where I was thinking it's a more probable outcome that this is close to the truth than most people think.

[01:07:27] And that's hard to wrap your mind around, but I'll just give you a few things. So they're writing this from the perspective that it's June 2028 and they're kind of summarizing what happened. So it starts with the consequences of abundant intelligence. It says: two years. That's all it took to get from contained and sector-specific, which is where we are today, to an economy that no longer resembles the one any of us grew up in.

[01:07:49] This quarter's macro memo is our attempt to reconstruct the sequence, a postmortem on the pre-crisis economy. So then they're rewinding back to October 2026. The S&P flirted with 8,000, [01:08:00] NASDAQ broke 30,000. The initial wave of layoffs due to human obsolescence began in early 2026, and they did exactly what layoffs are supposed to do.

[01:08:07] Margins expanded, earnings beat, stocks rallied. Look at Block: lay off 4,000 people, stock goes up 17%. Record-setting corporate profits were funneled back into AI compute. The headline numbers are still great. Nominal GDP repeatedly printed mid-to-high single-digit annualized growth. Productivity was booming.

[01:08:25] Everything's looking great. The owners of the compute saw their wealth explode as labor costs vanished. Meanwhile, real wage growth collapsed despite the administration's repeated boasts of record productivity. White-collar workers lost jobs to machines and were forced into lower-paying roles. It talks about the ghost GDP, where it's like it's improving, but it's not really seeing the impact on the people in a positive way. Then it goes into how it started.

[01:08:48] So again, imagine the coding topics we just talked about with Boris and Claude. It says: in late 2025, agentic coding tools took a step-function jump in capability. Go listen to the episode where we talked about [01:09:00] December 2025 and what changed. A competent developer working with Claude Code or Codex could now replicate the core functionality of a mid-market SaaS product in weeks. Not perfectly, or with every edge case handled, but well enough that the CIO reviewing a $500,000 annual renewal started asking the question: what if we just build this ourselves?

[01:09:19] The companies most threatened by AI, the software companies, became AI's most aggressive adopters. Software was only the opening act. What investors missed while they debated whether SaaS multiples had bottomed was that the reflexive loop had already escaped the software sector. The same logic that justified ServiceNow cutting headcount

[01:09:38] applied to every company with a white-collar cost structure. By early 2027, large language model usage had become default. People were using AI agents who didn't even know what an AI agent was, in the same way people who never learned what cloud computing was use streaming services. They thought of it the same way

[01:09:55] they thought of autocomplete or spellcheck: a thing their phone just did. It started [01:10:00] out simple enough: agents removed friction. I will not, though I would love to, read the rest of this. It's insane. Go read it. Don't read it as fact, don't read it as this is exactly what's gonna happen, but if you just think about what I just read to you, all of that is already playing out.

[01:10:17] They're talking about the reality of what is currently going on, and they're projecting out how it could play out if the exponential curves that these labs are seeing remain true. Back in February of 2025, Boris looked at an exponential curve and said, wow, by the end of the year it's gonna be writing a hundred percent of my code. And he was right. That's what's dictating all the decisions.

[01:10:37] It's dictating why Google has spent 180 billion on CapEx this year. All of those exponentials are why they're so confident in a future that probably looks closer to what this article says than what you or others think the future looks like.

[01:10:54] Politics of Data Centers

[01:10:54] Mike Kaput: All right, so next up. According to at least some measures, public support for AI data centers appears [01:11:00] to be collapsing.

[01:11:00] A recent poll by Embold Research found that only 28% of Americans now support data centers near their communities, with 52% opposed. That's a net support of negative 24%, down from a barely positive 2% just months earlier. So it was skewing slightly positive just months ago. Data centers now poll worse than natural gas plants, solar farms, wind farms, and nuclear facilities.

[01:11:26] Interestingly, opposition on this issue appears to bridge the political spectrum. Left-leaning advocates who do not support data centers are often angry about water and energy strain, while right-leaning activists view it as elite big tech overreach.

[01:11:42] Paul Roetzer: It's not bad when both sides hate you or it's not good when both sides hate you.

[01:11:45] Mike Kaput: It's not good. And notably, the deepest opposition right now is among rural Republicans, with negative 20% net support. And there are some really public-facing issues where this backlash is becoming real. For instance, in [01:12:00] Southaven, Mississippi, Elon Musk's xAI actually installed 27 temporary gas turbines without permits to power its Colossus AI cluster.

[01:12:08] Its Colossus AI cluster. These run 16 to 24 hours a day. There's been a bunch of reporting on residents complaining and fighting back. They describe constant roaring pops, high pitch whining. And xAI spent 7 million building a sound barrier wall that neighbors nicknamed the Temu Sound Wall because it does almost nothing, kind of like a cheap product you'd buy on Temu.

[01:12:32] And there are some lawsuits potentially in the works around this. So Paul, it's really striking me, like you said, that this issue cuts across traditional political divides. I wonder if data centers could really be the lightning-rod issue going into the midterms. They just tick a bunch of boxes. And I'm not saying I agree or disagree,

Yeah. But narratively they're easy to understand, for the most part really visible, a visceral kind of symbol of a lot of things people don't seem to like about AI. [01:13:00]

[01:13:00] Paul Roetzer: Yeah. We've said many times, one of the things that slows this down is societal revolt. Yeah. And you do need a tangible thing to revolt against.

[01:13:08] And this is an easy one. It's easy to protest. Like you said, you know, the videos online of the loud noises, all that stuff is very tangible and understandable to people. Yep. It'll be very fascinating to watch how they spin this politically, because to your point, if Republicans in rural areas where the data centers are going hate these things,


[01:13:29] Paul Roetzer: But the current administration is accelerationist, in terms of like, accelerate at all costs, forget EPA guidelines, all this stuff, just build, build, build. But if they find that the polls are moving based on people's hatred for these things,

[01:13:41] Mike Kaput: right.

[01:13:41] Paul Roetzer: It's like, what do you do then? So I don't know the answers, but this is a really interesting thing to watch, to see if it catches steam and actually starts to affect the

[01:13:51] messaging going into the midterms. And then just separately, it's a problem environmentally. I talk to a lot of people who worry about the environmental impact of [01:14:00] AI. This is a very tangible thing for them to latch onto and focus on, the impact it's having. So it's a big issue on a lot of fronts.

[01:14:08] Mike Kaput: You know, one final aside here I found interesting. This is definitely way further down the line, but we've talked about that plan to build data centers in space, and we'll see if that ever happens, how real that is. But someone made a really interesting point in a post online. They were like, hey, you know what another advantage of data centers in space is? People can't go burn them down.

[01:14:27] You know? So in this idea of, hey, down with AI, societal backlash: your average person can't reach those, right? You know? Yeah. So I think they were kind of getting at this point that an unintended, perhaps, benefit of those is that they sidestep this issue. But again, this is possibly a fantasy. We'll see.

[01:14:45] Paul Roetzer: Yeah. I don't wanna spend a topic now on data centers in space, but have you seen a report yet on what happens when there's like 2 million of these things and they start failing and just burn up in the atmosphere? I haven't gone really deep down the data-centers-in-space rabbit [01:15:00] hole, but

[01:15:00] Mike Kaput: I imagine a data center in space is pretty large.

[01:15:03] So yeah, I don't know what happens when it falls back to Earth. Might be a good question for us to run through Claude or something one of these days.

[01:15:11] Paul Roetzer: We'll go deep on data centers in space.

[01:15:13] Mike Kaput: Yeah, exactly right. Alright, so next up,

[01:15:15] Anthropic Fluency Index

[01:15:15] Mike Kaput: Anthropic has published what it calls the AI Fluency Index. This is a framework for measuring how well people are using AI.

[01:15:23] So this is a kind of study with some data they collected that analyzed almost 10,000 Claude conversations and tracked 24 specific behaviors that basically define effective collaboration with AI. And their central finding is most people are still not using AI as effectively as they could, and the better AI gets, the worse this problem becomes.

[01:15:43] Because there's this kind of headline finding here that the verification gap is still a big problem. We've talked about this. So when Claude produces polished artifacts like working apps, code, formatted documents, they found that users become more directive but less evaluative. So the better the [01:16:00] output looks, the less people question it.

[01:16:02] Obviously Anthropic says this is exactly backward and so they recommend a few things people can do here. So first, stay in the conversation, treat the first response as a starting point, not the answer. They found conversations with iteration showed roughly double the effective behaviors of those where users accepted the first output.

[01:16:20] Second, question polished outputs most of all. The moment something looks good, that's exactly when you should pause and start verifying things. And then third, they say, set the terms of collaboration upfront. Only about 30% of users explicitly tell Claude how they want it to interact, but those who do show dramatically better results.

[01:16:37] So Paul, I found this to be a valuable read, and I'm glad Anthropic is putting it together. But I guess I just still have to ask: we're three-plus years into this and we're literally just now coming out with these, again, really good, but basic recommendations. And clearly even advanced Claude users, maybe further ahead of the curve, are not using these tools appropriately.

[01:16:59] Paul Roetzer: It goes [01:17:00] back to the frontier firms thing about the lack of adoption and understanding of what these things are really capable of, in enterprises in particular. And when I was glancing at this, getting ready, I was thinking back to episode 193, where I talked about basic, intermediate, and advanced users.

[01:17:14] Yep. So the basic user treats it as an answer engine, the intermediate user more as like an assistant and advisor through continuous prompting and dialogue, and then the advanced user as a coworker, an on-demand subject matter expert. That's how you think about it. And I really liked the one chart they showed about behavioral indicator prevalence.

[01:17:32] Yep. And so it prioritized these different traits. It's like: iterates and refines is really important, clarifies the goal before asking for help, provides examples of what good looks like, specifies the format and structure needed, sets the interaction up well. So it kind of lays out what the people who are doing it right are doing.

[01:17:47] And I think that was a helpful chart to see, and again, a good report to go look at to understand kind of where we are. But again, it always comes back to: we are nowhere near as far along as most people think. [01:18:00] Yes, there are some power users doing incredible things. Like one of my buddies, I was walking into basketball Thursday night, and one of the guys was building like an AI-native startup.

[01:18:10] He's like, you playing with OpenClaw? You gotta try it. I was like, dude, I don't mess with that stuff. I don't know what I'm doing with OpenClaw. I'm not turning that thing loose on my computer. So you have these people just racing out into the frontiers and doing all this cool stuff, but most people are still like, it's just an answer engine.

[01:18:24] Like I just give it a prompt and I'm happy or not happy with its output.

[01:18:28] Mike Kaput: All right. Next up.

[01:18:29] Anthropic Distillation Attacks

[01:18:29] Mike Kaput: Anthropic has accused three Chinese AI labs, DeepSeek, Moonshot AI, and MiniMax, of running industrial-scale distillation attacks on Claude, using more than 24,000 fraudulent accounts to extract over 16 million exchanges' worth of training data.

[01:18:44] So distillation is the attempt to train a weaker model on a stronger model's outputs, and it can be a legitimate technique when it's applied internally. But Anthropic is alleging these campaigns basically amount to systematic intellectual property theft. So the scale varied by [01:19:00] lab. MiniMax ran the largest campaign with 13 million exchanges.

[01:19:04] Moonshot AI extracted 3.4 million exchanges; they were actually focused more on tool use and computer vision. DeepSeek's campaign was smaller at 150,000 exchanges but targeted reasoning capabilities. So Anthropic framed this as a national security issue, arguing that illicitly distilled models lack safety guardrails and can feed capabilities into military and surveillance systems.
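[Editor's note: for readers unfamiliar with the technique being discussed, here's a minimal sketch of what distillation looks like in code, using plain numpy. A "student" model is trained to match a "teacher" model's softened output distribution. The temperature value and toy logits below are illustrative assumptions, not details from Anthropic's report.]

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens the distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the student's.
    The student learns to mimic the teacher's full output distribution,
    not just its top answer; this is what makes distillation data-efficient."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))))

# A student whose logits are closer to the teacher's incurs a lower loss,
# so minimizing this loss over many exchanges pulls the student toward the teacher.
teacher = np.array([4.0, 1.0, 0.5])
close_student = np.array([3.5, 1.2, 0.6])
far_student = np.array([0.5, 4.0, 1.0])
assert distillation_loss(close_student, teacher) < distillation_loss(far_student, teacher)
```

In an "attack" version of this, the teacher's outputs aren't internal logits but millions of prompt/response exchanges harvested from a rival's API, which is what Anthropic alleges happened here.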

[01:19:28] Interestingly enough, Elon Musk had to weigh in on this as well, responding on X by calling Anthropic hypocritical. He's basically referencing the fact that the company just settled for $1.5 billion over its use of copyrighted books to train Claude. And he said that Anthropic is guilty of stealing training data themselves at massive scale.

[01:19:46] So Paul, I know this is a bit of a technical topic, but it does have pretty big implications for kind of the ecosystem overall, doesn't it?

[01:19:55] Paul Roetzer: Yeah. So it was interesting, because Anthropic, Google, and OpenAI have all, in the last, [01:20:00] like, 21 days, said the same thing is happening to all of them.

[01:20:03] Mike Kaput: Yeah.

[01:20:03] Paul Roetzer: So it's obviously a concentrated attack by the Chinese firms to do this, to steal the weights and how it all works and things like that.

[01:20:11] There was an NBC News report in early February. It said Google says its flagship AI chatbot Gemini has been inundated by commercially motivated actors who are trying to clone it by repeatedly prompting it, sometimes with thousands of different queries, including one campaign that prompted Gemini more than 100,000 times.

[01:20:29] Interestingly, and tying back to the Claude thing, I was just trying to find it. So there's this, who is this guy? He's like the Under Secretary of War, I don't remember his name. I'll find the link and put it in there. But he tweeted something, okay, so he was attacking Anthropic.

[01:20:47] This was on like Sunday. And one of the lines in his tweet was: what's ironic is that there's been no bigger thief of America's public information, identity en masse, or creators' works than Anthropic. Search the lawsuit. So [01:21:00] I usually stay outta this stuff, but I couldn't help myself, so I retweeted and said, is anyone gonna tell him they all did this?

[01:21:07] Mike Kaput: Yeah.

[01:21:08] Paul Roetzer: And so that was it, I let it go. And then I see this morning that someone did, because he deleted the tweet, removed the part about them stealing everyone's copyrighted material, and retweeted his original tweet minus that part, because they're all doing it. So unfortunately, this is the PR battle the AI labs aren't going to win, because everyone's like, well, you stole our stuff.

[01:21:30] Who cares if they're stealing your stuff?

[01:21:31] Mike Kaput: Right. Right.

[01:21:32] Paul Roetzer: It is a major problem, and it is important that they stop this from happening. They're not apples to apples, but I get the whole, you stole our stuff, so it's cool if they steal yours. But right, we don't want this happening. This is a bad precedent.

[01:21:46] NVIDIA Earnings

[01:21:46] Mike Kaput: All right, next up. Nvidia reported fourth quarter revenue of $68.1 billion, up 73% year over year. They beat analyst expectations with these latest earnings; expectations were 65.8 [01:22:00] billion. Data center revenue hit 62.3 billion, now more than 91% of total revenue, a segment that has scaled nearly 13x since ChatGPT launched in late 2022.

[01:22:13] So their full year revenue reached $215.9 billion, with net income of $120.1 billion. Guidance for next quarter came in at $78 billion, more than $5 billion above what analysts expected. You would typically think that should be good for Nvidia. However, despite the beat on every metric, Nvidia shares fell 5.5% the following day, erasing roughly $260 billion in market value, the largest single day decline since April 2025.

[01:22:45] Can you clarify this for me, Paul? It certainly seems like these results should have delighted markets.

[01:22:52] Paul Roetzer: Yeah. I mean, they're basically projecting out years in advance, not on demand. It makes no sense when you look at it on the surface. I don't pretend to be a day [01:23:00] trader smarter than,

[01:23:01] Mike Kaput: yeah.

[01:23:02] Paul Roetzer: Irrational markets, in essence. I'm just looking now, they're down 4%. I don't,

[01:23:06] Mike Kaput: yeah.

[01:23:07] Paul Roetzer: The Dow and NASDAQ are down about 1%. So Nvidia is down significantly more than the market as a whole. Despite, you know, I don't know how much us starting a war has to do with this, but

[01:23:18] Paul Roetzer: it seems like the market as a whole doesn't care that we're doing what we're doing in the Middle East.

[01:23:24] So I think Nvidia is basically just being punished for, I don't know what, for breaking records and projecting well. I don't try and figure this stuff out. I just bet long term on companies I think are gonna do really well as there's more demand for AI, and I would consider Nvidia to be one of those companies.

[01:23:41] Mike Kaput: It's also just worth looking at those numbers regardless of the Wall Street response. Because, I mean, who knows if we're in a bubble or how long this will last, but those numbers are still going pretty strong at the moment.

[01:23:51] Paul Roetzer: Yeah. I think the biggest problem is that their revenue is centered predominantly on eight companies, and people worry about that.

[01:23:58] And the circular investments [01:24:00] are real. Like, they just put $30 billion into OpenAI, and that $30 billion is gonna be spent on NVIDIA chips. Like

[01:24:05] Mike Kaput: Right.

[01:24:06] Paul Roetzer: And so some people worry that it's very bubble-like. The mechanism with which these investments work is, we'll put money into you, and you will in turn turn around and spend that money with us.

[01:24:16] Yep. On cloud costs, chip costs, things like that. And I get that, I'm not debating that. It's interesting.

[01:24:24] AI Product and Funding Updates

[01:24:24] Mike Kaput: Yeah, for sure. Alright, Paul, so to wrap up here, we've got a number of AI product and funding updates. I'm gonna run through these very quickly as we wrap the episode. If you have anything to add, please chime in.

[01:24:35] So first up, Google released Nano Banana 2, its next-generation image model that combines the quality of Nano Banana Pro with sub-second generation speeds, including for 4K images. It's rolling out now across the Gemini app in 141 countries. Separately, Google acquired Producer.ai, formerly the viral AI music tool called Fusion, bringing the startup into Google Labs and pairing it with DeepMind's Lyria [01:25:00] 3 music generation model.

[01:25:01] This platform lets you generate full songs, create music videos, and build custom instruments from text prompts, with paid plans starting at eight bucks a month. Anthropic has expanded its Cowork platform with a number of enterprise plugins, which are pre-built bundles of skills and tool connections for specific job functions, including HR, design, engineering, finance, and operations.

[01:25:24] The update also includes private marketplaces for internal deployment and new connectors for Google Workspace, DocuSign, and FactSet, among others. Peak Labs, which is known for its AI video generator, pivoted into a different product entirely, creating persistent AI digital twins called AI selves. Users upload a selfie and record their voice.

[01:25:46] They answer personality questions and set their AI loose to act autonomously across Slack, WhatsApp, iMessage, and social platforms. The company launched this with a retro-futuristic infomercial, and its employees' own AI [01:26:00] selves tweeted autonomously about the product. Last but not least, there is some more shakeup at Thinking Machines Lab.

[01:26:08] Two more founding members of Mira Murati's Thinking Machines Lab quietly left for Meta, bringing total departures to at least seven since the startup launched less than a year ago. The company has raised $2 billion so far at a $12 billion valuation, but has lost co-founders and early researchers to both Meta and OpenAI, driven by what Fortune has reported as money constraints, compute constraints, and a lack of clarity on products.

[01:26:33] Paul Roetzer: That is an acquihire waiting to happen. If that doesn't get acquihired in the next 90 days, I would be shocked.

[01:26:41] Mike Kaput: One final note here, like we always do: we now have a new AI Pulse survey that will go live for this week based on this week's topics. The two questions we're gonna ask are, first, where do you stand on the Anthropic-Pentagon dispute over AI safety red lines? And second, Block cut nearly half its workforce this week and named AI as the reason.

[01:26:59] What's [01:27:00] your reaction? I'd be very interested to see the pulse on those two questions.

[01:27:04] Paul Roetzer: Oh yeah. And what changes between now and next week too?

[01:27:07] Mike Kaput: Yeah, no kidding. So go to SmarterX.ai/pulse and you will find this week's survey there when you hear this episode on Tuesday, March 3rd. And we would love to hear from you.

[01:27:19] So Paul, really, really appreciate you breaking everything down for today. And thanks to everyone for tuning in to episode 200, whether you're joining us live or listening on your podcast streaming platform. If you want to catch the Q&A that we're about to do offline here, AI Mastery members can watch it in your account.

[01:27:38] And if you're not a member yet, head to academy.SmarterX.ai to join and get access. So Paul, thanks again.

[01:27:44] Paul Roetzer: Yeah, and thanks everyone for the 200 episodes. Like, you know, this time last year we had about 40,000 downloads a month, and now it's about 130,000 downloads a month. So the listenership to the podcast has blown up in an incredible way.

[01:27:57] And I know, Mike, when we're out in public and [01:28:00] get to go do talks and stuff, the number of people who come up to us and say they listen to us every week, it's awesome to see, and it's not something we ever expected. We literally just started doing this to synthesize information every week for ourselves.

[01:28:12] It was a pretty selfish reason why we started doing the weekly, you know, four-plus years ago. So yeah, we're just grateful for everybody that listens and for all the personal notes we get. And, you know, for the grace when maybe we don't always stick as closely to the neutrality we aim for in everything we say and do. We're doing our best.

[01:28:31] So yeah, hopefully we will have another couple hundred episodes and we can celebrate. We're gonna try and do something special every hundred, so hopefully sometime next year we'll get to number 300. So thanks everyone for listening, and for our AI Mastery members, stick with us. We're gonna get into the Q&A, so everybody have a great week.

[01:28:48] I'm sure we'll have plenty to talk about next week.

[01:28:50] Thanks for listening to The Artificial Intelligence Show. Visit SmarterX.AI to continue on your AI learning journey and join more than 100,000 professionals and [01:29:00] business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community.

[01:29:15] Until next time, stay curious and explore AI.