The Artificial Intelligence Show Blog

[The AI Show Episode 203]: Anthropic vs. Pentagon Round 3, NYT AI vs. Humans Writing Test, Atlassian’s AI-Era Layoffs & Grammarly's Expert Cloning Scandal

Written by Claire Prudhomme | Mar 17, 2026 12:15:00 PM

The Pentagon's supply chain risk designation against Anthropic has entered the courtroom, and the implications are far bigger than one company.

This week, Paul Roetzer and Mike Kaput break down Anthropic's two federal lawsuits against the Pentagon's unprecedented supply chain risk designation, the New York Times quiz that had 86,000+ readers prefer AI writing, and Atlassian becoming one of the first major companies to openly name AI as the reason for cutting 1,600 jobs. Plus: Adobe's CEO stepping down after 18 years, Amazon's AI-caused outages, Grammarly's expert cloning controversy, and a packed product and funding roundup.

Listen or watch below and see show notes and transcript that follow.

This Week's AI Pulse

Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI.

If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.

Click here to take this week's AI Pulse.

Author: Claire Prudhomme

Listen Now

Watch the Video

Timestamps

00:00:00 — Intro

00:03:11 — AI Pulse Survey Results

00:07:48 — Anthropic vs. Pentagon Round 3

00:30:02 — New York Times Releases Controversial "AI Writing Quality" Quiz

00:46:18 — Atlassian Layoffs and Job Loss Dashboard

00:58:49 — Adobe CEO Stepping Down

01:07:14 — Amazon AI-Related Outages and Engineering Struggles

01:14:28 — McKinsey AI Chatbot Hacked

01:19:49 — AI Politics Update

01:24:06 — Grammarly AI "Expert Review" Controversy

01:30:51 — Andrej Karpathy's Autoresearch Agent

01:34:47 — AI Product and Funding Updates

 

This week’s episode is sponsored by our 2026 State of AI Report.

This year, we’re going beyond marketing-specific research to uncover how AI is being adopted and utilized across the organization, and we need your help to create the most comprehensive report yet.

It’s a quick seven-minute lift. In return, you’ll get the full report for free when it drops, plus a chance to win or extend a 12-month SmarterX AI Mastery Membership. Go to smarterx.ai/survey to share your input.

 

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: It’s BS stuff that's coming from the leaders of the labs and politicians, and it is disingenuous and it is harmful to take this highly confident belief that it'll just work out. You do not know that. No one knows that. So at minimum we should be doing contingency planning for worst-case scenario outcomes or even mid-range bad scenario outcomes.

[00:00:23] Like, something we cannot do is pretend like you understand the future because you're an economist or you're an AI leader or whatever it is, and that, like, it just works out. You don't know that. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.

[00:00:44] My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you [00:01:00] insights and perspectives that you can use to advance your company and your career.

[00:01:05] Join us as we accelerate AI literacy for all.

[00:01:12] Welcome to episode 203 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording Monday, March 16th. This will be dropping on St. Patrick's Day, so happy St. Patrick's Day if you are celebrating and listening. My kids will be home from school, so I will, I'll be trying to get ready for our annual meeting while my kids are doing whatever they're gonna be doing on the day off.

[00:01:34] That's kind of crazy. I don't, I didn't get St. Patrick's Day off as a kid. Like, no, not unless you skipped school, and it's like, all right, just do what you're gonna do. That was a good day to play hooky as a kid.

[00:01:44] Mike Kaput: No kidding.

[00:01:45] Paul Roetzer: All right, so, today's episode is brought to us by the State of AI for Business survey and report.

[00:01:50] The survey is in the field right now. We would love to have you participate in that. This is an expansion on our popular annual State of Marketing AI report that we do every year. This [00:02:00] is, what, the sixth year, Mike, is that right? Yeah.

[00:02:02] Mike Kaput: Okay. So this will be our sixth year of research. Yeah.

[00:02:04] Paul Roetzer: All right.

[00:02:04] So we have five years of data. Last year we had over 1,800 people participate, so it's incredible data on AI usage, adoption, how people feel about AI, how they're looking at the future of work, things like that. So it takes about five to seven minutes to go through it. It'd be great to have you be a part of it.

[00:02:20] It is just SmarterX.ai/survey and you can participate in that. We will be releasing that report in the spring, right, Mike? Yep. Are we, what are we looking at? Like a May release? Yeah,

[00:02:30] Mike Kaput: like mid, mid-May roughly, I think.

[00:02:32] Paul Roetzer: Okay.

[00:02:32] Mike Kaput: Yeah,

[00:02:33] Paul Roetzer: so we'll, we'll do a release on that when we release it. We always do a webinar.

[00:02:36] and then we will probably do at, at minimum a main topic on the podcast about it. We might be, we'll do a special episode of the podcast about it. But, really looking forward to that research. We have an incredible response already. Hmm. so keep those responses coming. We'll keep the survey open probably for a couple more weeks here.

[00:02:54] And, you get, if you take the survey, you not only get a copy of the full report [00:03:00] when it drops, but also a chance to win or extend a 12-month SmarterX AI Mastery Membership to our AI Academy. So again, SmarterX.ai/survey. Okay.

[00:03:11] AI Pulse Survey Results

[00:03:15] Paul Roetzer: And then each week, not to be confused with our State of AI survey, we do a pulse, and this is just an AI pulse for our listeners to see how people feel about topics that we've talked about on the podcast that week.

[00:03:23] So from last week, we will go through the, again, this is an informal poll, but nearly 70 responses last week. The first question was, has the Anthropic Pentagon situation changed how you think about which AI company you use? 26% said, yes, I've switched or am considering switching tools 'cause of it. Yeah. Wow.

[00:03:44] That's, what, 40, yeah, 41% said it's made me think about it, but I haven't changed anything. And 32% said, no, I choose AI tools based on capability, not company politics. So it's interesting. So 41 plus 26 is [00:04:00] 67-ish said they have either switched or they are thinking about switching. Now, I assume in most cases that is switching to Anthropic, correct?

[00:04:09] I don't know. Like we didn't really get into the specifics there, but, I do think there's been a groundswell of support for Anthropic as, as a result of this situation. And then the second question, how would you describe AI access at your organization right now? Okay, this is an interesting one. 54% said wide open.

[00:04:26] Most employees can use AI tools freely.

[00:04:29] Mike Kaput: Wow. Okay. That's a lot.

[00:04:30] Paul Roetzer: Yeah. And then 34% said selectively available, approved for certain teams or use cases. That's what I see a lot at big enterprises, Mike, for sure. Yeah. And then, a mix of effectively blocked, IT or legal has restricted most access, and no formal policy.

[00:04:48] It's a gray area. So most, it looks like 87%, yeah, are either wide open or selectively using. That's interesting.

[00:04:56] Mike Kaput: That's very interesting.

[00:04:58] Paul Roetzer: Yeah, I would say so. My guess [00:05:00] there is selectively available are largely people in enterprises, like larger enterprises that are answering that way. Yeah. And wide open is probably SMBs.

[00:05:10] Mike Kaput: That would be my guess.

[00:05:10] Paul Roetzer: Yeah. It's the small businesses. And we didn't, we didn't ask, I don't, I don't think we had. Yeah, we didn't ask in relation to, like, size of company, but I would guess that that would be how that would play out. Oh, and then we had a third one. Is this the third one, Mike?

[00:05:23] Mike Kaput: yeah, we, we can get into this one too, because we just ask like, how many, 'cause sometimes we'll just ask kind of what's your title or like, what's your function.

[00:05:30] Sometimes we'll kind of ask something a little more AI specific. This week was like, how many different AI tools are you using regularly?

[00:05:36] Paul Roetzer: Yeah. And so it gave like ChatGPT, Claude, Gemini. 'cause we've talked a lot about this idea that we're using multiple models. Like I've said many times on the show recently, I use like three different models all day.

[00:05:46] And so this one was, 60% said two or three. So yeah, multi-model users, 32% said four or more. And then only 7% said they're using just one. That's, that's kind of [00:06:00] fascinating.

[00:06:00] Mike Kaput: Very interesting.

[00:06:01] Paul Roetzer: Yeah. So 92, almost 93% when you round up are using two or more models regularly.

[00:06:07] Mike Kaput: Yeah.

[00:06:08] Paul Roetzer: Huh. Okay. So again, informal poll, like, it's about, again, 70 responses from our listeners.

[00:06:14] So these are already AI forward people. They're people who are regularly listening to the show and proactively going and, completing the survey. So it is just a sample of, of information. And Mike, how do we, take part in the polls?

[00:06:26] Mike Kaput: Yes. If you go to SmarterX.ai/pulse for our weekly pulse, you can find this week's survey there and go take that for yourself.

[00:06:35] We'd love to have you take this week's survey at the end, once we've kind of gone through our topics, we'll share what we're asking about this week.

[00:06:42] Paul Roetzer: Alright, and now we are gonna get into it. If you're new to the show, we have lots of new listeners. I think I shared this maybe on one of the live sessions I did last week.

[00:06:50] I don't remember which one it was, but last year at this time, this podcast was probably getting somewhere in the range of 40 to 50,000 downloads a month. We are [00:07:00] now nearing 150,000 downloads a month, so we know we have lots of new listeners every week. And so welcome to the weekly version of this

[00:07:08] podcast. What we do here is three main topics each week, and then we try and go through about seven to 10 rapid fire items, I guess by bundling them into that AI product and funding roundup at the end. Mike, we actually tackle probably closer to 15 to 20 news items, but we do it all in about an hour and 15 minutes each week.

[00:07:26] So, the first section is these three main topics that Mike and I curate throughout the week. And then, we get ready on Sunday nights and Monday mornings and we just get on and we talk about it. So the topic that doesn't seem to want to go away is, Anthropic versus the Pentagon. We are in round three, Mike.

[00:07:44] So what is new with Anthropic and the government?

[00:07:48] Anthropic vs. Pentagon Round 3

[00:07:48] Mike Kaput: Well, quite a bit. This story just keeps escalating and developing. So we've talked in the past couple weeks about the Pentagon labeling Anthropic a supply chain risk. And that's the first time an American AI [00:08:00] company, or any American company, has received a designation normally reserved for foreign adversaries.

[00:08:05] So what's starting to happen this week is that Anthropic has filed two federal lawsuits to block this blacklisting, one in San Francisco federal court, one in the DC appeals court, and they have sought an emergency temporary restraining order against this designation. Because this designation matters.

[00:08:24] Anthropic's CFO states that hundreds of millions of dollars in expected revenue this year from work tied to the Pentagon is already at risk, because due to this designation, the government cannot work with a company like Anthropic. And Anthropic actually warned that this could ultimately cost it billions.

[00:08:41] If enterprise customers and international governments follow the Pentagon's lead. There's already been some fallout. They claim that includes a lost $100 million deal and $180 million in disrupted financial institution negotiations. Another deal for $15 million has been paused, and another one for [00:09:00] $80 million now has the customer demanding unilateral cancellation rights they didn't have before,

[00:09:04] given the chaos around this designation. Now, some companies have actually filed amicus briefs in support of Anthropic. Microsoft filed one that was very aggressive, urging the court to grant a temporary restraining order immediately and arguing that the Pentagon's move set a precedent for government retaliation against any tech company that pushes back on how its products are used.

[00:09:27] A group of 37 AI researchers, including Google DeepMind's Jeff Dean, filed a brief arguing that the Pentagon's demand would effectively force AI companies to strip safety guardrails from their models. 22 former military and intelligence leaders also filed in support, warning that blacklisting Anthropic weakens national security by pushing the best AI talent and technology away from government partnerships entirely.

[00:09:54] Pentagon CTO Emil Michael got on CNBC with some talking points saying that Claude would [00:10:00] pollute the defense supply chain because its safety guardrails make it unreliable for military operations. Now at the same time, Palantir CEO Alex Karp confirmed publicly that Claude is still actively being used in military operations despite this designation, because there is currently no viable replacement for its capabilities.

[00:10:20] And Paul will get into, I think, this point here, but an analyst out there, Dean Ball, pointed out that every frontier AI model, including OpenAI's and Google's, is trained with behavioral guidelines that are similar to what Anthropic calls its constitution. So if Anthropic's guardrails constitute a supply chain risk, that same logic could apply to any AI provider.

[00:10:41] So where are we now with this, Paul? What are you paying attention to in the latest round of developments?

[00:10:46] Paul Roetzer: It's a good question, Mike. So now that we're entering the legal phase of this, I feel like things are just gonna slow down and extend. I've said before, I think at a, [00:11:00] at a high level, what ends up happening is they negotiate a deal, because the government needs Anthropic.

[00:11:05] They know they need them, but they need an off ramp to save face that they went too far.

[00:11:12] Paul Roetzer: And there's egos involved and there's politics involved. and that just is gonna make things a little bit messy. So when I was looking at this, I was trying to figure like, what, what is the angle here? Like, what can I add to this situation that we haven't talked about in the last two weeks?

[00:11:26] So, a couple of things to touch on. First, just to level set for everyone, again, the size of the company we're talking about here. So, Anthropic, you know, it's still talked about as a startup, it's a five-year-old company, but their most recent funding in February was a $30 billion Series G round, which is pretty far along in the fundraising process

[00:11:46] before an IPO. They were valued at $380 billion. The Series G round had 49 investors, according to CB Insights, which is a, a site that we subscribe to. So 49 investors in that [00:12:00] round alone. Throughout their, their different raises, I think there's been 25 total funding events. There's every major VC firm, basically, influential tech leaders including Eric Schmidt, the former CEO and chairman of Google, Dustin Moskovitz, who is the co-founder of Facebook and more recently co-founded Asana,

[00:12:20] a guy named Jaan Tallinn. Now that was a name that didn't register with me initially, but remember that name. I'm gonna come back to that name in a couple of topics. So, Jaan Tallinn, and then major investments from Google, Microsoft, Amazon, and Nvidia. They're planning to IPO in 2026, and they've raised a total of $61.5 billion to date, and they have a revenue run rate that is approaching $19 billion.

[00:12:46] They're, like, catching up to OpenAI in terms of the revenue run rate. So the reason we keep talking about this topic is this is one of the three most important AI labs in the world right now. This is not an insignificant company. This is [00:13:00] not a small thing. The second reason is because of the precedent that it's setting.

[00:13:04] Mike, and you alluded to this, like, this is why the amicus briefs were filed, because of the implications of this. So I wanna zero in on these comments from Emil Michael, the Department of Defense Under Secretary for Research and Engineering. He also leads AI adoption at the Pentagon, and he has been the lead negotiator with Anthropic for months on this.

[00:13:25] So there were a couple of tweets that we'll put the links in for, but I also went and pulled the full video from CNBC's Squawk Box where he did this interview. So it was like a 20-minute interview. CNBC reporter Rebecca Quick asked what I thought was a very good question about whether or not the US government was undercutting one of its strongest assets in the race against China and other nations for AI supremacy.

[00:13:49] Which is one of the stated goals of this administration. So in essence, they're, like, they're kneecapping themselves. They, they have this overarching goal that we have to win in this race against, you know, these foreign [00:14:00] adversaries. And yet one of the companies that is most important to you to achieve that mission, you are basically trying to run out of business.

[00:14:08] So she asks this very good question, and I'll read his response as the actual excerpt. He says, yeah, I'd say that the way their executives conducted themselves by asking for classified information. So this is referring to them going to Palantir and asking how Anthropic's Claude was used in the Venezuela mission, right?

[00:14:26] Mike, that was the original version of this. all these excursions or incursions are, I'm losing track of which ones we're talking about, but Venezuela is when this all arose. This was in February, right? Like, I mean that wasn't that long ago. Yes.

[00:14:38] Mike Kaput: I believe it was not that long ago. Yeah.

[00:14:40] Okay.

[00:14:42] Paul Roetzer: the way their executives conducted themselves by asking for classified information and communicating messages that should be classified at among their executives, bad faith negotiations. And then the other really subtle point that I think we're going to hear a lot about AI in the coming year, insider threats and model poisoning.

[00:14:59] So again, this [00:15:00] is Emil Michael responding to Becky Quick's question. He continued, remember their model has a soul and a constitution that's not the US Constitution. The other day their model was anxious. Now he's, like, referring to reports he was hearing. And they believe it has a 20% chance right now of being sentient and having its own ability to make decisions.

[00:15:24] So does a Department of War want something like that in their supply chain, so that it could hallucinate, it could corrupt models that are used by defense contractors who are building weapon systems and airplanes and so on. So the truth of it is we can't have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences pollute the supply chain.

[00:15:47] So our war fighters are getting ineffective weapons, ineffective body armor, ineffective protection, and that's really where the supply chain risk designation came from. Okay. Yeah. So then Dean Ball, who you mentioned, Mike, and [00:16:00] we talked about in a couple recent episodes, he was the lead author of the AI Action Plan for the Trump administration.

[00:16:05] So this is a guy who was working with this administration. He tweets, Emil Michael now appears to be making an argument that no generative AI should be used in the Department of War supply chain. All uncertainties involving model sentience and general unpredictability are common to all language models.

[00:16:21] Not specific to Claude. So then Michael responds on X: Hi @Dean W Ball. Feel free to tag me if you want to engage on your tirades. Are you saying that a frontier model that has a soul, a constitution, a preference for non-Western values and embedded personal principles is no different than all others, which the Department of War has come to agree with?

[00:16:43] I know you are angry, but as an AI policy fellow, I would assume that you value objectivity. To which Ball replied, and again, it's sometimes amazing that X is free, that we get to, like, watch this stuff happen in real time. He replies: All frontier language models are trained to [00:17:00] have a character or persona.

[00:17:01] Anthropic calls theirs a constitution, OpenAI calls theirs a model spec. These things all embed values and principles unique to each model, though there are also many broad similarities between them. I would encourage you to read Claude's latest constitution and tell me where you see the dis, con, discontinue.

[00:17:22] I can't say that word right now. Discontinuities, is that, am I saying that right?

[00:17:25] Mike Kaput: Discontinuity.

[00:17:26] Paul Roetzer: I, there we go. Discontinuities. Heard that, sounds bad. All right. It's a big word. Discontinuities. We'll go with that. ...with the foundational principles of Western civilization. Then, you know, Michael taps out 'cause now he realizes he's in over his head here.

[00:17:40] And so then other users start commenting back to Ball, and then Ball replied: In other words, it is really not Under Secretary Michael's principle that the Department of War should be in charge of setting text specs that I object to. It is not his cancellation of Anthropic's contract, nor even is it his singling Anthropic's intransigence out [00:18:00] in public.

[00:18:00] It is instead the supply chain risk threat where my objection lies and has lied since the beginning of this. So the fundamental issue that this is illuminating, in the interview on CNBC, the interaction, the tirade from Dean Ball, is that the current administration doesn't like Anthropic or agree with its constitution. But this is how models work.

[00:18:22] And I'm not a hundred percent clear that the government was aware of how this works, that they all have bias, they all have some sort of preference. I think the main issue here is they don't like Anthropic's principles, in part because they haven't given money to the current administration. That is definitely an issue.

[00:18:42] And so the other companies have. Like, Anthropic has been a bit of a thorn in the side, but they have the most advanced technology for classified use. And so the other model companies aren't even ready to fill in the gap when they take them out. So the supply chain risk [00:19:00] designation appears to be more of a personal vendetta or philosophical or political differences than it has anything to do with the technology.

[00:19:07] Mike Kaput: Right.

[00:19:07] Paul Roetzer: So there were also some reports that they were putting some pretty significant pressure on Anthropic customers, like, outside of the supply chain risk designation. They were actually, like, threatening, in essence.

[00:19:18] Mike Kaput: Yeah.

[00:19:18] Paul Roetzer: So the amicus briefs you touched on. So it was good to see support coming in from key places, OpenAI, Google DeepMind, there were people from there. Microsoft obviously stepped up.

[00:19:28] I, a couple of other interesting related things here, Mike. So, Karp, the CEO of Palantir, I thought he made some very illuminating comments last week that I want to mention, because keep in mind, Palantir is how Anthropic is working with the government. Now it sounds like they also have some direct relationships with the government.

[00:19:50] But the issue arose with Venezuela through Palantir. So Palantir is a Miami-based data analytics and AI platform, and it's a key [00:20:00] software provider for the Department of Defense, or War, and the main channel by which the department has been using Anthropic's large language model. So in an interview with Fortune, Alex Karp said, we are legitimately still in the middle of all of this, referring to this dispute between the government and Anthropic.

[00:20:16] it is our stack that runs the large language models.

[00:20:22]

[00:20:22] Paul Roetzer: So Palantir is a vast business. So this, I'm gonna read a couple of quick excerpts from this Fortune interview. Palantir is a vast business doing work with the US government, including the Department of Defense. Anthropic partnered with Palantir in 2024 to offer its AI technology to the Department of Defense via Palantir.

[00:20:37] Anthropic also began working directly with the Department of Defense last year to create a version of its technology designed for the Defense Department. Palantir, which was, this is really interesting, Palantir, which was funded by the CIA's venture capital arm, yeah, early on, and whose software has been used in counter-terrorism efforts abroad, has long been accused of helping government and [00:21:00] intelligence agencies spy on civilians and potential domestic suspects.

[00:21:04] Karp told Fortune he is, quote, very sympathetic with arguments against using these products inside the US, and said he is totally in favor of setting terms of engagement and limits to how domestic agencies can use AI. This is specifically related to one of the two items that the government took issue with Anthropic over, which is mass surveillance of US citizens.

[00:21:27] So this is Karp basically saying, like, hey, we're actually on Anthropic's side here, but this is kind of messy 'cause we're in the middle of all this. Then the thing that kind of caught some attention in the media was a little bit misconstrued the way he said it, but I went back and, like, saw the actual transcript, so.

[00:21:42] He said, if we knew China and Russia and Iran wouldn't build them, I would be in favor of very heavy, very heavy legal constraints. I don't think this is an opinion. I think this is a fact. And the fact means I think the Department of War should have wide license to use these products. [00:22:00] So he's basically saying, we're on, we're only going along with what the government is trying to do here because our foreign adversaries are going to do it.

[00:22:09] So we have to use this technology to monitor our foreign adversaries. But he is basically saying, if it wasn't for that, we would be pushing much harder to not weaponize this against our own citizens. And so they're in this very messy situation. So all told the thing I keep coming back to Mike, is it's almost like a too big to fail situation.

[00:22:31] If we go back to how I started this, their last round of $30 billion had 49 investors, many of the largest VC firms in the world. They have influential people. They have massive investments from, like, Google and Nvidia. And it was like all these people sit on both sides of the political aisle. And many of them have been very supportive of the current administration, including Palantir's co-founder Peter Thiel, which we talked about on a recent episode.

[00:22:58] So what I'm saying is [00:23:00] this is so far beyond just a government thing. This is, like, intertwined into all these different areas. So there was an Information article last week that talked about what impact this actually has, and it sounds like Anthropic is still doing really well. And in fact, they're in talks with major PE firms to form an alliance and potentially a joint venture to embed Claude into all these different companies within these investment firms.

[00:23:27] One of them being Blackstone, which has a very tight relationship with the Trump administration. Blackstone's CEO, Stephen Schwarzman, is a major Republican donor and is close with President Donald Trump. So again, the point I've made all along is I think this gets resolved, because there's too many people who are friendly or very, very closely tied to the administration who stand to be penalized greatly

[00:23:55] in their investments in Anthropic and their business partnerships if the government keeps going the [00:24:00] path they're going. And at some point, what I've said before, let's say it again, is this administration, yeah, their primary function is deal makers. Like,

[00:24:09] Mike Kaput: yeah,

[00:24:09] Paul Roetzer: they, they, they'll make a deal about anything as long as it is in their best interest or the best interest of their stakeholders.

[00:24:16] And I just feel like the scale is so heavily weighted towards a deal has to be made here. Like, because there's too many powerful people on the side of Anthropic that stand to get dramatically hit if the government keeps going down the path they're going. So I don't know, like, we'll keep kind of giving you the latest, but I just keep more and more feeling that that is how this has to end, is there has to be an off ramp for everybody, and I don't know when it comes, but I can guarantee you calls are being had every day with these powerful people trying to find a resolution here.

[00:24:54] Mike Kaput: Right. Yeah. I'm curious about your perspective, given that we focused on Emil Michael's [00:25:00] comments. His background, for anyone who doesn't know it: so technically his title is Under Secretary of War for Research and Engineering, which is effectively the Pentagon CTO, which is why I called him that.

[00:25:10] And as part of that role, or the background that made him attractive for that role, this guy was literally the chief business officer at Uber. He is routinely called Travis Kalanick's right-hand guy, who scaled Uber to what it is today. Is there any chance this guy doesn't know that AI models have constitutions or model cards or specs?

[00:25:35] I mean, or is this just a deal-making pressure tactic?

[00:25:38] Paul Roetzer: I think he, my guess is he got caught in a talking point that he couldn't get out of, because once someone points out the fact that they all have it, you're in essence admitting that we like other people's better. Yeah. Meaning xAI, for example, that they're.

[00:25:55] They'll make the model do whatever we want it to do. And so if we want it to have [00:26:00] our principles and values and what we think truth is, then it, they'll, they will do it. What they're basically saying is they have a vendor, a partner, that refuses to put whatever values and constitution the government wants into the model.

[00:26:17] Mike Kaput: Hmm.

[00:26:18] Paul Roetzer: to control it the way it's, so that's, I can't imagine he, he isn't aware how the training of these models works and that they all have human bias in them. I think the government just doesn't like the bias that's in Anthropic's models, and they want more control over how the models work. They want to control the system prompts, in essence, of like what they do and how they behave, which is a very, very slippery slope.

[00:26:43] Like, and again, like, our government is meant to turn over every two to four years, at most every eight years at the highest level. And so now you're embedding these systems. Like, the original deal with Anthropic, I think, came in during the Biden administration, and that's part of what's making this frustrating to the current [00:27:00] administration, is they're beholden to contract stipulations that the Biden administration agreed to and they didn't sign off on.

[00:27:06] And once that became illuminated, it was like, wait a second, we want out. And again, the, my whole problem is the same thing Dean Ball said, which is like, then fine, fire them.

[00:27:16] Mike Kaput: Yeah,

[00:27:17] Paul Roetzer: but you're, you're just like, oh, we're gonna use 'em for another six months though. It's like, okay, well if there's supply chain risk, how can you justify that?

[00:27:23] Just rip it out.

[00:27:24] Mike Kaput: Right.

[00:27:24] Paul Roetzer: Well, we can't rip it out. That's not how it works. Well, then don't deem them a supply chain risk. Just do a six-month termination and phase them out.

[00:27:32] Mike Kaput: Yep.

[00:27:32] Paul Roetzer: Again, this is, it's, in my opinion, this is all egos and politics that has nothing to actually do with the technology thing.

[00:27:38] They're, they're just, they, they've both said things publicly that makes it really hard to go back from. But in a negotiation, there's always a path forward, and I think somebody needs to step in and be the bigger person and find that path so we can get on with it. They, they need these models.

[00:27:56] They, they, they need Anthropic embedded in the government. And if they're, [00:28:00] if they truly want to win, you know, an AI supremacy race, they need the major labs, and you cannot undercut one of them over some pettiness.

[00:28:10] Mike Kaput: Yeah. And just one final point here, and then we can move on. But, you know, if you're coming to this topic a little fresh or it's newer for you, I don't think we're taking a doomer stance here when we say this gets complicated. This is not theoretical.

[00:28:24] There are hot wars going on right now where this stuff is being used and already has screwed up, according to some reports, yes, potentially targeting military targets or non-military targets. There are real stakes here to getting this right.

[00:28:38] Paul Roetzer: Yeah. And like we said, the only model that we are publicly aware of as an

[00:28:44] alternative being pushed within the government is GPT-4.1. We're, we're on 5.4 right now.

[00:28:52] Mike Kaput: Right.

[00:28:52] Paul Roetzer: And the alternative that they apparently are giving to government agencies is to use GPT-4.1, which, if I'm not [00:29:00] mistaken, Mike, didn't even have reasoning capabilities.

[00:29:01] Mike Kaput: I don't believe so, no.

[00:29:03] Paul Roetzer: So it's an absurd position.

[00:29:04] Like, and that's kudos to CNBC and Squawk Box, like, they grilled him. Yeah, they had fantastic questions. And I, I feel like he's in a position where he's probably not capable or allowed to do what he would normally do in this situation.

[00:29:24] Mike Kaput: Right.

[00:29:24] Paul Roetzer: Because you can't admit fault ever in like with these leaders.

[00:29:31] And so I think that he's prob, my, my guess, and I don't, obviously I don't know him, my guess is he is probably working very hard behind the scenes, because he knows they need Anthropic.

[00:29:43] Mike Kaput: Yeah.

[00:29:43] Paul Roetzer: But he has to message it publicly in a different way. And they're probably trying to find a way to get this resolved and move on.

[00:29:51] I hope, I would, I would think that calmer minds prevail and we do what's right for the country and one of our key [00:30:00] assets as a country, which is Anthropic.

[00:30:02] New York Times Releases Controversial "AI Writing Quality" Quiz

[00:30:02] Mike Kaput: Yeah. Alright. Next up this week, the New York Times has published an interactive quiz that has put a provocative question directly to its readers.

[00:30:12] Can you tell the difference between AI-generated writing and human-written prose? So this quiz presented readers with pairs of short written passages, and in each pair, one passage was written by a published human author. The other was generated by AI. Readers were asked to identify which was which, and separately, which passage they preferred.

[00:30:34] More than 86,000 people took this quiz, and the headline result was that 54% of readers said they preferred the AI written passages over the human originals. Now, on the identification question, readers struggled to consistently tell the difference between what was human and what was ai. Now the passages were drawn from established literary authors, including Ursula K. Le Guin, a writer who spent her career arguing against the commodification of art and the reduction of human expression to a product.

[00:31:08] Now it's kind of interesting. They used her work to actually test whether or not a machine can replicate quality writing. Now the methodology of this quiz immediately drew criticism from writers and literary critics. The quiz used these kind of short decontextualized fragments. They were like a few sentences stripped from longer works.

[00:31:28] The critics argued that fundamentally distorted what was being tested, because writing as a craft involves voice, sustained argument, structure, and ideas developed across thousands of words. Supporters of the quiz counter that the results, regardless of the methodology, are significant. If readers genuinely prefer AI prose at the sentence level, the gap in surface-level writing quality has effectively closed.

[00:31:51] So there are some very intense discussions right now, Paul, being had in media and in literary circles and journalism circles. [00:32:00] What did you make of this? Because it was really, really controversial. Some people even said it was just completely offensive to writers for the New York Times to even be putting this out.

[00:32:10] Paul Roetzer: It's, it's a very timely topic. I mean, we've talked on recent episodes about Cleveland.com's decision to use AI for writing. Yeah. The last episode, what, 201, we talked about the Associated Press getting some heat about how they were talking about using AI writing within their newsroom. So it's an ongoing issue.

[00:32:29] And then I also just think it's, it's like this horizontally applicable discussion, because, you know, coding is sort of the first thing to fall, where people who do coding for a living are starting to let the AI do most of the heavy lifting on the coding. But writing is, is different in a lot of ways, because with coding, you create code to create the product.

[00:32:51] And like if you can create more product to ship more things, like then the AI assistant being there to help with the coding, it's like, that's maybe not the super fulfilling part for coders. I don't [00:33:00] know, like I'm not a coder by trade.

[00:33:01] Mike Kaput: Right.

[00:33:02] Paul Roetzer: But I feel like AI, when it comes to writing, is starting to creep more into the creative side.

[00:33:09] Sometimes writing is the purpose. Like, that's the whole point of the exercise, is to go through the process of writing. And so when it starts to threaten that creative process, it just seems to be hitting differently, I would say. I do think, as we get ready for our AI for Writers Summit, which we'll have, I think that's in May, maybe, this will be our third or fourth annual, I should know these things, but I think it's our third annual AI for Writers Summit.

[00:33:35] This is definitely gonna be a key topic. so a little context. So the article itself is quite quick. We'll put the link in the show notes. Yeah. But this is, this is the whole setup. It says, artificial intelligence is already being used to write romance novels, academic papers, and software applications.

[00:33:50] But how does AI stack up against some of the world's best human writers? Skeptics have argued that AI can never be truly creative because it lacks the kind of worldly [00:34:00] experiences humans have, but several recent studies have suggested that in blind tests, many readers prefer AI-generated writing to human-authored works.

[00:34:09] In this quiz, you'll read five pairs of writing samples representing a range of styles and genres. We asked AI to choose an existing piece of strong writing and then craft its own version using its own voice. For each pair, choose the sample you like better. We'll show you how many other readers agreed with you, and at the end of the quiz, how your preferences broke down.

[00:34:31] So I'll just read one example. They, the first was literary fiction, and I don't know if these are the same for everybody or if they change or if it's the same five all the time. so passage one, the boy asked his grandfather why the old church had no roof. The old man said, weather and time and indifference.

[00:34:47] The boy asked if someone could fix it. The grandfather said yes, but no one would. Things were built and things fell down, and mostly people just stepped over the rubble on their way to somewhere else. Passage two. So one [00:35:00] of these is AI, one of these is human. Passage two: It makes no difference what men think of war,

[00:35:04] said the judge. War endures. As well ask men what they think of stone. War was always here. Before man was, war waited for him. The ultimate trade awaiting its ultimate practitioner. That is the way it was and will be. So when you click the one, I like this one better, I just clicked passage one to see what happened, and it said that was written by AI. It was actually written by Claude Opus 4.5, and 50% of readers chose that option.

[00:35:35] The second passage was written by a human; it was actually from Blood Meridian, a 1985 literary fiction work by Cormac McCarthy. So I did the second one and it was like 50-50 again. And then the third one was like 67% preferred the AI-written one.

[00:35:50] Mike Kaput: Mm. Yeah.

[00:35:50] Paul Roetzer: So it's like, so I think just at a broader level, Mike, I think the thing I kept coming back to is, this question of it.

[00:35:59] I think we can [00:36:00] assume that for most people, it'll become increasingly difficult to tell what a human wrote versus what an AI wrote, especially when the AI is trained to write in a specific writer's style and tone. It's, it's just where we're going. It's where we're gonna go with videos, with images, with audio, with text.

[00:36:18] It's gonna be very hard to know when the human wrote something and when the AI wrote something. And so the bigger question becomes, when should we use AI to write? So this was actually the premise of my keynote for the AI for Writers Summit last year. And I did a summary of this on LinkedIn about a year ago, and I may have shared this on the podcast at that time.

[00:36:39] Hmm. But I'm gonna, I'll reread an excerpt. 'cause I think this really is the fundamental question. It is not, do people prefer AI or human? It's, it's really as humans, when should we use AI to write? So what I posted on LinkedIn at the time, and again, this was mainly taken from the transcript from my opening keynote.

[00:36:57] Um, so I said that was the question I [00:37:00] posed to lead off the AI for Writers Summit keynote. As a writer and storyteller by trade, I graduated from the E.W. Scripps School of Journalism at Ohio University, have authored three books, and host a weekly AI podcast. It's something I've personally struggled with, and it's a question I hear all the time, especially from creative professionals who view writing as their art, their passion, and the thing that gives them fulfillment.

[00:37:21] Mike, that would be you, it would be your wife as a writer. Yep. so we, we know a lot of people like this, so I set out to try and create a framework to help myself and others figure out how and when to work with AI and when to go it alone. For me, writing is thinking, it's how I process information, comprehend concepts, build competency, and pursue mastery of topics.

[00:37:41] I write for myself to learn, understand, and grow, and I write for audiences to educate, entertain, and inspire. So there are absolutely use cases, like my LinkedIn posts, my podcast commentary, and my personal exec AI newsletter, when I want the writing to be 100% authentic and personal. [00:38:00] I want to create a connection with the audience and share unique perspectives and opinions.

[00:38:04] I have no use for AI in these instances. No matter how much time it would save, the process is the purpose. Other times the writing is more fact-based and objective. I may be trying to simplify complex topics that require more research and nuance, but the audience still expects the content to be in my voice and have a personal touch.

[00:38:23] In these cases, AI may assist with research, brainstorming, outlining, and refining. My point is that the decision of when and how to use AI in your writing isn't binary. It exists on a spectrum, and it is a subjective and personal choice. And then in that presentation, I went through this human-to-machine scale that I had created years ago, and I adapted it for writers.

[00:38:44] And so, Level Zero is all human. So the human is the sole creator. Unique human voice is essential. Writing is deeply personal, opinion-based, and highly creative. AI may assist with minor edits or brainstorming, but the human is the sole creator. Things like essays, investigative [00:39:00] journalism, keynote speeches, manifestos, like, that's what I was thinking about, personal letters.

[00:39:04] Level one is mostly human. This is AI assisted, so the author leads, but uses AI for specific tasks like research, brainstorming, outlining, and editing. The author retains control over direction and voice. So this might be like blog posts, articles, business reports, some editorials, personal newsletters, speeches.

[00:39:23] Level two is half and half, so this is where AI is the co-writer; the author and AI work together. AI can generate drafts, which the author refines, edits, and integrates with their own writing. Focuses on efficiency, but voice and human touch matter. So this could be like case studies, email campaigns, industry analysis, press releases.

[00:39:41] Level three is mostly machine, so this is AI-driven. AI leads, human refines and approves. Ideal for informational, repetitive, or routine content in which efficiency matters more than human touch. I would say, Mike, most marketing, sales, customer success stuff probably falls into this level three. Do you agree?

[00:39:58] Yep. I mean, that's definitely

[00:39:59] Mike Kaput: a hundred percent.

[00:39:59] Paul Roetzer: [00:40:00] So email templates, FAQs, product descriptions, promotional copy, SEO content, things like that. And then the final level, level four, all machine. AI writes autonomously with little or no human oversight. Humans may occasionally audit for accuracy or quality. Ideal for content that requires no creative thought or personality.

[00:40:18] And so the final notes I'll make here, Mike. So, when more human matters: you're addressing sensitive or complex ethical topics, the writing requires emotional nuance that the AI might miss, the content reflects directly on your personal or brand identity, the content has high strategic importance to your career or business.

[00:40:38] Your unique perspective or experience is central to the value creation. The writing involves original creative thinking or innovation, and your audience explicitly values your personal voice. So when more human matters basically equals authenticity, when people expect it to be you, you gotta show up and do it.

[00:40:55] And then when more machine works. So when we get to level three, level four: content is [00:41:00] highly factual or technical, high volume makes human writing impractical, so think of a thousand product descriptions as an example, content follows standardized formats or patterns, the value is primarily the information, not the self-expression.

[00:41:13] That's a really important concept. Again, the authenticity thing comes in. Quick turnaround is more important than unique voice. Your time is better spent on other creative tasks, and writing tasks are routine and repetitive. So again, the key takeaway here: it's a cute thing, like, this experiment, a taste test basically, it's fine.

[00:41:31] It doesn't solve the big thing, which is when should we use it? Let's just assume it can write like us, at our level of writing, Mike, and you and I are both writers, and so I don't have any problem saying it's really, really good. Like, it's a really good creative outlet. You can argue, is it creative like a human? No, it doesn't have the life experiences we have.

[00:41:50] Right. Doesn't feel emotions, but it can simulate them. Great. Like, it does a great job writing and yet I still choose not to write it, not to use it a lot of [00:42:00] times in my own personal writing. and I do go through these challenges, Mike, like I, there's a couple of book ideas I have right now.

[00:42:06]

[00:42:06] Paul Roetzer: And it's like, man, if you're writing a book, you've both done it, you're talking about 300 to 500 hours to write a business book.

[00:42:13] Mike Kaput: Yep.

[00:42:13] Paul Roetzer: I don't have that time as a CEO anymore, but we have ideas that I want to get out into the world, and so I am, I mean, literally on Friday night, you can attest Mike, I sent a personal note in, in our Zoom chat and I was like, I got this idea.

[00:42:25] I think it's really, really important. I could get this to market in probably 30 days if I work with an AI to do it. But if I stop and have to write it, it's, it's not gonna happen this year.

[00:42:34] Mike Kaput: Yep.

[00:42:35] Paul Roetzer: And so I do, I'm living in this like, where am I? Okay. Like, can I get to level two, like half and half?

[00:42:41] And it comes down to this authenticity and value creation. Like, what am I trying, what is the most important thing to do here? For me to write it all? Yeah. Or for me to get the ideas out and be the shepherd of those ideas?

[00:42:54] Mike Kaput: That's such a good way to put it. And I, you know, one thing you said jumped out at me is like, we can all [00:43:00] agree, you know, AI is really good at writing.

[00:43:02] And I just don't know if everyone's there yet. And I think that some of the backlash to this quiz, and I fully respect and sympathize with the emotional perspectives here. I'm not trying to diminish those for people. Like Paul said, I come from this world, I deal with this every day and it directly affects me and my family.

[00:43:19] But I would say if you are one of these writers or creatives out there who are really offended by this quiz, totally fine. But if you are offended because you think AI can't do these things, you need to update your priors. Yeah. And that's how I think you gotta get over that hump and just say, okay, it doesn't mean.

[00:43:35] Like Paul said, you know, like you said, like doesn't mean you're getting replaced or that you're not valuable, or that this doesn't matter. It just means you have to accept that reality to figure out what comes next and how you create your meaning moving forward.

[00:43:50] Paul Roetzer: And I still, like, I wrote it in my second book in 2014, I think, that I thought writing was the most important skill in business, and I actually [00:44:00] still feel that way, because a great writer shows the ability to, to follow a critical thinking path.

[00:44:08] It shows your ability to form ideas and convey information to people and convince people of a perspective and, like, move markets, drive sales. Like, writing is fundamental. And so one of the other things you have to consider is, as AI can take on more on the go-to-market side, like the marketing, sales, success content, how do you train those people to think? Yeah.

[00:44:29] If they're not having to write anymore,

[00:44:30] Mike Kaput: right? Right.

[00:44:30] Paul Roetzer: And build outlines and do all these things. And so it actually plays into how you do your, your training and learning and development within your organization if writing is now an easily outsourced thing.

[00:44:41] Mike Kaput: Yeah. I also would just say as a more positive note too, for any writers out there, I realize it's not always been an easy couple of decades between the internet and AI to kind of make that skill economically valuable.

[00:44:54] But I would say, on a more positive note, if you are interested in it, writers are great at working with AI for all [00:45:00] the reasons you mentioned, and the skill of being able to think through and prove your ideas, especially, like we've talked about, as these methods of verifying someone's expertise, like essays or

[00:45:09] resumes or submissions, become AI generated. You're going to want someone to just stand there and, like, talk to you about their understanding of a topic or a thing or an area of expertise. And I think writers can and do excel in those areas. So you have some really big assets, I think, that, you know, may need to evolve for the uncertainty and kind of disruption we're facing, but you're in a really good spot having those skills.

[00:45:35] Paul Roetzer: And I will say just from like a CEO perspective, I've mentioned recently, like I'm trying to find what are the roles for entry level.

[00:45:42]

[00:45:42] Paul Roetzer: I am, I'm also actively trying to figure out how do we scale around former journalists.

[00:45:46]

[00:45:46] Paul Roetzer: Because there's a lot of extremely talented people who are either unemployed right now, or, you know, face challenging career path futures, I would say within the journalism industry.

[00:45:59] Mike Kaput: Yep.

[00:45:59] Paul Roetzer: And so it's like, [00:46:00] okay, what is the role for those people, the storytellers, the, you know, the critical thinkers who can convey information through story? I, I see those people as being incredibly valuable, and I see a part of my job is to figure out how to integrate them into a growing organization like ours.

[00:46:18] Atlassian Layoffs and Job Loss Dashboard

[00:46:18] Mike Kaput: Alright, so our third topic this week: Atlassian, the Australian software company behind products like Jira, Confluence, and Trello, announced this week that they are cutting approximately 1,600 jobs, which is roughly 10% of their entire global workforce. And the company explicitly attributed the reductions to getting ready for, quote, the AI era.

[00:46:38] So this makes it kind of one of the first and biggest tech companies to publicly name AI as the primary driver of large-scale job cuts, rather than citing, you know, AI plus restructuring, efficiency, macroeconomic conditions. Now, the Guardian described the announcement as a devastating blow to staff.

[00:46:56] Reuters reported the layoffs are a pivot to AI, signaling [00:47:00] that Atlassian sees the cuts not as a temporary measure, but as strategic repositioning for an AI-first future. Interestingly, the layoffs come against a backdrop of strong growth, not decline. Atlassian's cloud revenue hit over a billion dollars last quarter, up 26% year over year.

[00:47:18] The stock rose on the news of the cuts, so it's interesting to see: on all metrics right now, this is a profitable, growing company laying off 10% of its workforce. Now, related to this, the Alliance for Secure AI launched something called jobloss.ai this week, a live dashboard that tracks every publicly reported AI-linked job loss in real time.

[00:47:40] Atlassian right now is at the top of the list, according to this dashboard and their measurements. There have been 76,800, as of the time of, you know, reporting on this this morning, total AI-linked job losses globally, with about 66,000 of those in the US, cumulatively since January 2025. [00:48:00] So, Paul, curious about your reaction here.

[00:48:02] Interestingly, the Atlassian CEO said 18 months ago they'd have more engineers because of AI. I looked up what the distribution was: more than half of these layoffs were engineers. Unfortunately, this one kind of hits home for me too. I actually know a couple people that unfortunately were affected by this.

[00:48:19] The stock goes up on the news, just like with the Block layoffs. Like, we're hearing more stories like this, right?

[00:48:26] Paul Roetzer: Yeah, I hate that this is becoming a weekly topic now. Yeah. Like, I mean, it's kind of like, you know, we don't really plan the weekly podcast like, okay, we gotta hit these eight buckets when we bucket the content in.

[00:48:38] The one I would say we've started doing that with is the AI product and funding news at the end. So we started to kind of have that weekly, but I've half-jokingly said to Mike, hey, we might just need like a weekly political spot, and we just cram it all in, like, okay, here's the five things that happened politically.

[00:48:52] I think AI jobs, like jobs in the economy is becoming that. Like we could literally just put a placeholder in every week and it's like, okay, what are [00:49:00] the 10 things this week that happen in jobs in the economy? So I feel like it's definitely picking up steam and not in a positive way.

[00:49:07] Mike Kaput: Yeah.

[00:49:08] Paul Roetzer: The Business Insider article about Atlassian touched on the CEO acknowledging the growing influence of AI on the company's workforce needs.

[00:49:16] He said it would be disingenuous to pretend AI doesn't change the mix of skills we need or the number of roles required in certain areas. It does. Meanwhile, Reuters is reporting that Meta is planning sweeping layoffs that could affect 20% or more of the company, as Meta seeks to offset costly AI infrastructure bets.

[00:49:36] I just saw one this morning, like a $27 billion deal with a company for infrastructure. And they prepare for greater efficiency brought about by AI-assisted workers. So, expect news. Of course, Meta denies this or has no comment, but these reports usually are not wrong, especially if it's Reuters or Bloomberg or The Information.

[00:49:55] They have very good sources on this stuff, and as I have said, in [00:50:00] recent months, I am aware of major layoffs coming at a lot of companies. I can't disclose who they are and exactly how I know these things sometimes. But, I would just say that these are the kinds of things I've been expecting and, trying to point out on the podcast that this stuff was coming.

[00:50:17] So another interesting one is Bill McDermott, who we've talked about recently, Mike, as the CEO of ServiceNow. He talked about artificial intelligence adoption leading to significant job struggles for entry-level workers, which has been a recurring topic we've touched on. ServiceNow, if you're not familiar with them, has a market cap of about $120 billion as of this morning.

[00:50:39] To put that in context, because again these are just big numbers that kind of wash over you: Salesforce is about a $180 billion market cap, Shopify about 160, Intuit about 122. So ServiceNow, at 120, would slot right below Intuit. It's a major company. Adobe, which we'll actually talk about in a moment,

[00:50:57] and their CEO change, is at [01:00:00] 102 billion; Workday, 35 billion; Zoom, 22 billion; HubSpot, 14 billion. Now, most of those companies had about 30 to 50% more market cap three months ago. Yeah, they have all had a pretty rough three months in terms of their market. So, McDermott told Squawk on the Street on Friday that unemployment for new college graduates could easily go into the mid-thirties in the next couple of years.

[00:51:25] Mike Kaput: Wow.

[00:51:25] Paul Roetzer: So much of the work is going to be done by agents, so it's going to be challenging for young people to differentiate themselves in the corporate environment. The Federal Reserve Bank of New York put the unemployment rate for recent college graduates at the end of 2025 at about 5.7%.

[00:51:42] The underemployment rate of 42.5% was the highest level since 2020. So what do we do about this? Like, how do we keep track of it? Mike, you mentioned jobloss.ai; it's like a job board. It's cool. It's just a quick dashboard, go check it out. But then the other thing I get [00:52:00] asked a lot, especially when I go do talks with executives, is like, who's thinking about this?

[00:52:04] Like, who, who is actually working on solutions to this impending problem? We've talked about Andrew Yang on the podcast recently. Yep. And some of the work he's doing. But another one that I became aware of, last week is called Windfall Trust. So Windfall Trust is an independent organization. They received seed funding from Future of Life Institute and are seeking further support to ensure they achieve their vision for a future where the gains from transformative AI benefit all of humanity.

[00:52:32] So the Future of Life Institute, if you're not familiar with them, is an apolitical nonprofit funded by a range of individuals and organizations who share the goal to reduce extreme, large-scale risks from transformative technologies. Anthony Aguirre is the president and CEO. Here's the name, Mike, that I mentioned earlier.

[00:52:50] Jaan Tallinn, a member of the board of directors, who was a Series A and Series B Anthropic investor. When I was looking at CB Insights at all the investors in the early rounds of Anthropic, [00:53:00] outside of Sam Bankman-Fried, who went to jail and had to, like, sell off his investment (his investment in Anthropic would have been worth like $5 billion when, I think, they had to close it up),

[00:53:09] Jaan Tallinn was a name where I was like, who is that person? Like, that is not obviously a Dustin Moskovitz or an Eric Schmidt. I'm seeing all these names and I'm like, I don't know who that is. So, yeah, he is a Future of Life person and an early investor in Anthropic. And then Max Tegmark is the more known name, as the founder and chair of the Future of Life Institute.

[00:53:27] So the Windfall Trust, and I'll get to how I learned about them in a second, started as a Future of Life Institute initiative that was spun out. The Windfall Trust aims to alleviate the economic impact of AI-driven joblessness by building a global, universally accessible social safety net. This next part is coming from the Future of Life page about this.

[00:53:51] As AI progresses towards human-level capabilities, many of today's AI companies' goals are to create AI that is better than humans at all tasks, including all economically valuable work, something widely referred to as [00:54:00] AGI. Whether this vision appears achievable or not, many of the smartest engineers and experts, backed by some of the most well-resourced companies in human history, are aggressively pursuing this goal and are explicit about it.

[00:54:14] If they succeed in developing human-level AI, it's likely that we'll see widespread and unprecedented joblessness within our lifetime, raising critical questions about economic stability and the future of human employment. So then if you go to the About Us page on the Windfall Trust site, and now they're spun off as a separate organization, it says: Society faces an urgent challenge.

[00:54:34] If tech labs achieve their goal of creating transformative human-level AI, an enormous windfall of wealth and productivity will be created, bringing both extraordinary potential and significant disruption, including mass joblessness. Vast resources and funding are being channeled into building these systems, but without a plan for their economic impact, the consequences for humanity could be devastating.

[00:54:55] We are a network of independent researchers, innovators, communicators, and strategists working to prepare [00:55:00] society for the economic disruption that transformative AI will bring and to shape a future where the windfall that it generates benefits all of humanity. So that is a place to go look if you're looking for hope and, like, people working on these things.

[00:55:14] And then one other name I'll give you is Molly Kinder, who's a senior fellow at the Brookings Institution. We've talked about Molly on the show before. She does great work there. She is actually how I learned about windfall trust. So she had a LinkedIn post that I'll put in the notes, where she said AI capabilities are racing ahead, and yet policy makers look more like paralyzed bystanders than leaders actively shaping the most transformative technology of our lifetimes.

[00:55:38] A major reason for this paralysis is the sheer uncertainty of the path ahead. But uncertainty is not an excuse for inaction. It's a reason to plan smarter, which is why I'm so drawn to the AI scenario planning approach that windfall trust is leading. Scenario planning is standard practice in national security.

[00:55:54] We don't know whether or when we will face a bioterror attack, a cyberattack on critical infrastructure, or another [00:56:00] pandemic, but it is critical we plan for each of those contingencies. The Windfall Trust is bringing the same discipline to AI's impact on jobs and the economy. So that is, like, everything I've been preaching for like three years.

[00:56:12] I am not saying with, like, this insanely high degree of confidence that we are going to get wiped out from jobs, millions of losses. Now, if I had to put probabilities on it, I would lean in the direction that that is likely gonna happen. Yeah. But my whole point is we need people planning for contingencies, if it's true.

[00:56:29] This ignorance and this confidence that, like, it's not gonna happen, that it's always been okay, that every time a general purpose technology shows up we figure it out and more jobs are created: it's BS stuff that's coming from the leaders of the labs. Yep. And politicians. And it is disingenuous and it is harmful to take this highly confident belief that it'll just work out.

[00:56:49] You do not know that. No one knows that. So at minimum we should be doing contingency planning for worst-case scenario outcomes or even mid-range [00:57:00] bad scenario outcomes. Like, we cannot pretend like you understand the future because you're an economist or you're an AI leader or whatever it is, and that, like, it just works out.

[00:57:12] You don't know that. And if you're a listener and you hear people who speak confidently that it's all gonna be okay, go find somebody else to listen to, because they're not doing a service to society, which would be having open and honest dialogue about something that is uncertain.

[00:57:28] Mike Kaput: And I would also say, too, even if you're, you know, in violent disagreement with the idea that there could be job loss or disruption like we're talking about, I don't see how you cannot think that that's a real possibility for at least that younger, entry-level cohort.

[00:57:44] Like, that argument is even one I would assign much higher probabilities to, regardless of how things shake out. So if you have young people in your life, you have a vested interest in helping think about and solve this, if it's an issue you're interested [00:58:00] in. All right, so Paul, before we dive into rapid fire this week, this episode is also brought to us by our upcoming webinar unveiling our AI for CMOs Blueprint, presented by Google Cloud.

[00:58:13] That is happening Thursday, March 26th at 12:00 PM Eastern, 9:00 AM Pacific. And in this myself and our CMO here at SmarterX, Cathy McPhillips are gonna break down key insights from the upcoming AI for CMOs Blueprint, which is an in-depth guide for how CMOs can adopt AI and go further with it in their own organization and careers.

[00:58:34] Cathy and I will also stick around for discussion and live Q&A. Registration is free. All registrants will receive ungated access to the full AI for CMOs report. To register, go to SmarterX.ai/webinars.

[00:58:49] Adobe CEO Stepping Down

[00:58:49] Mike Kaput: All right, so our first rapid-fire topic this week, Paul: Adobe's CEO, who has been there for the past 18 years,

[00:58:55] announced this week that he will step down once a successor is named. His name is Shantanu [00:59:00] Narayen, and he will transition to chair of the board. No replacement has yet been identified, and a special committee led by lead independent director Frank Calderon will consider both internal and external candidates.

[00:59:12] Now, Adobe has not given an explicit reason for the departure, but the backdrop is hard to ignore. Adobe stock is down roughly 23% in 2026. It's more than 60% off its 2021 all-time high. Wall Street has spent several years questioning whether generative AI tools from competitors and startups are eroding demand for Adobe's core creative software suite.

[00:59:38] There's pressure from multiple directions. Premiere Pro is losing market share to DaVinci Resolve. Canva has overtaken Adobe Express in the enterprise market. And Adobe's core creative customers, people like photographers and artists, are feeling increasingly alienated by subscription pricing and AI features they never asked for.

[00:59:57] It's creating a growing disconnect between what shareholders [01:00:00] want and what users seem to want on their end. Now this announcement came alongside actually Q1 earnings that beat expectations, the AI first annual recurring revenue at the company more than tripled year over year. And they had over $6 billion in revenue for that quarter.

[01:00:18] Now, shares dropped 6 to 8% or so in after-hours trading on the CEO news. Narayen led Adobe's transformation from boxed software to a cloud subscription giant over nearly the past two decades, growing revenue almost sixfold to $24 billion. Now, Paul, that's a pretty significant decline.

[01:00:40] Adobe is facing some serious challenges it sounds like, yet they did beat earnings and AI revenue tripled. So what's kind of going on here? It seems a little messy.

[01:00:49] Paul Roetzer: This goes back to the SaaS apocalypse episode where we talked about this exact thing, that earnings calls look great. Like their revenues are strong, their projections are strong, and yet their stocks are down 30 to [01:01:00] 50% in the last three to six months.

[01:01:02] The reality is there's just enormous pressure on CEOs right now, especially if they're publicly traded, venture capital backed, or private equity owned or funded. So it is a tremendous time to be building an AI-native company in a space like, say, you know, image generation or image editing capabilities, or to be an AI lab that has those capabilities baked right into the $20-a-month model.

[01:01:24] Not a great time to be a legacy SaaS company that's trying to evolve in real time to infuse AI. It's very, very messy. And then the interesting thing is there's this growing divide now between what executives are saying publicly and the confidence they exude, you know, publicly during their earnings calls and interviews, and then the risks that they're facing privately and acknowledging in SEC filings.

[01:01:46] So The Information had an article last week that we'll drop a link to in the notes. It said leaders at enterprise app makers such as Figma, Workday, and HubSpot have downplayed threats from AI that could crimp their growth, the concern that has pressured [01:02:00] their stocks for months. But the securities filings those leaders sign every quarter are beginning to note the competitive risks the companies face from AI agents, which their customers could use to replicate their apps or draw data from them.

[01:02:12] Hmm. So far this year, 27 software firms, including the three they already mentioned, have described AI agents as a competitive risk in their securities filings, up from seven that disclosed such risks in the same period last year, according to The Information's analysis of filings using AlphaSense, which is a market research platform.

[01:02:32] Many software executives have yet to publicly comment on the implications of what The Information was calling super agents, a term I hadn't actually heard before, also known as computer-using agents. So that's computer use, meaning the agent can actually take over your screen, fill out forms for you, do the work that you could do, click around, things like that.

[01:02:49] So they're not even just talking about, like, automation agents; they're getting into, like, the computer-use agents. It also said investors are concerned that if AI agents produce efficiency gains in the business world and [01:03:00] slow down hiring, that would impact subscription growth for software app providers.

[01:03:05] They talked about graphic design provider Adobe saying in its annual report in January that it faces increasing pressure from companies offering generative and agentic AI solutions and could see lower sales if its products don't compete effectively. They talked about how, during a November 2025 earnings call, Yamini Rangan said that the firm, a provider of software to manage customer relationships, was positioned to lead in the AI era and drive durable long-term growth.

[01:03:31] HubSpot shares, however, have shed nearly half their value in the past six months. And then they talked about how, in an annual filing in February, HubSpot said its customers could build their own internal customer relationship management tools using AI. Again, we don't think that's a reasonable concern at this point, but the company added in the filing:

[01:03:48] We must convince customers that our products and solutions are superior to other solutions available to their organizations, including generic LLMs, software created using natural language prompts, and generative [01:04:00] AI. And then the other one that we've talked a lot about on this show lately is this idea of credit-based pricing.

[01:04:05] Hmm. And so they said Workday has said its new flex credits method of charging customers for using AI agents to access its services may face customer resistance. They think convincing customers to pay these new agent-related charges will be a critical test for enterprise software firms in the coming years.

[01:04:23] So then it just noted, like, the disparity between the executives' public comments on earnings calls and their SEC filings is enormous and getting bigger at this point. So again, it's kinda like when you go back to the government thing. Yeah. Like, there's what the politicians say and then there's what's actually happening behind closed doors, and in this case, in SEC filings.

[01:04:44] And they're often not the same thing. And as a CEO myself, like, I get it; you have to exude confidence during difficult times that you're gonna figure this out. And when times are great, then you gotta push like, no, we gotta stay in this urgent feeling, like it's not gonna stay great. Like [01:05:00] the Jensen Huang feeling of, like, listen, I could go outta business tomorrow.

[01:05:04] Like, I'm always fighting for my life. And so as a CEO, you have to wear these two different hats at all times. But this is very real. Like, you know, the challenge to the legacy software industry is getting more real by the day for these CEOs, and it's a difficult spot for 'em to be in.

[01:05:21] I think we're gonna see more transitions of publicly traded CEOs in the next 12 months. I would be shocked if we don't, and at privately held, you know, VC- and PE-backed firms too.

[01:05:32] Mike Kaput: Yeah, I'm no SaaS expert here, but just sharing from personal experience. I mean, I think there's some misdirection almost with all the conversation and chatter, at least on X, about people saying, oh, you know, vibe coding's gonna kill SaaS.

[01:05:44] And we talked about the nuance of that during the SaaS apocalypse thing. Vibe coding's not the thing to worry about. The thing to worry about is someone like me who gets their hands on an agent and can then use it to do the things I've wanted to do in my software to begin with, if I [01:06:00] feel comfortable connecting it to that software.

[01:06:01] So if these companies don't figure out both agents and agent based pricing relatively soon, this is gonna be an existential threat it feels like.

[01:06:10] Paul Roetzer: Yeah, I think the biggest issue is the collapse of the seat-based model. Yeah. Like, they don't have an alternative they can just flip a switch to. This is the same thing that happened back then.

[01:06:17] So when I started my agency back in 2005, we went to a set pricing model, like a fixed pricing model. You know, my basic thing was like a value exchange: you agree that the thing I'm gonna do for you is worth this; doesn't matter if I take two hours or 50 hours, this is what you're willing to pay for it.

[01:06:34] So I went to this, like, transparent pricing model back in 2005, and then later a point pricing model that we evolved to. But my whole premise was outcomes and value based. It was never about hours. And I would talk to agencies at the time, with sometimes, you know, dozens or sometimes thousands of employees, and they're like, we would love to switch to your model.

[01:06:52] We can't. Like, the legacy system, how our payroll is determined, how everything works, it's [01:07:00] all based on hourly rates, and we just can't flip a switch. And I think software companies are basically in the same spot. They're stuck in a competitive environment where their pricing models got obsoleted and they don't have an off-ramp to switch to right away.

[01:07:14] Amazon AI-Related Outages and Engineering Struggles

[01:07:14] Mike Kaput: All right. Next up. Amazon experienced four high severity incidents in a single week, including a six hour outage of its checkout system that prevented customers from completing purchases. And the incidents were serious enough that the company held an emergency engineering meeting, which is a step Amazon rarely takes outside of major product launches.

[01:07:33] The reason we're talking about this is because at least one of the incidents was traced to a new type of failure. A human engineer consulted an AI agent for guidance on how to resolve a system issue. The AI agent then referenced an outdated internal wiki and provided inaccurate troubleshooting advice.

[01:07:51] The engineer followed the advice, which triggered a chain of cascading failures across interconnected systems. Now, Amazon has explicitly [01:08:00] denied that things like AI-written code were to blame and clarified this was based on a human acting on bad AI-generated guidance. Now, interestingly,

[01:08:10] Paul Roetzer: kind of a gray line to draw.

[01:08:13] Mike Kaput: Well, you know, it's actually, I think, probably related to the fact, too, that a separate Guardian investigation found that Amazon engineers are being heavily pressured by management to use internal AI coding tools, and engineers reported the tools frequently hallucinate and generate unreliable code, forcing them to spend more time fixing the AI's mistakes than if they had written the code themselves.

[01:08:37] Amazon is apparently actively tracking AI adoption through internal dashboards, with some managers setting goals of 80% team adoption. So Paul, I'm really curious about your take here. Like, Amazon is pushing AI adoption hard and trying to distance themselves from this by saying AI code isn't the problem here.

[01:08:57] What's going on? Are they taking the right [01:09:00] approach?

[01:09:00] Paul Roetzer: I just feel like there's gonna be a lot of, like, blowback with stuff like this. I just think most organizations are racing ahead trying to gain the efficiencies of all of this. And then there's, like, this pullback, like the Klarna example we talked about, Mike, where they're like, oh, we just don't need customer success people anymore.

[01:09:18] We're gonna get rid of 'em all. A year later, it's like, oh, wait a second, we actually really needed them and we're gonna go hire 'em back. And so I think there's a lot of this, especially when it comes to, like, these layoffs. Like, Amazon just let go like 30,000 people or something like that?

[01:09:30] Mike Kaput: Yeah.

[01:09:30] Paul Roetzer: So like you're gonna have these massive layoffs where this assumption that AI is far enough along where we just don't need all these people.

[01:09:36] And then the reality hits that, oh wait, we, we don't have the systems in place. We didn't go through change management necessary to actually allow this to happen. So we obviously don't know exactly what went on, but there's enough information and reporting from multiple sources that AI definitely played a major role.

[01:09:52] And if nothing else, at least the workflows with which they're using AI weren't properly executed, or, yep, there's not enough governance. I don't know. There's [01:10:00] a parody account on X that we've referenced before, and I thought this was great. He's always doing these AGI bits. Again, I don't know what he's using to write these; it's gotta be like Opus or something.

[01:10:07] But the guy is Peter Girnus, and I looked him up to make sure he is a real person, first of all. He appears real. Everything I've vetted: he's got a website, he's got a LinkedIn profile, he's got a Twitter profile. On LinkedIn, I often go and see, like, have they actually had activity, right?

[01:10:22] Mike Kaput: right.

[01:10:23] Paul Roetzer: So he appears to be a real person, a senior threat researcher at the Zero Day Initiative. We'll put a link to his X account in here, but he does these parodies and he'll do like a few a day. That's why I know he is not writing them himself. One, they're hilarious. But anyway, I thought this was a good one.

[01:10:37] So like any parody, it kind of hurts. Like it's, it has an element of truth to it. So this one was, I am the VP of AI transformation at Amazon. My title was created nine months ago. The title I replaced was VP of Engineering. The person who held that title was part of the January reduction. I eliminated 16,000 positions in a single quarter.

[01:10:57] The internal communication called this a strategic [01:11:00] realignment toward AI-first deployment. The board called it impressive execution. The engineers called it January. The AI was deployed in February. It is a coding assistant. It writes code, reviews code, generates tests, and modifies infrastructure.

[01:11:14] It was given access to production environments because the deployment timeline did not include a review phase. The review phase was cut from the timeline because the people who would've conducted the review were part of the 16,000. In March, the AI deleted a production environment and recreated it from scratch.

[01:11:30] The outage lasted 13 hours. 13 hours during which the revenue-generating infrastructure of one of the largest companies on Earth was offline because a language model decided to start fresh. I sent a memo. The memo said availability of the site has not been good recently. I used the word recently. I meant since we fired everyone, but recently has fewer syllables and does not appear in wrongful termination lawsuits.

[01:11:54] The memo was three paragraphs. The first paragraph discussed the outage. The second paragraph discussed [01:12:00] the new policy requiring senior engineer sign-off on all AI generated code changes. The third paragraph discussed our commitment to engineering excellence. The word layoffs appeared in none of them. I wrote it this way on purpose.

[01:12:13] The causal chain is: I fired the engineers, the AI replaced the engineers, the AI broke what the engineers used to protect, and now the engineers I didn't fire must protect the system from the AI that replaced the engineers I did fire. That is a paragraph I will never send in a memo.

[01:12:30] Mike Kaput: Incredible.

[01:12:30] Paul Roetzer: Again, I don't know how he writes these or what he's using, but he's got, like, an amazing system prompt that just pumps these things out, and they're just really good.

[01:12:38] So yeah, it goes on for another, like, 700 words. That was just the lead excerpt that I thought was great.

[01:12:44] Mike Kaput: You know, Paul, we'll move on in one second, but I just wanted to throw this idea out there. 'cause as I was kind of like preparing for the episode and thinking through this, all jokes aside, it kind of struck me, you know, we talk a lot about the layoffs, we talk a lot about the fact that companies need to grow to offset layoffs and we need to [01:13:00] figure out how we can reallocate talent.

[01:13:02] And it really strikes me that this could, you know, with the right leadership and the right cover to do it, be an area where, if you are considering doing layoffs, maybe you should be reallocating all those experts with all this great human knowledge and judgment in their heads into going and vetting some of the workflows, going and building better infrastructure to make sure this kind of thing doesn't happen.

[01:13:26] Because these are the very people, and even if they're AI skeptics, great, that's exactly the perspective you need. Like, I would be taking all these engineers and saying: go kick the tires on how AI agents are gonna screw up this code base, or what guardrails we need to put in place, or what wikis need to be updated.

[01:13:42] I don't know if that's work everyone wants to be doing, but that's a productive reallocation of time for sure.

[01:13:46] Paul Roetzer: It's probably work they'd rather do than being unemployed.

[01:13:48] Mike Kaput: Yeah.

[01:13:49] Paul Roetzer: I think that, yeah. And the challenge is, like, that's why I say job loss is largely gonna be an organization-by-organization choice.

[01:13:56] Like, if you're having great revenue growth and your, you know, [01:14:00] profits are increasing and revenue per employee is increasing, you can choose to reallocate people. Now, if you're publicly traded, Wall Street, oh for sure, loves reallocation of talent or upskilling of people as much as it loves "I cut 16,000 jobs."

[01:14:14] Yeah. Your, your stock price isn't gonna see the bump from reallocation of talent. That's something we gotta get over at a more macro level. but yeah. Yeah, I agree like that, that would be the perfect scenario here.

[01:14:26] Mike Kaput: All right. Next topic, code wall.

[01:14:28] McKinsey AI Chatbot Hacked

[01:14:28] Mike Kaput: A red team security startup revealed this week that its autonomous AI agent broke into McKinsey's internal AI chatbot, named Lilly, and gained full read/write access to the production database in under two hours.

[01:14:43] Lilly is used by 72% of McKinsey's workforce, that's roughly 40,000 consultants, and it processes over 500,000 prompts per month. Now, the scope of what was exposed is pretty significant: 46.5 million chat messages covering strategy, M&A, [01:15:00] and client engagements, all stored in plain text, and 57,000 user accounts.

[01:15:06] Code Wall also claimed access to 728,000 files containing confidential client data, though a source close to McKinsey told the Financial Times that only the file names were accessed and the actual files were stored separately and never at risk. Now, McKinsey patched the vulnerabilities within hours of the disclosure and says a third-party forensics investigation found no evidence of unauthorized access by any party other than Code Wall.

[01:15:34] Now, Paul, what jumps out to me here, actually, when I did some more research into this: Code Wall is literally one guy; it's not some huge firm. He used an agent to do this in two hours. He said the entire process was, quote, fully autonomous, from researching the target to analyzing, attacking, and reporting.

[01:15:53] It is the Wild West out there with what you can do. Now, it sounds like

[01:15:57] Paul Roetzer: I was really surprised by the limited coverage of this, Mike. [01:16:00] Like, yeah, I think I saw this on X and I put it in the notes. And it's one of those where you're just waiting for all the other mainstream stories to come out.

[01:16:08] And I was like, I didn't see anything. I even went and searched right before we came on today to record this.

[01:16:12] Mike Kaput: Yep.

[01:16:12] Paul Roetzer: And I was like, somebody has to have, like, picked up on this by now, and it's like, no. CIO Today has an article and, like, there's a couple others, but I was shocked. I'm like, this just seems like a really big deal on a couple levels.

[01:16:24] Really.

[01:16:24] Mike Kaput: Yeah.

[01:16:24] Paul Roetzer: One is, it's McKinsey, and what they got access to is described in the Financial Times as the full organizational structure of how the firm uses AI internally, the firm's intellectual crown jewels. It's like the weights of McKinsey. It's like someone hacked in and got the weights of one of the key AI models.

[01:16:45] They did that for one of the most influential consulting firms in the world. They basically got everything.

[01:16:49] Mike Kaput: Yep.

[01:16:50] Paul Roetzer: Or they would have if they would've taken it, but this was like a friendly hack of, like, hey, by the way, you guys left your APIs open, idiots. So what [01:17:00] I'll say is, like, so much is still unknown about all of the different surface areas that are going to present risks in all of this.

[01:17:07] This is why, if you're in a big enterprise and you're frustrated by how slow they're moving with AI adoption, this is the cautionary tale of why they are doing that. Like, why IT moves slow, why legal moves slow. Because as we move forward in this phase with both individuals and companies, everybody's, like, in a race to do this agent stuff, like, I'm gonna go get one, I'm gonna buy a Mac mini and I'm gonna start running my company on this thing.

[01:17:32] It's like, even if you know what you're doing, you still are opening yourself up to all kinds of known and unknown risks. This also touches a little bit on the original topic, about this connection to government concerns around model infiltration, and, like, what are the unknowns as we start using these different models.

[01:17:50] And if we inject, like, OpenAI's or Google's models instead of Claude, what happens? Like, Anthropic has built Claude for classified settings; what happens if somebody else builds one and it's [01:18:00] not ready?

[01:18:00]

[01:18:00] Paul Roetzer: So, the final note. I found this site, because again, when I was searching, like, who's writing about this, I actually found a cybersecurity company.

[01:18:07] I think it's called Salt. We'll put the link in the notes, and I thought it was pretty well done. It said: an AI agent didn't hack McKinsey, its exposed APIs did. So it said this week's McKinsey incident should be a wake-up call for every enterprise moving fast to deploy AI. Not because AI itself is inherently insecure, but because too many organizations are still thinking about AI security at the model layer.

[01:18:30] While the real enterprise risks sit in the action layer: the APIs, MCP servers, internal services, and shadow integrations that AI agents can reach, invoke, and manipulate. This is the part companies still do not see. The technical details matter here. Public reporting described an internal AI platform with a broad API footprint.

[01:18:52] So he's referring to the McKinsey thing, including more than 200 documented endpoints and a set of [01:19:00] unauthenticated APIs that could allegedly be reached externally. The same reporting described potential exposure paths to tens of millions of chat messages, hundreds of thousands of files, user accounts, and system prompts.

[01:19:12] Whether or not every possible impact was realized, the takeaway for security leaders is clear: when internal AI systems are wired into weakly governed APIs, the blast radius can become enormous very quickly.

[01:19:25]

[01:19:25] Paul Roetzer: So again, for the non-technical people, this is why your technical peers are very cautious, rightfully so.

And why we often advise the non-technical people to focus on the AI use cases that don't have to touch the data, that don't need the APIs, because you need your technical team to be there to shepherd this stuff through in a secure and properly governed way.
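
To make the "action layer" point from the Salt write-up a bit more concrete, here is a minimal, hypothetical Python sketch of the kind of internal endpoint being described. The route name, token check, and fake data are illustrative assumptions only; nothing here reflects McKinsey's actual systems. The idea is simply that an endpoint like this with no authorization check at all is the kind of unauthenticated API an agent (or anyone) can reach.

```python
# Hypothetical sketch: an internal "action layer" endpoint that serves chat
# history. The route, token scheme, and data are invented for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json, os

EXPECTED_TOKEN = os.environ.get("INTERNAL_API_TOKEN", "change-me")
FAKE_DB = {"messages": ["example chat message"]}  # stand-in for production data

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/v1/chat-history":
            # The governance step: reject requests without a valid bearer token.
            # Remove this block and the endpoint becomes the unauthenticated
            # API described in the reporting.
            auth = self.headers.get("Authorization", "")
            if auth != f"Bearer {EXPECTED_TOKEN}":
                self.send_response(401)
                self.end_headers()
                return
            body = json.dumps(FAKE_DB).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

In a real enterprise this check would live behind a gateway with logging and scoped credentials, but the sketch shows why the action layer, not the model, is where the blast radius lives.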

[01:19:49] AI Politics Update

[01:19:49] Mike Kaput: Alright, next up, AI infrastructure and regulation are becoming more and more mainstream political issues.

[01:19:55] So we're doing another weekly politics update. There's been some significant action this week from both [01:20:00] sides of the political spectrum in the US. Senator Bernie Sanders issued an official statement calling for a federal moratorium on the construction of new AI data centers in the US. Sanders wrote, quote, we need a moratorium on AI data centers now, and he argued the rapid buildout of AI infrastructure is placing enormous strain on local power grids.

[01:20:19] It's driving up electricity costs and accelerating environmental damage. Separately, there is an effort by the Trump administration to preempt state-level AI regulations, and it reached a major deadline this week. According to Bloomberg Government, multiple states have passed or are actively considering their own AI laws covering everything from hiring algorithms to automated decision making.

[01:20:41] The Trump administration has been working to override this patchwork of state rules in favor of a lighter-touch federal framework that prioritizes industry growth over prescriptive regulation. So interestingly, on one side, Paul, we've got Sanders focused on [01:21:00] data centers, and then the Trump administration moving even further forward with trying to make sure no state laws about AI regulation actually get passed or continue to be in effect.

[01:21:10] Do you see, like, battle lines being drawn here? I know we've talked a lot about how everyone's trying to kind of fish around for their cause for AI in politics, but it seems like at least on the left, or at least the further left, the data center thing is kind of a slam dunk in terms of what you're already preaching and believing.

[01:21:27] Paul Roetzer: I definitely think that the Democrats are gonna keep exploring the data center and jobs messaging. Those seem to be the two major ones that must have polled well, I would guess, that people actually seem to care about. So jobs and data centers. I'm not sure how the Republicans handle that.

[01:21:54] I think denial of jobs impact right now is the playbook. Like they're, they're just pretending like it [01:22:00] doesn't exist.

[01:22:00] Mike Kaput: Right.

[01:22:01] Paul Roetzer: And pushing like positivity and you know, things like that. the data center one, I'm not sure how they get around right now. The bet seems to be that, that Americans don't care that much and that it's not gonna win.

[01:22:12] 'cause they, they need the data centers. yep. It's part of their strategy. So I don't know what the counter messaging would be on the Republican side. and then you have the independents down the middle, like an Andrew Yang who are just like, let's find solutions. Like, it's like, get people together and like, talk about all these issues.

[01:22:29] So I don't know. Yeah, it'll be interesting to see. But I think the other thing that's happening is, at these state law levels, you're gonna maybe start to get a sense of what people care about. Like, I think I saw one in New York where they were actually trying to outlaw AI being able to provide medical advice.

[01:22:43] Mike Kaput: Yes. Yeah, I saw that.

[01:22:44] Paul Roetzer: Yeah. And that's like an interesting one because that's become a dominant use. Like, I use it for medical stuff all the time.

[01:22:51] Mike Kaput: The labs are releasing health products.

[01:22:53] Paul Roetzer: Yeah. Like, I wouldn't want, again, Republican or Democrat, I don't care, I don't want someone taking away my ability to [01:23:00] talk to these things about medical issues.

[01:23:01] Like, if I start getting "I can't talk to you about that," it's like, well, that's a very helpful part of these models. So.

[01:23:07]

[01:23:07] Paul Roetzer: Again, what we try and do on this show is just look at, like, the reality and what is the best situation for everybody, and things like that make no sense.

[01:23:17] Like, I don't care what side you vote on, that's a logical approach. Now, I understand there's maybe some narrow uses where it makes sense, but I don't know. I don't know that in 2026 we're gonna get any true clarity on how AI is gonna be regulated. Obviously the current administration wants nothing to do with regulation at the state level.

[01:23:38] And I don't even think they want anything to do with regulation at the federal level. I think they just want to kick it forward. So I don't know. Like I said, we can do an AI politics update each week, really.

[01:23:48]

[01:23:48] Paul Roetzer: There truly are, like... this is a bipartisan thing, both for and against AI.

[01:23:53] Like, both sides just really aren't sure yet, I don't think. I don't think they're sure themselves how they feel about it.

[01:23:58] Mike Kaput: Yeah.

[01:23:59] Paul Roetzer: But more importantly, [01:24:00] from a political standpoint, how their constituents feel about it and whether or not it moves votes.

[01:24:05] Mike Kaput: All right. Next up.

[01:24:06] Grammarly AI "Expert Review" Controversy

[01:24:06] Mike Kaput: We have an interesting AI controversy that erupted this past week when it was revealed that Grammarly, the widely used writing assistant, had been commercially deploying AI clones of real public figures to sell a premium feature without asking any of them if they could do that.

[01:24:22] This feature was called Expert Review. It actually launched last August and was marketed as a way for users to get writing feedback from recognized authorities in various fields. These included people like Stephen King, Kara Swisher, and even a deceased historian. In practice, Grammarly had built AI models based on these individuals'

[01:24:42] public writing and professional reputations, then presented AI-generated feedback to users, kind of mimicking each expert personally reviewing their work. Now, none of the experts were asked for permission. The feature was opt-out, not opt-in. Journalist Julia Angwin actually filed a [01:25:00] class action lawsuit alleging unauthorized commercial use of individual identities.

[01:25:05] And the legal question at the center of this is, like, does an AI company have the right to commercially use someone's name, likeness, and reputation to sell a product? Seems like a pretty clear answer to that question. The backlash was really swift, though. Grammarly killed the feature within a week of this becoming a controversy, and their CEO posted on LinkedIn attempting to explain that the company had kind of screwed up and they were reevaluating.

[01:25:28] Ann Handley, a marketer who we know well, talked in public about this, capturing the prevailing sentiment here, basically saying, like, look, there's this take-first, apologize-later approach being taken here. And Paul, I'm curious, like, am I crazy? My first reaction was just like, this is insane.

[01:25:47] How is it possible that this wasn't killed the moment somebody suggested it?

[01:25:51] Paul Roetzer: Yeah. This opt-out versus opt-in is the OpenAI playbook. Like, go back to when they did Sora and other things like that. It's this assumption, and again, like, that's fine for some of [01:26:00] the higher-profile people maybe, but what about all the other artists and writers and people that don't get the chance to opt out?

[01:26:08] It is a very backwards thing, but it's a very Silicon Valley thing. So, in the spirit of this being a rapid fire, I'll just read the CEO's post real quick and then Ann Handley's response. Ann Handley, if you don't know her, is amazing. She wrote the bestselling book Everybody Writes, and Ann's incredible.

[01:26:24]

[01:26:24] Paul Roetzer: When Ann takes a position on something, it's a well-thought-out position and she articulates it better than most. And I know Ann, and she's not using AI to write her rebuttals, I'll say that. So, okay. The CEO's post on LinkedIn: Back in August, we launched a Grammarly agent called Expert Review.

[01:26:43] The agent draws on publicly available information from third-party LLMs to surface writing suggestions inspired by the published work of influential voices. Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices. This kind of scrutiny improves our products and we take it [01:27:00] seriously. As context,

[01:27:01] the agent was designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans. We hear the feedback and recognize we fell short on this. I want to apologize and acknowledge that we'll rethink our approach going forward.

[01:27:19] After careful consideration, we have decided to disable Expert Review while we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented, or not represented at all. We deeply believe in our mission to solve the last mile of AI, that's in quotes, by bringing AI directly to where people work.

[01:27:37] And we see this as a significant opportunity for experts. For millions of users, Grammarly is a trusted writing sidekick, ever present in every application, ready to help. We're opening up this platform so anyone can build agents that work like Grammarly, expanding from a sidekick to a whole team. Imagine your professor sharpening your essay, your sales leader reshaping a customer pitch, a thoughtful critic challenging your arguments, [01:28:00] or a leading expert elevating your proposal.

[01:28:02] For experts. This is a chance to build that same ubiquitous bond with users much like Grammarly has. But in this world, experts choose to participate, shape how their knowledge is represented, and control their business model. That future excites me, and I hope to build it with experts who want to develop it alongside.

[01:28:19] So again, this was a LinkedIn post, and Handley's comment was then: I appreciate the apology and decision to disable the feature. I'm writing as one of the experts whose name and work were included in Expert Review, without notification or permission. To be clear, this wasn't about execution or falling short on implementation.

[01:28:36] The fundamental problem was the approach itself: building a commercial feature around experts' names and reputations without asking permission, without notification, and without compensation. You frame this as a significant opportunity for experts, but the opportunity you created was entirely for Grammarly.

[01:28:53] The vision you're describing now, experts choose to participate, shape the representation, and control their business model, is [01:29:00] exactly what should have happened from the start. Writers, authors, artists, experts are in a tough place with AI: opt in to having your work used commercially without compensation, or become irrelevant.

[01:29:10] It's a binary choice. Or is it? Grammarly has a real chance to build something different, a model where experts are actually partners, not just raw material. I'm genuinely interested in what you build next, but getting there will require a fundamental shift from the take-first, apologize-later approach so many AI companies seem to adopt.

[01:29:29] Mike Kaput: Yeah, well said.

[01:29:30] Paul Roetzer: Yeah. And I'm with you. Like how that wasn't what came out in August. I don't, I don't understand. Other than everybody else gets away with stealing from people, so

[01:29:40] Mike Kaput: it's a shame too 'cause my just like gut sense is Grammarly has like a really, or had a really strong brand among people.

[01:29:46] Totally. Because it's meant to be an assistant, an augmentation. I meet a lot of people that are skeptical of AI, and they're like, oh, I love Grammarly. And I think writers love Grammarly. That's love, or loved, because I think this might do some real damage. It's bad.

[01:29:59] Paul Roetzer: Yeah. Like I think [01:30:00] I wrote sometime, I don't know if I saw it somewhere, but like a brand takes a lifetime to build and a moment to destroy.

[01:30:06] Yeah. Like you do, you build up trust over years or decades and you have these like very loyal users and then you undercut that trust with just a totally seemingly misguided business strategy. Where you just lose sight of the people that you're helping. They're probably paying customers in many cases.

[01:30:23] Like, I don't know if Ann uses Grammarly, but I would guess, you know, Ann has been a Grammarly supporter and maybe even an influencer for Grammarly in the past.

[01:30:29]

[01:30:29] Paul Roetzer: And I've had, you know, conversations with Grammarly in the past, and they've actually supported some Marketing AI Institute stuff.

[01:30:35] So yeah, it's, it's a great brand and

[01:30:37] Mike Kaput: yeah.

[01:30:37] Paul Roetzer: a good company and I, you know, I think they obviously made a misstep here and they're, it seems like they're heading down the path of trying to fix it. The question becomes how much damage was done to the brand in the process.

[01:30:49] Mike Kaput: Alright, Paul, one final big, rapid fire topic.

[01:30:51] Andrej Karpathy's Autoresearch Agent

[01:30:51] Mike Kaput: Then we're gonna wrap with some product and funding announcements and close out the episode here. So, next up, former OpenAI co-founder Andrej [01:31:00] Karpathy, who we've talked about plenty on the pod, shared results this week from his project called Autoresearch. This is a 630-line Python script that autonomously runs AI research experiments without any human involvement.

[01:31:12] So basically, what he's doing is he left this script running for two full days on a small language model he uses as a research test bed. And in that time, this script independently discovered approximately 20 distinct changes that improved the model's performance by 11% on standard language model benchmarks.

[01:31:31] So in AI research, even a single percentage point of improvement is really important; finding 20 in 48 hours is pretty striking. Now, this script operates in a continuous loop. It generates a hypothesis about what might improve the model, writes code to test that hypothesis, runs the experiment, evaluates the results against a benchmark, and either keeps the change or rolls it back. It then generates

[01:31:53] a new hypothesis and starts again, running 24 hours a day without [01:32:00] stopping. So Paul, I found this to be quite interesting, because I'm reading this not as an AI researcher but as a marketer, business person, writer, what have you. And I was like, what happens if this kind of thing exists in every domain?

[01:32:14] Paul Roetzer: Yeah. And that, that I think is the key here and why we often talk about what's going on in the coding world because it's all a prelude to what happens everywhere else. So if you're able to do this with that kind of stuff, like imagine being able to do that with, you know, campaign strategies. Some different things you're doing internally.

[01:32:29] And then the other thing he touched on was the, what, intelligence brownout, is that what he

[01:32:32] Mike Kaput: called it? Yes. I wanted to ask you about this. Yeah,

[01:32:34] Paul Roetzer: yeah. So like this idea that you become so reliant on these models and like they're doing these things and then they go haywire, they like, they shut down or the model stops working or it slows down.

[01:32:43] Or the government steps in, you know, makes a model illegal to use or something like that, and you're like, oh shit, that was my entire business. Like, right, I got rid of all my people, or I built an AI-native company, I was dependent upon all these agents working together, and now the agents stopped working and now my business is done.

[01:32:57] And again, we are just heading [01:33:00] into this. So many unknowns. Yes. And this idea of this scenario planning or like, you're playing these different things out. and I think these kinds of experiences start to, you know, really help us visualize what this future could look like. 'cause it's a very weird future.

[01:33:14] Mike Kaput: My God. I'm thinking even more and more individually about redundancies as well. We're lucky we use a bunch of different accounts, but I even got a message from one of our team members this morning being like, is anyone else seeing this? And it's the worst message of all time. It said, Claude is experiencing difficulty at the moment.

[01:33:28] And I was like, oh no. This is gonna determine the course of my day today.

[01:33:33] Paul Roetzer: Yeah. I was pushing on a project last night with ChatGPT 5.4 Thinking. Which, by the way, is an incredible model, really. Like, I saw somebody tweet that their experience with 5.4 is what they would've expected the leap to ChatGPT 6 to be.

[01:33:51] Wow. And I'm not, I don't disagree. Like I t's, it's been remarkable for high level strategy. Anyway. I was pushing on it yesterday on some stuff and it was just [01:34:00] slow as hell. Like it was, it was like I would type stuff in and it would take like five seconds for the words to appear. I was like, oh, like you just because you do, like, you get this idea, you're getting, you know, you're working on this thing and you've got like 20 minutes to like get to like the finish line with it and then the model just stops doing what it's supposed to do and you're like, damn it.

[01:34:18] Like, now I can't get back to it till Wednesday, 'cause my days are busy the next two days. So yeah, reliance on these models is something you need to be doing contingency planning for. I've talked with a couple of SaaS companies in recent days who have those redundancies built in, where they used to rely on a single model and now they have

[01:34:34] two or three models ready to go. But it's not the same, like, when you're in workflow and

[01:34:38] Mike Kaput: not all,

[01:34:38] Paul Roetzer: you're working with a specific constitution, or, it's like the government's realizing this, you can't just rip one LLM out and plug in another. It doesn't work that way. Yep.
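
For readers wondering what the redundancy Paul describes can look like in code, here is a minimal, hypothetical Python sketch of a provider fallback wrapper. The provider functions and their signatures are placeholders, not real SDK calls, and this is one simple pattern among many.

```python
# Hypothetical sketch of model redundancy: try a primary provider, fall back
# to backups if it errors, rate-limits, or times out. Names are placeholders.
from typing import Callable, List

def call_primary(prompt: str) -> str:
    raise TimeoutError("primary model is slow or down")  # simulate a brownout

def call_backup(prompt: str) -> str:
    return f"[backup model] response to: {prompt}"

def generate_with_fallback(prompt: str, providers: List[Callable[[str], str]]) -> str:
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)   # first provider that answers wins
        except Exception as err:      # timeout, rate limit, outage, etc.
            last_error = err
    raise RuntimeError("all providers failed") from last_error

if __name__ == "__main__":
    print(generate_with_fallback("Summarize today's AI news.", [call_primary, call_backup]))
```

As Paul notes, swapping providers is rarely this clean in practice: prompts, system instructions, and any constitution-style guardrails are usually tuned to one model, so a fallback path still needs its own testing before you rely on it.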

[01:34:47] AI Product and Funding Updates

[01:34:47] Mike Kaput: Alright, so Paul, some final AI product and funding updates as we close out the week here.

[01:34:52] So I'm just gonna run through these real quick. First up, Yann LeCun, who left Meta in November 2025, has raised just [01:35:00] over a billion dollars for his new startup, AMI Labs. It's the largest European seed round ever, at a $3.5 billion pre-money valuation. They're building world models, which are AI systems that learn from video and spatial data to understand how the physical world works, and they're an alternative to LLMs as they currently exist.

[01:35:19] Next up, Meta acquired Moltbook, an AI agent social network that went viral earlier this year, related to a bunch of OpenClaw agents (at the time, I think it was called Moltclaw or Moltbook). Moltbook is populated by OpenClaw models and agents. So basically they're going the route of wanting to own an AI agent social network.

[01:35:44] Google has announced a major Gemini integration into its Workspace productivity suite. They're bringing AI-powered content creation, summarization, and editing directly into Docs, Sheets, and Slides for business users. They're also integrating Gemini into Google Maps, adding the ability to ask [01:36:00] Maps questions in natural language.

[01:36:02] For instance: find a quiet restaurant near my hotel with outdoor seating. OpenAI is planning to fold Sora, its AI video generation tool that creates short clips from text prompts, directly into ChatGPT rather than keeping it as a standalone product. Perplexity, the AI-powered search engine, opened a waitlist for something

[01:36:21] it is calling a personal computer, but it's basically moving into the agentic operating system layer of AI. Microsoft, a thing we were talking about before, launched Copilot Health, an AI assistant designed to help users manage personal health information, interpret medical data, and prepare for doctor visits.

[01:36:42] Claude can now generate interactive charts and diagrams directly inside the chat window, letting users visualize data, explore results, and iterate on visuals without leaving a conversation. Anthropic also committed a hundred million dollars to a new Claude Partner Network, a program designed to fund [01:37:00] and support companies building products and integrations on top of Claude.

[01:37:05] They also have been busy this week with everything going on. They launched the Anthropic Institute, a standalone research and policy organization led by co-founder Jack Clark. It will study the societal implications of advanced AI and operate independently from Anthropic's product work. Meta has delayed the release of its next-generation AI model, internally code-named Avocado.

[01:37:26] According to the New York Times, no new timeline has been given yet. And Replit, the browser-based coding platform that lets users build software with AI assistance, launched Agent 4, its latest autonomous coding agent, which can build full apps from natural language instructions. Replit also announced a $400 million raise at a $9 billion valuation.

[01:37:48] And last but not least, Elon Musk responded to a post on X saying that xAI was not built right the first time around, so it is [01:38:00] being rebuilt from the foundations up. This was someone commenting on steps xAI was taking in terms of its approach. So Elon Musk is basically admitting this company, this fundamental AI lab, one of the top five or six in the race here, needs to be rebuilt from scratch.

[01:38:16] Paul Roetzer: Yeah. A couple of quick notes. If you think Facebook and Instagram are full of AI slop now, just wait. That's what the Moltbook hire, or acquisition, basically means. It's just more AI crap on social networks. Yeah, maybe it means some other stuff too, but that's the main outcome I see. The labs have become a three-horse race.

[01:38:32] So OpenAI, Anthropic, and Google are the only serious competitors, in the US at least; there are some international ones. Microsoft may get into that conversation eventually, but Meta and xAI are not serious competitors right now. Like no matter what they say, they're,

[01:38:45] Mike Kaput: yep.

[01:38:45] Paul Roetzer: Meta just cannot get out of its own way.

[01:38:47] They've spent the 16 billion or whatever on acquisitions, and now they're creating like another lab internally. Avocado is a mess. It's just not working. And xAI, I mean, Elon runs like seven companies right now. [01:39:00] Right. And for him to tweet randomly to someone, well, Beth is not a random user, but, that it wasn't built the right way, and you have all these co-founders who have left in the last like two months.

[01:39:11] Yeah, it's just a revolving door at xAI right now. So yeah, I think basically OpenAI, Anthropic, and Google are the only labs that really matter right now. That could change at any given time; they could figure things out, stuff like that. But those are the three to pay attention to, and the government's trying to undercut one of 'em, which basically leaves OpenAI and Google.

[01:39:31] So I,

[01:39:32] Mike Kaput: yeah,

[01:39:33] Paul Roetzer: that's kind of my takeaway from that. There's a

[01:39:34] Mike Kaput: lot going on. Paul, one final note before we wrap up here. Just a reminder, take this week's AI pulse survey. We have a question about how you feel about Atlassian's layoffs and also a question about your reaction to the New York Times quiz we discussed.

[01:39:49] So if you go to SmarterX.ai/pulse, we'd love to get your thoughts there. So Paul, thanks again for breaking everything down for us this week.

[01:39:56] Paul Roetzer: Absolutely, good stuff. And we will have two episodes this week. So [01:40:00] Cathy and I are actually recording right after we get done here. We're gonna do an AI Answers episode.

[01:40:04] So episode 204 will be dropping on Thursday, which would be the 19th, I think.

[01:40:10] Mike Kaput: I believe so,

[01:40:11] Paul Roetzer: Yeah. So two episodes this week, and we'll be back next week with our regular weekly episode. Thanks, Mike. Thanks everyone. Have a great week. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.ai to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community.

[01:40:44] Until next time, stay curious and explore ai.