The Artificial Intelligence Show Blog

[The AI Show Episode 201]: Anthropic vs. Pentagon Round 2, AI Job Impact Study, Services as the New Software & GPT-5.4

Written by Claire Prudhomme | Mar 10, 2026 12:15:00 PM

The gap between what AI can do and what it’s actually doing at work is closing faster than most companies are prepared for.

This week, Paul Roetzer and Mike Kaput discuss the escalating Anthropic vs. Pentagon saga, a Sequoia partner’s prediction that AI will replace entire service industries (not just software), GPT-5.4 dropping and outperforming professionals on economic benchmarks, and a Polish mathematician’s “personal singularity” moment. Plus: rapid-fire updates on AI journalism, OpenClaw security risks, AI copyright law, and Meta’s smart-glasses privacy scandal.

Listen or watch below, and scroll down for the show notes and transcript.

This Week's AI Pulse

Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI.

If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.

Click here to take this week's AI Pulse.

Listen Now

Watch the Video

Timestamps

00:00:00 — Intro

00:04:32 — AI Pulse Survey Results

00:07:00 — Anthropic vs. US Government Round 2

00:19:43 — Anthropic Analyzes AI Job Impact

00:35:30 — Services as the New Software

00:49:19 — Barriers to Enterprise AI Adoption

00:54:54 — GPT-5.4

01:00:55 — The Move 37 Moment for Math

01:05:39 — AI and Journalism Update

01:10:03 — NVIDIA CEO Calls OpenClaw “Most Important Software Release Ever”

01:13:35 — AI Art Can’t Be Copyrighted

01:16:08 — Meta Sued Over Smartglasses Privacy Concerns

01:19:21 — Microsoft Copilot Cowork

This week’s episode is sponsored by our 2026 State of AI Report.

This year, we’re going beyond marketing-specific research to uncover how AI is being adopted and utilized across the organization, and we need your help to create the most comprehensive report yet.

It’s a quick seven-minute lift. In return, you’ll get the full report for free when it drops, plus a chance to win or extend a 12-month SmarterX AI Mastery Membership. Go to smarterx.ai/survey to share your input.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: These things are getting really good, really fast, and it's not just gonna be happening to computer programmers this year. It's gonna start happening to everybody else. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.

[00:00:17] My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:38] Join us as we accelerate AI literacy for all.

[00:00:45] Welcome to episode 201 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We're recording Monday, March 9th, 9:30 AM Eastern Time, which is relevant because Microsoft already dropped Cowork [00:01:00] today, which we've been waiting for. So we did a last-minute add.

[00:01:04] We'll get to the Microsoft Cowork announcement a little later on in the show. So, I don't know, it was interesting, Mike. I didn't wanna jinx us going into this week. It was kind of a quiet week.

[00:01:16] Mike Kaput: Yeah,

[00:01:16] Paul Roetzer: I'd say, like, from an AI perspective. But some really interesting main topics we're gonna hit on that I think get into some of the bigger issues moving forward around jobs and the economy.

[00:01:28] It was a cool report from Anthropic. We'll touch on a really interesting article from a partner at Sequoia and then just get into, like, adoption. But, well, I don't know. I say quiet week, but we got 5.4 from OpenAI. I was like, nothing happened last week, but we're so used to this flood of stuff that, you know, what probably used to be a busy week just feels like, oh, okay, good.

[00:01:51] This'll be easier. Something like an hour and a half of prep before the show today.

[00:01:54] Mike Kaput: No doubt.

[00:01:55] Paul Roetzer: All right, so today's episode is brought to us by the State of AI for [00:02:00] Business Report. This is a new report we are working on, but we need your help. So right now the survey is in the field. This gives us a chance to hear how you are feeling about AI adoption and what's going on in your organization and your career.

[00:02:14] It takes about five to seven minutes to participate in this survey. We would love your input. Go to SmarterX dot ai slash survey and take that. You can put your contact information in, right? It's not required to put the contact information in, right? No, it's not. Okay. If you want to get the report sent to you, you can put your email in at the end.

[00:02:32] Mike Kaput: Yeah.

[00:02:32] Paul Roetzer: But you can take it anonymously if you'd like. So again, SmarterX dot ai slash survey, and that is the 2026 State of AI for Business Report. We had about 1,800 people take our State of Marketing AI report last year. This year we're expanding that outside of marketing, so we're hoping to get a lot more coverage in other departments, other areas, so that we can share a really good overview of what is going on in [00:03:00] business.

[00:03:00] And we're gonna talk a lot about that kind of concept today. The other thing we'll mention is also brought to us by the Intro to AI class that I teach every month. I've been doing this now since fall of 2021. We are on 56, I think, right? Yeah. Yeah. We're doing class 56 of the Intro to AI.

[00:03:19] So we've had more than 55,000 people, I think, at this point register for Intro to AI. It is a 30-minute class I teach; it's a Zoom webinar. And then I usually spend about 25 or 30 minutes on Q&A. We often get dozens of questions, and then we turn the unanswered questions into special episodes of this podcast.

[00:03:39] So you can go to academy dot SmarterX dot ai slash courses and then just scroll down to the free classes. Or in the navigation, you can actually click on Courses and go to Free Classes. Two simple ways. We will also put a direct link to the registration landing page in the show notes. So if you want to go to the show notes and click on that, that'll take you right to the [00:04:00] landing page.

[00:04:00] So again, academy dot SmarterX dot ai slash courses. If you don't personally need the Intro to AI class, share it with people on your team who do. It is, like, the first thing we tell people when they're like, well, how do we get started? I've got a bunch of coworkers who don't really understand it yet, aren't really feeling the sense of urgency.

[00:04:17] We always say, just send them to the free class; we do it every month. So that's a great thing. That is happening on March 12th, right, Mike? Yeah, yeah. Thursday, March 12th at noon Eastern Time will be the free Intro to AI, class number 56.

[00:04:32] AI Pulse Survey Results

[00:04:32] Paul Roetzer: Okay, so the pulse survey, we talk about this every week.

[00:04:36] Looks like we had 91 responses. So, we always say this is an informal poll of our listeners. This is not meant to be, you know, formal research. It's just taking a pulse of how people that listen to this podcast are reacting to different topics we talk about each week. So last week we asked, where do you stand on the Anthropic versus Pentagon dispute over AI safety red lines? 62% said [00:05:00] Anthropic is right to hold the line on autonomous weapons and mass surveillance, even,

[00:05:05] Mike Kaput: even at the cost of government contracts.

[00:05:07] is what the full question was. Okay. It kind of gets cut off in the Google Form summary.

[00:05:14] Paul Roetzer: So, pretty strong majority. Then we had about 18% say red lines are reasonable, but Anthropic should have negotiated more,

[00:05:22] Mike Kaput: Negotiated more quietly, too. People were kind of like, the big fuss of it was more the issue than the actual negotiation, which

[00:05:30] Paul Roetzer: maybe we just didn't do a good job of explaining.

[00:05:32] They were negotiating for four months. Like, this was not something that just happened over a three-day period; the government gave the three-day mandate.


[00:05:39] Paul Roetzer: And then 16 and a half percent said this is primarily a political power play, not a genuine safety debate. That's interesting.


[00:05:47] Paul Roetzer: Okay. And then the second one was: Block cut

[00:05:49] nearly half its workforce this past week and named AI as the reason. What's your reaction? 44% say the layoffs are real, but the pace will be slower than the headlines [00:06:00] suggest. Then 29% said it's mostly a correction from pandemic over-hiring and AI is a convenient narrative, and 25% said this is the beginning of a major wave of AI-driven layoffs across the industry.

[00:06:13] So when you combine the 44% and the 25%, Mike, we've got, yep, this is the beginning of a major wave, and the layoffs are real. Yep. So that would tell me sentiment is, yeah, there might be some of that pandemic stuff mixed in there and just some over-hiring. But our informal poll would say people are leaning in the direction of: this is getting real.

[00:06:32] And I'm gonna suggest that maybe after our conversations today, people may feel even more in that direction.

[00:06:41] Mike Kaput: Yeah.

[00:06:41] Paul Roetzer: Okay, Mike. So we're actually gonna hit on the Anthropic versus the government to start off, just give a recap of where that's at, and then we're gonna get into some of the AI adoption and some of the, maybe, reality that's starting to come to life around the AI impact on jobs and the economy.

[00:06:59] Mike Kaput: Yeah, [00:07:00] Paul.

[00:07:00] Anthropic vs. US Government Round 2

[00:07:13] Mike Kaput: So we are in Anthropic versus the US government, round two, here. So last week we had covered this initial unprecedented confrontation between Anthropic and, specifically, the Pentagon. Secretary of War Pete Hegseth had given an ultimatum to the company, and Dario Amodei had refused to allow Claude to be used for mass domestic surveillance or fully autonomous weapons.

[00:07:23] And this week the situation escalated even further. So on March 4th, kind of after we had covered this, the Department of War made good on its threats. They sent Anthropic a formal letter, officially designating the company as a supply chain risk. This makes Anthropic the first American company ever to receive a label normally reserved for foreign adversaries.

[00:07:45] So federal agencies moved quickly to comply with this new designation. The Treasury, State Department, and HHS all announced they're ending their use of Anthropic products. But here's kind of the crazy part. Even as the [00:08:00] government is moving to blacklist Anthropic, the US military is still actively using Claude in combat.

[00:08:05] Reports have shown that Claude is powering Palantir's Maven Smart System, which was used to identify over a thousand targets in the first 24 hours of the now ongoing operations in Iran. Interestingly,

[00:08:18] Paul Roetzer: Including a quick note: the possibility that Claude may have been involved in the selection of a target that ended up killing 150 children in a school that was next to a naval facility in Iran.

[00:08:30] So

[00:08:31] Mike Kaput: Yes. And we actually got some more background, at least from one perspective, on how we got here. So Pentagon Under Secretary Emil Michael actually went on the All-In podcast and talked about how, after the US military operation that captured Venezuelan dictator Nicolás Maduro in January, Anthropic had contacted Palantir to ask whether Claude had been used in that operation.

[00:08:55] We alluded to that in our initial overview of these events, but Michael actually [00:09:00] said on the podcast that this caused what they called a massive "whoa" moment at the Pentagon, where they suddenly realized they were completely dependent on a single AI provider that might suddenly shut off access mid-fight due to a guardrail or ethical objection, leaving war fighters stranded.

[00:09:16] This drama then started spilling even more into the public eye, because there was a 1,600-word internal memo from Dario Amodei that got leaked to the press, and in it he torched OpenAI. Specifically, they had this replacement deal with the Pentagon that we also covered last week. He said it was 80% safety theater.

[00:09:35] He accused OpenAI of, quote, straight-up lies, and claimed Anthropic was being punished for not giving, quote, dictator-style praise to President Trump. Amodei later issued a public apology for the tone of the memo; he said it was written on a very chaotic day and didn't reflect his considered views. But the fallout for OpenAI is also bad here.

[00:09:55] Sam Altman had to face furious employees at an all-hands meeting after [00:10:00] this deal, admitting the company's hasty Pentagon deal looked, quote, opportunistic and sloppy. In fact, they actually lost someone already: Caitlin Kalinowski, who worked on robotics at OpenAI and publicly resigned over the deal, stating that surveillance and lethal autonomy are lines that deserve more deliberation than they got.

[00:10:20] So where we stand now is, basically, Anthropic says it has no choice but to challenge the supply chain risk designation in court. Yet they have still, for the moment, offered to continue providing Claude to the military at nominal cost during the transition, to ensure frontline war fighters aren't deprived of the tools they're currently using in combat.

[00:10:39] So Paul, this is just getting messier. Both CEOs end up, for different reasons, looking pretty bad publicly. Like, where do you think this stuff actually lands, especially amidst literally an ongoing war using Claude?

[00:10:53] Paul Roetzer: Yeah. I don't know. I mean, I said last week on episode 200 that I thought they'd at some point, like, find a [00:11:00] common ground.

[00:11:00] It obviously got messier. You know, the internal posts from Amodei probably didn't help much.

[00:11:09] Mike Kaput: Yeah.

[00:11:10] Paul Roetzer: you know, at a time when they probably needed to try and tone it down a little bit that, you know, got out and, that's not gonna have that effect right now. obviously, you know, Sam sort of came out regretting how opportunistic everything looked on their end and, you sort of apologized internally for, for that and said they probably could have handled that better.

[00:11:30] So, I dunno, it's just a high-pressure environment. It doesn't help that there's so much active military conflict slash war being pursued at the moment by the US government, where these systems are being used, and they can't just flip a switch. I mean, the alternative from reports right now is that the only thing from OpenAI the government can use is GPT-4.1, which doesn't even have reasoning capabilities.

[00:11:57] So, like, what? Yeah, what good is that? [00:12:00] So from everything we're seeing now, obviously there's all kinds of things that could be happening behind the scenes that we're not, you know, made aware of, but the only model capable of being used in classified settings is Claude. And you don't just, like, flip a switch and change that.

[00:12:16] So it's just very bizarre to me that the government is taking the stance they're taking while probably still negotiating back channels to make this all work out. You know, I thought it was interesting that Amodei called out the fact, that I had alluded to last week on episode 200, that they also didn't give money to Trump, and that that was probably part of the reason why they don't like Anthropic.

[00:12:38] So I had pointed out that Greg Brockman, who's the president of OpenAI, and his wife Anna are massive givers to Trump and

[00:12:45] Mike Kaput: Yep.

[00:12:46] Paul Roetzer: supporters of a super PAC designed for the Republican Party. So there's definitely just elements of politics, like lots and lots of politics, involved here. The cloud companies had to come out and clarify for their customers, because if you think [00:13:00] about it, like, AWS and Google, and even Microsoft, now allow access to Claude

[00:13:06] through their clouds. And so they had to come out and say, listen, you, as a customer of our cloud services, still have access to Claude. This is only specifically for these government instances where, you know, we're not allowed to use it. Because there was a concern that it was a domino effect.

[00:13:22] Like, once the government identifies it as a supply chain risk, is it a broad supply chain risk, like no one can work with Anthropic, or is it narrow, specific to the use case? And based on Anthropic's own reading of the law, and it sounds like the cloud companies' readings of the law, it is narrow, meaning it's only for that specific use.

[00:13:42] So some of the things I'm watching for moving forward: do we see more AI researchers on the move? You mentioned the head of robotics at OpenAI who has, you know, left, it seems like, for more moral reasons than anything. Whether or not the government and Anthropic strike a deal despite the bluster and the egos and all this stuff going on.

[00:13:59] [00:14:00] Like, there's probably still a deal to be had. Even the, what was his name? Michael? The,

[00:14:04] Mike Kaput: Emil Michael, on that All-In podcast.

[00:14:07] Paul Roetzer: Yeah. So, I mean, some of the stuff he was saying was BS; like, there was obviously some political messaging in what he was saying. But outside of that, you get it. It's like, okay, no, I understand you wouldn't be dependent on a single vendor for any reason if you're the government.

[00:14:20] So a lot of what he said made sense. Then there was some just political, you know, stuff mixed in there. But he even said, like, hey, I'm a deal maker. Like, at the end of the day, I just wanna make this work. And if that means we do a deal with Anthropic, fine.

[00:14:35] Mike Kaput: Yeah.

[00:14:35] Paul Roetzer: so I do think both sides are very open to this.

[00:14:38] I think it's gonna be interesting to see how quickly these other labs are able to step up. I still have heard nothing from Google. Like, I may have missed a statement from Demis or Sundar or somebody, but I haven't seen anything from Google, which tells me they're probably working behind the scenes to, like, do something with them.

[00:14:55] If they don't already have something in place with them. I think they're a more likely [00:15:00] player than OpenAI in terms of their ability to, like, scale up and do something like this. So, I don't know. I mean, it'd be interesting to see what happens with these other labs and how quickly they fill the gap.

[00:15:11] Again, my guess, if I was, like, a betting man here and I was playing on Polymarket or something: I'm gonna imagine they do a deal with Anthropic, both save face in some way, they find an off-ramp to defuse this, and Anthropic keeps delivering what it does, and the government gets deals in place with the other labs.

[00:15:28] So they have backups and redundancies that avoid the concerns they have. And then the other thing, Mike, that I'm more and more starting to pay attention to, 'cause we're actually starting to see some data on it, is public sentiment on AI. So there was an NBC News poll; I think this was over the weekend, I saw this,

[00:15:44] where they talked to a thousand registered voters, and they're looking at, like, ideology over electability. So it was like a prelude to the midterms, and they're trying to start to figure out where the parties are gonna fall in terms of candidates who are more [00:16:00] ideological versus more electable.

[00:16:01] So there was a lot of politics in it, but then they asked this question. These were mostly done through phone interviews. So this is the interviewer saying: okay, now I'm gonna give you the names of several terms, public figures, places, and groups. I'd like you to rate your feelings toward each one as very positive, somewhat positive, neutral, somewhat negative, or very negative.

[00:16:26] If you don't know the name, then just say so and indicate as such. So then they go through a bunch of things and they say, how do you feel about this, about this, about this? So one of them is AI, that is, artificial intelligence; that's how they phrase it. It scored a negative 20, meaning the negative sentiment was 20 points higher than the positive sentiment.

[00:16:47] Now, in context, you know, we don't know if negative 20 is bad. Well, I'm gonna give you a little bit more context. So the Pope scores a plus 34. And I assume these are pretty balanced between, you know, Democrat and Republican [00:17:00] interviews, although some of the data would tell me it was probably more Republican people they were talking to.

[00:17:05] Because Stephen Colbert is the only other thing that had a positive rating, at plus 10. So apparently everybody loves Stephen Colbert. And then if we go down the list, I'll just hand-pick a few other ones to give you some sense. So: Marco Rubio, negative seven; Trump, negative 12; the Republican Party, negative 14; ICE, negative 18; AI, negative 20.

[00:17:28] The only things that scored lower than AI are the Democratic Party, at negative 22, and Iran, at negative 53.

[00:17:36] Mike Kaput: Wow.

[00:17:36] Paul Roetzer: So public sentiment for AI, from a thousand people interviewed with a plus-minus 3% margin of error, says that, generally speaking, the public does not like AI. And the other data would go back, you know, from multiple polls.

[00:17:52] They do this every, it looks like, five or six months. This was the first time this poll specifically showed sentiment towards AI, [00:18:00] which tells me they're obviously now trying to gauge that. As we've talked about many times on this show, the politicians are trying to figure out if AI can play a role in moving votes in the midterms.

[00:18:12] Hmm. And so this is a poll that tells you, like, okay, people don't like it, but what don't they like about it starts to become the thing. So that's the kind of stuff I'm watching for. Again, I think the Anthropic deal at some point gets done. They find the off-ramps and they save face and they, you know, make it work on both ends.

[00:18:29] But I don't know. I mean, we'll see if this week brings a little bit more toning down versus ramping up the animosity.

[00:18:38] Mike Kaput: Yeah. And we had talked last week, on episode 200, about the polling around data centers that was showing this big swing towards negative sentiment around data centers.

[00:18:48] I wonder if we are seeing the beginnings of this massively negative narrative wave for AI sentiment and opinion.

[00:18:55] Paul Roetzer: I honestly think it's gonna be really hard to avoid. Yeah. Like, again, if I was, you know, trying to prognosticate [00:19:00] a little bit, I think the benefits of AI are gonna be harder to convey.


[00:19:06] Paul Roetzer: And they're gonna take longer to realize, like scientific discovery as an example, where the negative effects of AI are going to be very obvious. Yeah. And if they are connected to job loss, as I would fully expect, it is felt by everyone. Like, everyone starts to know someone who, yeah, has lost a job because of AI.


[00:19:31] Paul Roetzer: And that, like, all the positive stuff that we hope will come, the abundance that you hope comes from AI, it's not gonna come as fast as the negative stuff.

[00:19:43] Anthropic Analyzes AI Job Impact

[00:19:43] Mike Kaput: Well, somewhat related, let's talk about this because in our second big topic this week, we have some new research and new work from Anthropic around AI and jobs.

[00:19:54] So this week Anthropic published this big new study that put some data [00:20:00] behind, you know, these kinds of warnings that AI is coming for white-collar jobs. So Anthropic researchers, as part of the study, created a new metric that they call observed exposure. So instead of just guessing what AI can do in different jobs, they actually compared theoretical LLM (large language model) capabilities against real-world anonymized usage data from Claude, to actually see what tasks people are automating right now in white-collar work.

[00:20:30] And the biggest takeaway here is there's this massive gap right now between theory and reality. So, for example, they find that AI can theoretically handle 94% of the tasks done by knowledge workers, but in practice, Claude is only covering 33% of those tasks. As this gap inevitably closes, however, the demographics of who is going to be hit hardest, from what they found, are striking.

[00:20:57] So they find that the most [00:21:00] AI-exposed workers are not things like physical laborers. They're highly educated and well paid. For instance, workers in the most exposed jobs, according to Anthropic's analysis, actually earn 47% more on average than the average worker. They're 16 percentage points more likely to be female, and they're nearly four times as likely to hold a graduate degree compared to unexposed workers.

[00:21:24] So they find the top three most exposed occupations right now are computer programmers, where 75% of their tasks are covered by AI, customer service reps, and data entry keyers. Meanwhile, about 30% of workers, things like cooks, mechanics, and lifeguards, have zero exposure. So I guess the good news here is that the researchers so far found no systematic increase yet in unemployment for the highly exposed white-collar workers

[00:21:53] since ChatGPT launched in 2022. But they are seeing some early warning signs for [00:22:00] Gen Z specifically. While companies are not doing massive layoffs yet, they have severely slowed down entry-level hiring, and the job-finding rate for young workers, defined as ages 22 to 25, entering the exposed fields that Anthropic was analyzing has dropped by roughly 14% compared to 2022 levels.
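To make the "observed exposure" idea concrete, here is a minimal sketch of how a theoretical-versus-observed gap can be computed at the task level. This is not Anthropic's actual methodology; the occupation, task list, and capability/usage flags below are invented purely for illustration.

```python
# Illustrative "observed exposure" style metric: compare the share of an
# occupation's tasks an LLM could theoretically handle ("capable") with
# the share that usage data shows people actually delegating ("used").
# All task data below is hypothetical.

def exposure_gap(tasks: dict[str, dict[str, bool]]) -> dict[str, float]:
    """Return theoretical coverage, observed coverage, and the gap (0-1)."""
    total = len(tasks)
    theoretical = sum(t["capable"] for t in tasks.values()) / total
    observed = sum(t["capable"] and t["used"] for t in tasks.values()) / total
    return {
        "theoretical": round(theoretical, 2),
        "observed": round(observed, 2),
        "gap": round(theoretical - observed, 2),
    }

# Invented example: a knowledge-work occupation broken into tasks.
paralegal_tasks = {
    "summarize depositions":   {"capable": True,  "used": True},
    "draft routine contracts": {"capable": True,  "used": True},
    "legal research memos":    {"capable": True,  "used": False},
    "cite-check filings":      {"capable": True,  "used": False},
    "client intake calls":     {"capable": True,  "used": False},
    "courthouse filing runs":  {"capable": False, "used": False},
}

print(exposure_gap(paralegal_tasks))
# prints {'theoretical': 0.83, 'observed': 0.33, 'gap': 0.5}
```

The same shape of calculation, run over real task taxonomies and real usage logs, is what produces headline numbers like "94% theoretically possible versus 33% observed."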

[00:22:20] So Paul, I'm curious, what's the thing here you think people kind of need to sit with? This seems, trend-wise, to be hitting on some of the things we've discussed in the past in other studies, but we know right now there's this kind of big gap between what is possible and what's actually happening when it comes to jobs being exposed to AI.

[00:22:38] Paul Roetzer: I think just the fact that the labs are now looking at this. They're realizing that these benchmarks they've previously been looking at, that are mainly testing for IQ, are saturated, and we're not gonna learn much from model to model, you know, based on an incremental point here, point there, on IQ tests.

[00:22:57] So they have to start looking more deeply [00:23:00] at actual work. This is on the heels of GDPval, which we talked about in September 2025. OpenAI came out with that, where they were doing something similar. At the time, they said GDPval was designed to help them track how well their models and others perform on economically valuable real-world tasks.

[00:23:19] And they're looking at, you know, basically they're starting with gross domestic product, a key economic indicator, and then drawing tasks from occupations in the industries that contribute most to GDP. So I think it's extremely important that in this six-month period, we now have labs being more realistic about where this is all going,

[00:23:37] by starting to look at the implications on real labor and the jobs. The paper itself, you know, people loved the radar chart in there. Yeah. Because it's, like, easy to understand. It shows these different professions, shows the exposure level they have, and then how little of it is currently happening. I think in some ways it gives people this false sense of security, like, oh, okay, good.

[00:23:59] Like, [00:24:00] my field is safe. As I've said a lot recently on this podcast, looking backwards isn't gonna tell us anything about where we're going. Yeah, the data isn't going to show the impact yet. You know, I think there's a lot of reasons why, but we're starting to see more of, like, you know, the Block kind of stuff.

[00:24:22] And we'll talk in the next topic a little bit more about this. But the key is they're starting to at least look at the exposure levels. So they're starting to look at the careers and the different roles, and they're breaking it down at the task level. So they're starting to look at how the tasks that make up jobs are going to be impacted.

[00:24:42] And again, this isn't even that new of a concept. In 2023, when GPT-4 came out, there was a paper on this concept where they were looking at the O*NET database, which breaks down like 800 professions. It's actually really cool if you've never looked at it. It's a government database, and you can go in and put [00:25:00] in a job profession and it breaks it down into like 20 or 25 tasks that make up that job.


[00:25:04] Paul Roetzer: And so that's in essence what they're doing here. So when I built JobsGPT, this is exactly what I did. So if you've never tried it, you can go to SmarterX dot ai slash jobsgpt. It's just a free custom GPT. And my goal there was actually, it was building on the OpenAI Microsoft paper from 2023.

[00:25:22] And what I was trying to do was project out, as the models get smarter and more generally capable, what the exposure level of different jobs will be.

[00:25:30] Mike Kaput: Yeah.

[00:25:30] Paul Roetzer: And so if you go into JobsGPT and you put a job title in, it does a task-level analysis. So it looks at what the tasks are that make up those jobs, and then it applies an exposure level.

[00:25:40] So then I devised, like, an 11-element exposure key, an AI exposure key that looks at things like voice capabilities of the models, advanced reasoning, persuasion, digital-world action or AI agents, physical-world action, it even gets into robotics. And so the whole point when I built JobsGPT was trying to, like, look at these jobs, break 'em [00:26:00] down into tasks and then say, what can AI do today, and what is it gonna be able to do as the models get smarter and the jobs become more exposed?
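The task-plus-exposure-key approach described here can be sketched roughly as follows. To be clear, this is not the actual JobsGPT implementation: the capability categories, the 0-3 rating scale, the weights, and the customer-service task ratings are all invented to show the shape of the idea.

```python
# Hypothetical sketch of a task-level "AI exposure key": rate each task of
# a job against a handful of model capability categories, then roll the
# ratings up into a per-job exposure score. Categories, scale, and data
# are illustrative only (a subset of the ~11 elements mentioned above).

CAPABILITIES = ["language", "reasoning", "voice", "digital_agents", "robotics"]

def task_exposure(ratings: dict[str, int]) -> float:
    """Average the 0-3 ratings across capability categories, scaled to 0-1.
    0 = no exposure today, 3 = fully handleable by current models."""
    return sum(ratings.get(c, 0) for c in CAPABILITIES) / (3 * len(CAPABILITIES))

def job_exposure(tasks: dict[str, dict[str, int]]) -> float:
    """Mean exposure across the tasks that make up the job."""
    return sum(task_exposure(r) for r in tasks.values()) / len(tasks)

# Invented task breakdown for a customer-service role.
csr_tasks = {
    "answer billing questions": {"language": 3, "reasoning": 2, "voice": 3},
    "escalate edge cases":      {"language": 2, "reasoning": 1},
    "update account records":   {"language": 1, "digital_agents": 3},
    "on-site kiosk support":    {"robotics": 1},
}

print(f"job exposure: {job_exposure(csr_tasks):.2f}")  # a value between 0 and 1
```

Re-running the same scoring as models gain new capabilities (say, robotics ratings rising from 1 to 3) is what lets this kind of tool project exposure forward rather than only describe today.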

[00:26:06] So that's the basic premise of what they're doing here: they're looking at this database of all these tasks, then trying to project out. The observed exposure right now is coming from the use of their APIs. So it's an imperfect thing, but here's the one thing I'm excited about. Yeah.

[00:26:22] In the fall, when we talked about Anthropic's efforts to look at this, it was almost exclusively being used as a coding tool.

[00:26:29] Mike Kaput: Yeah.

[00:26:30] Paul Roetzer: And now that Anthropic is becoming way more popular, even in the last 30 days, this kind of data, looking at their API usage and, like, Claude Cowork and things like that, you're gonna start to get a much more representative data set of how the exposure is spreading.

[00:26:47] So, yeah, that was kind of my notes on that. I think it's a good report. It's pretty dense; it's not for everybody. I would say, like, the main takeaways you hit on are probably sufficient for most people.

[00:26:59] Mike Kaput: Yeah.

[00:27:00] Paul Roetzer: If you're really intrigued by this, I would say go dig into it.

[00:27:02] But it is pretty heavy reading. It does allude, Mike, to the importance of establishing your own evals within your company. Like, knowing the three to five things that you want to test each time a new model comes out and having those in place. And the other thing that I got thinking about: so this morning I was listening to a podcast with Peter Diamandis.

[00:27:21] And he was interviewing Andrew Yang, who, you know, we've talked about, I think in episode 200, or maybe it was 199 or 198. He ran for president in 2020 on the premise of a universal basic income, because he saw the future of work being automated. Really good interview, but it got me really thinking this morning. And I actually have to talk to you and Taylor about this, because I'm like, we have to do more on the research front around this.

[00:27:46] Yeah. So it got me thinking about the social contract. They were talking a lot about this premise of the social contract and what is expected from workers. Like, I go to college, I get a good education, I come out, I'm a good worker. I should have opportunities, like, that I [00:28:00] should have the ability to earn a living and find fulfillment in my work if I fulfill my part of it.

[00:28:05] That's society's side of it; what it contributes back to me is that I get a job, and I can raise a family and buy a house and do all these things. Well, if that social contract breaks, and AI causes that breakage of the contract, then what happens?

[00:28:20] Paul Roetzer: So I just threw this at, actually, GPT-5.4 Thinking, which we'll talk about.

[00:28:26] GPT-5.4 Thinking just came out, and I just threw in, like, a prompt around this. I was like, talk to me about the social contract related to AI and jobs. And I'm just gonna read what it gave me. 'Cause I vetted this, and I was like, yeah, this is actually really good, but it's saying it better than I could say it.

[00:28:42] And I had, like, five minutes to get this ready before the show. So I just wanna read this, because I think this is really important for people to understand, and, like I said, it's somewhere we're probably gonna spend more time talking about and do more research on. So it says: the social contract is the implicit deal between workers, employers, and society about how work should function and what people can [00:29:00] reasonably expect in return with jobs and AI.

[00:29:02] That deal gets stressed because AI can change who does what work, how value is created, and who benefits from it. A simple way to think about the traditional social contract around work: people build skills and work hard; employers provide wages, stability, and opportunity; society supports education, labor protections, and a safety net.

[00:29:20] In return, people can earn a living, build a future, and have dignity through their work. AI raises the question: does that deal still hold if machines can do more of the tasks people used to do? The social contract debate around AI usually centers on a few big issues. One: if AI boosts productivity, who gets the gains?

[00:29:38] Do the benefits go mostly to the company owners and a small number of technical workers or AI-forward workers? Or are they shared with employees through better pay, shorter work weeks, retraining, and broader prosperity? Two: what does society owe workers whose roles are changed or displaced? If AI automates part of someone's job, is the expectation that they simply adapt on their own?

[00:29:59] Or do [00:30:00] employers and governments have a responsibility to help with retraining, transition support, and new pathways to work? Three: what is the obligation of employers using AI? Is it acceptable to use AI to cut labor costs, or should companies also use it to augment workers, improve job quality, and create new forms of value

[00:30:17] people can participate in? Four: what happens to fairness and dignity at work? If AI is used to monitor workers, score performance, or make hiring and firing decisions, people worry that the social contract breaks down unless there is transparency, accountability, and human oversight. And then finally, five: is a job still the main way people access security and status?

[00:30:37] If AI reduces the amount of human labor needed in some areas, society may need to rethink whether healthcare, education, retirement security, and basic economic stability should depend so heavily on having a traditional job. That was actually something that came up in the Andrew Yang interview I was mentioning.

[00:30:51] They were talking about universal basic services, which I hadn't really thought about. Like, there's all this talk about, let's just give people money, let's just pay them to live. But what if, [00:31:00] instead of doing that, we had a government say: you pay $250 a month and you get healthcare, education, retirement?

[00:31:07] Like, you got all that included into

[00:31:09] Mike Kaput: it. Yeah.

[00:31:10] Paul Roetzer: So when people say the social contract around jobs and AI, they usually mean: what new obligations should exist among companies, workers, and governments when AI changes the role of human labor? A healthy AI-era social contract might look like: workers get training and a real chance to adapt.

[00:31:26] Companies share productivity gains more broadly. AI is used to augment people where possible, not just replace them. Decisions that affect livelihoods remain accountable to humans. Society strengthens safety nets for periods of transition. And people retain dignity, agency, and economic opportunity even as work changes.

[00:31:46] So it really becomes a fairness and power question: if AI creates enormous value, what do people owe one another so that progress benefits more than just a few? Super, super important stuff. We've talked about all of those elements at [00:32:00] different times, in more isolation. And I think on this podcast we probably need to start having more conversations around this social contract related to work as a whole.

[00:32:12] It's kind of what we set out to do with our Marketing AI Industry Council, where we focused on the impact on people first. And that's the marketing talent report that we put out last month, where we were looking and saying, well, what happens to the people? And I don't think enough people are asking that question.

[00:32:26] And I will tell you this, Mike: I've done a lot of workshops, I've done a lot of public talks, I've spent a lot of time with executives and entrepreneurs, government leaders. No one wants to fire people. Like, when we talk about the inevitability of jobs going away, I have yet to meet a CEO who takes joy in getting to lay off 20% of their workforce.

[00:32:51] No one wants to do it, but there are going to be CEOs forced to do it.

[00:32:57] Mike Kaput: Yeah.

[00:32:58] Paul Roetzer: And so I think we [00:33:00] just have to be talking more about this. And I don't have the solutions. But, you know, the one thing I had floated last year is: is there a tax on automation? So if you are claiming you laid off 4,000 people because of AI, not only do you have to pay unemployment, but you have to pay an AI tax.

[00:33:16] And I have no idea; I've never heard anybody talk about that. It hasn't come up, from what I've heard, with government. But there's just these kinds of things. And that's why I would suggest, if you're interested in this topic, we'll put the link in the show notes, go listen to the Andrew Yang interview, because they talk about a lot of ideas.

[00:33:35] And the unfortunate part is you'll come away realizing we are nowhere near actually having solutions.

[00:33:43] Mike Kaput: Right?

[00:33:43] Paul Roetzer: But we actually do have some people thinking deeply about this, including Andrew Yang.

[00:33:48] Mike Kaput: Yeah. And it really seems like there is a wide-open road to plow through if you're a politician proposing some actual policies.

[00:33:56] I don't know if they'll be good or bad, but there's a [00:34:00] vacuum in terms of conversation about what we actually do about it. And this stuff doesn't change just by thinking about it, right? We have to have policies, elected officials, business leaders, private, public, et cetera, actually doing things about this for any of this to actually change.

[00:34:19] Paul Roetzer: Totally. And, I mean, there's lots that worries me, and I'll end here. Like I said, these are probably topics for more conversations. The most obvious path is: AI is going to create trillionaires. Elon Musk will be one before the end of the year, barring any massive collapse in one of his companies.

[00:34:38] There will be others. And the fastest path to some sort of solution is a version of universal basic, or high, income, where the trillionaires and billionaires give money back to private citizens.

[00:34:53] Paul Roetzer: And think about the downstream ramifications of that. Yeah. Like, okay, so now we've solved it; we've given some [00:35:00] people some money. But now the five people in the world who are controlling AI control society too, because they own the media companies and they're giving people money.

[00:35:09] It's like those people are so above the law at that point. It's just, I don't know. I would love to sit in on some think tanks about this stuff. Yeah. I'm sure there are fascinating conversations happening. I'm just not aware of them at a very deep level, and I at least want to know that they're happening.

[00:35:28] Mike Kaput: Yeah. Same.

[00:35:30] Services as the New Software

[00:35:30] Mike Kaput: All right, so next up we have an essay published by Sequoia Capital partner Julian Beck. And in it, he actually predicts the next trillion-dollar company won't be a traditional software provider; it will be a software company masquerading as a services firm, due to what AI now enables. So he said previously AI companies built what he calls copilots.

[00:35:54] They, you know, sell a tool to a professional that works with you. But Beck argues that [00:36:00] models are now smart enough to act as autopilots that sell the final work product directly to the buyer. So he divides work into his own buckets: first, intelligence, which is complex but rule-based tasks, and then judgment, which is tasks that require experience and instinct.

[00:36:18] He notes that AI has officially crossed the threshold to handle the first category, the pure intelligence tasks, autonomously. So the market math behind this is really interesting. He says that for every dollar that's spent on software, $6 is spent on services. And we'll get to why this matters in a second.

[00:36:38] So for example, a company might spend $10,000 on accounting software like QuickBooks, but then they spend $120,000 on the human accountant to actually use it. Beck says that the next legendary company will just close the books for you, and he actually maps out verticals that are ripe for [00:37:00] autopilot takeover.

[00:37:01] There is huge labor spend in a few big areas that he mentions. One is insurance brokerage, which spends 140 to $200 billion a year on salaries. Accounting spends 50 to $80 billion. The healthcare revenue cycle spends 50 to 80 billion. Recruitment is 200 billion plus dollar industry. And management consulting is three to $400 billion as well.

[00:37:27] So, Paul, this touches on the services-as-software idea that we have kind of discussed in the past: that we're not going to replace the TAM of software companies; we're going to replace the TAM of available salaries.

[00:37:45] Paul Roetzer: Yeah. So we talked a lot about this. I think it was on the SaaS apocalypse episode.

[00:37:50] Yeah. Might have been when we touched on this a little bit. And then it's come up a lot. The basic premise is: the software industry in the US is about [00:38:00] 300 to 500 billion dollars, roughly, in terms of revenue annually. Annual wages for knowledge workers, people who, you know, use computers for a living, are four to five trillion dollars.

[00:38:10] So the knowledge work labor market is literally 10x the software industry's revenue. So it's always been an inevitability that that was what Silicon Valley would build towards. It's what VCs would fund: companies that went after the much larger total addressable market. So I'll read a few excerpts from here, Mike.

[00:38:30] Some of it is just reinforcing what you said, but I think these are really important points. So the article said: every founder building an AI tool is asking the same question. What happens when the next version of Claude makes my product a feature? They're right to worry. If you sell a tool, you're in a race against the model.

[00:38:46] But if you sell the work, every improvement in the model makes your service faster, cheaper, and harder to compete with. So the example he used: writing code is mostly intelligence; knowing what to build next is judgment. Judgment is [00:39:00] different. It requires experience and taste, instinct built over years of practice: deciding which feature to build next, whether to take on tech debt, when to ship before it's ready.

[00:39:10] He used the example of Cursor. Cursor's a coding agent, if you're not familiar with it. He said users treated AI as autocomplete. Today, more tasks are started by agents than by humans. Software engineering accounts for over half of all AI tool usage across professions. Every other category is still in single digits.

[00:39:29] The reason is that software engineering is primarily intelligence work. AI has crossed the threshold where it can do most of the intelligence work autonomously and leave the judgment to humans. Software engineering got there first. It is coming for every single profession. And they actually had a chart. It said: in what domains are AI agents deployed?

[00:39:49] Software engineering is 49.7%. The next closest is back office automation at 9%. Other has 7%. But then, to give you a sense of some of the other ones, he's looking at [00:40:00] marketing and copywriting, 4%; sales and CRM, 4%; finance and accounting, 4%; academic research, 2.8%. Then cybersecurity, customer service, and it goes on.

[00:40:11] So he's kind of giving a sense. And then he gets into this idea of: the copilot sells the tool, the autopilot sells the work. So again, if you think about Microsoft and all that positioning, it's copilots. It's like, you're gonna work with the thing, and it's gonna do the work with you. Three to five years from now, that'll be a very misleading projection of what the future will look like.

[00:40:33] They're not gonna be copilots. It is gonna be a lot of autopilots; that's the reality of where knowledge work goes. So he said: today's judgment will become tomorrow's intelligence. This is a very important concept to understand. As AI systems accumulate proprietary data about what good judgment looks like in their domain, the frontier will shift.

[00:40:53] What that means is: right now you need copilots that work with the humans, because the human [00:41:00] still has the experience, the expertise, the judgment, the taste that knows what to do next. What he's saying is, once the models get enough training in specific domains, then they get judgment too. And maybe we don't need the human to have the judgment anymore.

[00:41:15] And now it can become an autopilot. Again, I always go back to the example of Full Self-Driving in a Tesla. Most of the time, it's just a copilot. Because a lot of times, like when I was driving my Tesla home from the airport Thursday night, it nailed a pothole. It didn't see it. I had Full Self-Driving on, coming down a road directly from the airport to my house, and it drilled a pothole.

[00:41:39] I was paying attention. Now, I didn't see the pothole either, because it was dark out. But that's an instance where the thing doesn't know what it doesn't know. And sometimes the human judgment has to come in and say: hey, this road is torn up because of all the snowplows; I'm gonna actually maintain control while I'm going.

[00:41:57] That was my mistake: I should have just maintained [00:42:00] control myself, and I could have swerved and not hit it. That's the premise. But at some point, maybe the car gets better than me at judging it. It recognizes that there are potholes everywhere, and it starts to kind of control that environment itself. So that's the basic premise: right now we are largely in the copilot phase, but coding and software engineering is moving much faster towards autopilot.

[00:42:22] And at some point, the rest of these professions start to follow the copilot-to-autopilot transition. So he said: the total addressable market for autopilots is all labor spend in a category, insourced and outsourced combined. But the right place to start is where outsourcing already exists. So this, again, is a really important concept to think about.

[00:42:41] If a task is already outsourced, it tells you three things. One, the company has accepted that this work can be done externally, so they're willing to outsource it. Two, there's an existing budget line that can be substituted cleanly. Three, the buyer is already purchasing an outcome. [00:43:00] Replacing an outsourcing contract with an AI-native services provider is a vendor swap.

[00:43:05] Replacing headcount is a reorg. The fourth, Mike, I would add there, is you don't have to fire anybody. So if you're already outsourcing it, then the best thing you can do is get rid of that. Now, that's not great if you are the company providing the outsourced services, but it at least buys you time to not have to lay a bunch of people off.

[00:43:23] So think about your company; think about what you currently outsource. That is the starting point as AI shifts to where it can do the intelligence and the judgment. And he laid out the playbook: companies should start with the outsourced, intelligence-heavy tasks, nail distribution, and expand toward the insourced, judgment-heavy work as the AI compounds.

[00:43:42] The outsourced task is the wedge. The insourced work is the long-term total addressable market. Plotting every services vertical on an intelligence-to-judgment spectrum and an outsource-to-insource ratio produces a priority map, with labor total addressable markets in brackets. The list is illustrative.

[00:43:57] So that's referring to a specific thing, which I would recommend you go [00:44:00] look at: the chart. Yeah, it is illuminating. And then I'll just mention a couple of things, Mike, to zoom into the ones you mentioned, because it gives a little context as to why insurance brokerage, for example, is a major one. So it says: insurance brokerage is 140 to 200 billion.

[00:44:15] The largest dollar market on the list. Standard commercial lines are highly standardized; the broker's value-add is essentially shopping across carriers and filling forms. That is pure intelligence work. The distribution layer is incredibly fragmented: tens of thousands of small brokers each running the same process.

[00:44:33] So no single incumbent controls the customer relationship. Accounting and auditing is another one: 50 to 80 billion outsourced in the US alone. The US has lost roughly 340,000 accountants over five years while demand has grown. 75% of CPAs are nearing retirement, the licensing path is long, and starting salaries lag tech and finance. That structural shortage is pushing firms to accept AI faster than almost any other [00:45:00] profession.

[00:45:00] Paul Roetzer: Let's see, I did another one: legal. So, legal and transaction work, 20 to 25 billion. Contract drafting, NDAs, regulatory filings: high intelligence, routinely outsourced. The work product is standardized enough that quality is verifiable, so the buyer can trust AI output without deep legal expertise.

[00:45:22] And then one final note. Actually, let me do the management consulting one, and then I'll add one more note. So, management consulting: 300 to 400 billion. Huge market, but the work is mostly judgment. The interesting question is whether AI can disaggregate consulting into intelligence components, data gathering, benchmarking, et cetera, and judgment components, like strategic recommendations, with the intelligence layer getting automated and the judgment layer staying human.

[00:45:47] And then this tweet was from Sunday, The, how do you say that, Mike? Kobeissi?

Mike Kaput: Kobeissi, I think.

[00:45:55] Paul Roetzer: Okay. The Kobeissi Letter. Yeah. So we'll put a link in. Here we go: finance-related [00:46:00] job openings are collapsing. Finance and insurance job openings fell 117,000 in December to 134,000, the lowest level since February 2012.

[00:46:12] Available vacancies in these sectors have dropped by 410,000, or minus 75%, since the 2022 peak. Openings are now even lower than the 2001 recession bottom. By comparison, the largest monthly decline during the 2008 financial crisis was 125,000. As a result, the finance and insurance job openings rate fell to 1.9%, meaning fewer than two out of every 100 jobs in the sector are currently vacant.

[00:46:43] The lowest since February 2010. Excluding the 2009-2010 lows, this is the lowest rate recorded this century. The finance industry is bracing for more layoffs.

[00:46:56] Mike Kaput: Wow. I love how he breaks this down. But [00:47:00] when you start seeing it broken down like this, it almost becomes obvious. It's like a no-brainer that this is where things would start to go, which is a bit scary.

[00:47:11] Paul Roetzer: Yeah, and we've talked about the accounting one on the show before. and I actually did some, I don't say like consulting work. I did some working executive sessions for some major accounting firms, and this is the thing I had illuminated for them was they had lost this 300, 400,000 CPAs during the pandemic.

[00:47:33] And so my point to them was like, okay, you're at a deficit of talent.

[00:47:36] Mike Kaput: Yeah.

[00:47:37] Paul Roetzer: That just accelerates someone building AI to do that job. And as soon as you fill the gap of those 300,000 to 400,000, you've now automated the need for anybody else. So you need the AI, but as soon as you train the AI on that domain, now you don't need

[00:47:53] the humans that were left.

[00:47:55] Paul Roetzer: It's a catch-22, almost. Like, again, [00:48:00] our whole point here isn't to take a position on any of this. Yeah. It's to illuminate the reality of where this goes. And yes, there are things that can change this, but more and more it's like this inevitability.

[00:48:13] It's going to happen, and how quickly, and what we do about it: those are the things we have to really start thinking about and be more proactive about.

[00:48:25] Mike Kaput: Absolutely. Alright, Paul, before we dive into this week's rapid fire, just a quick announcement that this episode is also brought to you by our upcoming AI for CMOs Blueprint webinar.

[00:48:36] This is a webinar unveiling our upcoming AI for CMOs Blueprint, an asset we're creating in partnership with Google Cloud, and it is happening Thursday, March 26th, at 12:00 PM Eastern, 9:00 AM Pacific. In the session, myself and SmarterX CMO Cathy McPhillips are going to break down the insights from this report, which contains real-world use cases, tools, and [00:49:00] advice for how CMOs can adopt AI.

[00:49:02] We'll also do some discussion and live Q&A. Registration is free, and all registrants will receive ungated access to the full AI for CMOs Blueprint. To register, go to SmarterX.ai/webinars. All right, so let's dive into this week's rapid fire.

[00:49:19] Barriers to Enterprise AI Adoption

[00:49:19] Mike Kaput: So first up, we have a post from Wharton professor Ethan Mollick that is highlighting the dynamic that while AI models are advancing faster than ever, there's a massive adoption divide in the corporate world.

[00:49:30] So he wrote on X this week, quote: It is amazing how many companies I talked to still have AI effectively blocked by IT and legal departments for out-of-date reasons, when many companies in highly regulated industries have figured out ways to deploy enterprise ChatGPT, Claude, and Gemini without any apparent problem.

[00:49:50] It is one of the weirdest divides. I speak to two companies in the exact same industry, and one has been using AI for the past 18 months. The other has a committee that has to approve every use [00:50:00] case individually and talks about how AI companies will train on their data. The deciding factor is whether an executive is willing to assume risk.

[00:50:07] If the answer is no, then risk reduction forces in the organization, IT and legal among them, have every incentive to avoid anything that might even be rumored to cause a problem. It is a leadership question. So, Paul, I have to say, reading that certainly hits home for me with a few of the conversations we've had with companies and things we've seen.

[00:50:30] Does that align with what you're hearing today, that there's kind of this diffusion or adoption gap based on internal, let's call it, leadership issues, bureaucracy, et cetera?

[00:50:40] Paul Roetzer: There was a article in the Wall Street Journal, and it was end of last week over the weekend, that was AI Needs Management consultants after all.

[00:50:47] And this was the basic premise. It's like: why OpenAI just announced their Frontier Alliance, why Anthropic's doing deals with these consulting firms. They need the people that have the trusted relationships with these enterprises to get in there and show them [00:51:00] how to use the platforms.

[00:51:01] Mike Kaput: Yeah.

[00:51:02] Paul Roetzer: So I mean, we're definitely seeing that.

[00:51:04] And then, this was what I ended up writing the editorial about in my Exec AI Insider newsletter that I do on Sundays. This is the editorial; I was like, I just have to address this, so I'll just read it. So if you get the newsletter, you've read this already, but it's the most concise way I can say it.

[00:51:21] So I'll just read what I wrote for Sunday. It said: In March 2023, two weeks before the release of GPT-4, I published the Law of Uneven AI Distribution, which stated: The value you gain from AI, and how quickly and consistently that value is realized, is directly proportional to your understanding of, access to, and acceptance of the technology.

[00:51:41] In the post I wrote, so this again is what I wrote in March 2023: I've been thinking a lot lately about the AI adoption curve, both in our personal and professional lives, and who stands to benefit most from rapid advancements in the technology. Where I've landed is that the impact and benefits of AI will be unevenly distributed to individuals, companies, [00:52:00] and industries.

[00:52:01] In some cases, it will be by your own choice, and in others it will be by institutional design. For example: financial services companies blocking employees' ChatGPT access, educational systems at all grade levels struggling to adapt to generative AI capabilities in curricula, and my own unwillingness to install some super interesting AI apps.

[00:52:20] That's because I don't know or trust the companies and how they'll use my data. This uneven distribution will create dramatic differences in people's experiences with and perceptions about AI. It will profoundly impact how much you reap the benefits of AI in your personal and professional lives, how much value your company extracts from the technology, and the trajectory of your AI journey.

[00:52:40] So then I continued in the newsletter on Sunday. I said: here we are three years later, and the law continues to hold true. In the last two months alone, I've spent time with leaders of major financial and healthcare institutions that are still blocking access to generative AI platforms in their companies.

[00:52:55] And I've met with school administrators at high school and college levels who are [00:53:00] searching for answers on how to handle AI in the classroom. And personally, I have yet to go down the path of exploring OpenClaw or even Claude Cowork, to be honest with you. Yeah. And OpenClaw is an open-source AI agent that runs locally on your computer to perform real-world tasks. I haven't, because of the security and unknown risks.

[00:53:16] So I'm not accepting of the risk, like what I have to give up to get the benefit. So for companies struggling to see ROI from AI, you have to go back to the basics: AI understanding and access. The companies that are racing forward and realizing the benefits of AI are prioritizing AI literacy and empowering their employees to integrate generative AI solutions into their workflows.

[00:53:38] But one of the things I see, Mike, all the time is they're not starting at the top. Like, the AI literacy is kind of from the bottom up.

[00:53:44] Mike Kaput: Yeah.

[00:53:44] Paul Roetzer: Whereas it actually needs to be happening at the bottom, but it has to also happen from a leadership level. 'cause they won't give the priority and urgency needed to AI transformation if they don't understand AI capabilities themselves.

[00:53:57] So you have to focus on that. You have to start with [00:54:00] those fundamental things, or else we're just gonna be in this continual cycle of: here we are three years later, and those same three principles, lack of understanding, lack of access, lack of willingness to give up or accept the risks, are still preventing companies from doing anything with AI.

[00:54:16] Mike Kaput: Yeah. And like we talked about on a couple past episodes, as we see these predictions from people, right? Like Microsoft, CEO of ai, Mustafa Suleyman saying like, Hey, in 18 months all white ColorWorks going to be automated. Like, look at these timelines, the reality of the lot messier and more uncertain and more nuanced.

[00:54:34] Paul Roetzer: I have a hundred percent been with companies of late that if they even have rolled out Gen AI in its current form to all their employees in 18 months, I'll be shocked. Like it's,

[00:54:44] Mike Kaput: yeah.

[00:54:45] Paul Roetzer: It is there people underestimate human friction. Like it is, it is a massive barrier to doing this the right way in companies, especially the big enterprises.

[00:54:54] GPT-5.4

[00:54:54] Mike Kaput: And to your point in the previous topic about that's why some of these outsource things that are already [00:55:00] being off your plate and hiring someone to do are really natural starting point for a lot of this adoption. Yep. All right, next up. openAI's, you know, had some chaos, obviously with the whole Pentagon and Anthropic situation.

[00:55:14] But in the midst of all this, they actually dropped a massive product update: they released GPT-5.4. It was released on March 5th in three variants: there's a standard version, a Thinking version for reasoning, and a Pro version for maximum performance. OpenAI is positioning this as its most capable model ever for professional work.

[00:55:33] It is also taking a direct shot at Anthropic's enterprise customer base by launching new native integrations for things like Excel. The benchmark jumps on this model appear to be huge. It is the first general-purpose AI model to natively surpass human performance on computer use tasks. So on the benchmark OSWorld-Verified, it scored 75%, which blows past the human baseline of 72.4% and [00:56:00] crushes GPT-5.2's score of 47.3.

[00:56:04] On OpenAI's relatively new benchmark called GDPval, which we referenced in a previous topic and which tests knowledge work across 44 different occupations, GPT-5.4 matched or exceeded industry professionals 83% of the time. It also saw its abstract reasoning score on the ARC-AGI-2 benchmark jump to 73.3%.

[00:56:26] That's up from 52.9%, and it is the first general reasoning model rated high capability in cybersecurity, according to OpenAI. Under the hood, it features a massive million-token context window and introduces a new tool search feature that reduces token usage by 47% by dynamically looking up tool definitions.
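For readers curious how a "tool search" feature like the one described above might work, here is a rough, hypothetical sketch: instead of sending every tool definition with every request (which costs tokens), the agent searches a registry and sends only the matching definitions. None of this is OpenAI's actual implementation; the registry, function names, and token counts are invented for illustration.

```python
# Hypothetical sketch of dynamic tool lookup. Each entry stores a rough
# token cost for including that tool's definition in a prompt.
TOOL_REGISTRY = {
    "excel.read_range":  {"desc": "Read cells from a spreadsheet", "tokens": 180},
    "excel.write_range": {"desc": "Write cells to a spreadsheet", "tokens": 210},
    "web.fetch":         {"desc": "Fetch a URL", "tokens": 90},
    "calendar.create":   {"desc": "Create a calendar event", "tokens": 150},
}

def search_tools(query: str, registry=TOOL_REGISTRY):
    """Return only the tool definitions whose name or description matches."""
    q = query.lower()
    return {name: spec for name, spec in registry.items()
            if q in name.lower() or q in spec["desc"].lower()}

def token_cost(defs):
    """Total prompt tokens spent on the given tool definitions."""
    return sum(spec["tokens"] for spec in defs.values())

# Sending everything every turn vs. searching first:
all_cost = token_cost(TOOL_REGISTRY)   # every definition, every turn
needed = search_tools("spreadsheet")   # just the Excel tools for this task
lazy_cost = token_cost(needed)
print(all_cost, lazy_cost)  # lazy lookup sends far fewer definition tokens
```

The savings come from the second number being a fraction of the first whenever a task only touches a few of the available tools, which is the intuition behind the 47% reduction mentioned on the show.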

[00:56:49] So Paul, OpenAI has released GPT-5.4 right in the middle of their controversy and chaos with the Pentagon. [00:57:00] Its benchmarks are pretty impressive, and in early tests people seem to really enjoy the model, myself included. What stood out to you about what the model can do and where we're at with GPT-5?

[00:57:08] Paul Roetzer: That GDPval number is pretty shocking.

[00:57:12] Yeah. So, matched or exceeded industry professionals 83% of the time.

[00:57:18] Mike Kaput: Yeah.

[00:57:18] Paul Roetzer: I just, I don't know, we can throw all this data out, and it's hard to understand the significance of it. Again, think about how fast this is all happening. GPT-4 came out in March, almost three years ago to the week.

[00:57:36] Like, I think it was like

[00:57:38] Mike Kaput: yeah.

[00:57:38] Paul Roetzer: Mid-to-late March 2023. So we're talking about three years. I don't know what GPT-4 would've scored, but it would've been in the single digits, if they even had these benchmarks back then.

[00:57:49] Mike Kaput: Right? Right.

[00:57:49] Paul Roetzer: So, I mean, again, like I think so many companies have a lack of urgency around this.

[00:57:55] And if you just look at a three-year trend line, we are talking about [00:58:00] exponentials in terms of these models' capabilities to do the work that your people do. I just don't know at what point people are going to accept all this and act on it. I don't know what the data point is that people need to see. Maybe it's that their company does the 10 or 20% layoff and then they realize this is real.

[00:58:23] Mike Kaput: Yeah.

[00:58:23] Paul Roetzer: Or they're asked to have a 10% contingency for June of this year, and now the CMO is like, oh my gosh, I didn't know this was actually happening to my company. I don't know what it is, but it's real. I don't know how else to say this.

[00:58:38] Mike Kaput: Right.

[00:58:38] Paul Roetzer: and again, I know we're preaching to the choir with listeners to our show.

[00:58:43] Like, you are all in, in theory. You have proactively chosen to listen to a show about artificial intelligence, so you are probably in the know on this. And I'm just gonna tell you, most people aren't there with you. All right. So whatever [00:59:00] data point you need to show your peers, your friends, your leaders, show 'em. Like, we gotta move, people.

[00:59:06] These things are getting really good, really fast, and it's not just gonna be happening to computer programmers this year. It's gonna start happening to everybody else. And, yeah, I've only tested it on a few things, but I had one very specific high-value use case on Friday that I was actually doing.

[00:59:25] Mike Kaput: Yeah.

[00:59:25] Paul Roetzer: And my personal experience lately, I've mentioned this on the show, is that if it's a high-value strategy thing, I normally will test at least three different models.


[00:59:34] Paul Roetzer: So I'll have Gemini, I'll have Claude, and then I'll have ChatGPT, but then I'll even test variations of those models. So in ChatGPT I may do 5.4 Thinking, and then I'll do 5.4 Pro.

[00:59:45] I'll do it with and without my Co-CEO GPT. So I'll experiment with all these different ways to try and get a massive gain in terms of the value. And the one project I did with 5.4 was extremely impressive, but I didn't [01:00:00] have time to run it against Claude 4.6 and compare it yet.

[01:00:03] Yeah.

[01:00:03] Paul Roetzer: But it was impressive. And it's, I will say it's a task that I would otherwise be paying tens of thousands of dollars for.

[01:00:12] Mike Kaput: Right.

[01:00:12] Paul Roetzer: And it did it very well. Something I would have outsourced for tens of thousands of dollars, it did in about three minutes while I was picking up my kids.

[01:00:20] Mike Kaput: Yeah. That's a bit eye-opening.

[01:00:23] We will repeat our weekly reminder till we're blue in the face: if you don't have a paid edition of these tools, go get one and test them out for yourself. You don't even need to read the benchmarks or hear the data or look at the charts. Just go use the tool. Stop what you're doing, pause this podcast even, and go try it out.

[01:00:43] Paul Roetzer: I get it every week. I still get some: oh my gosh, I tried this thing and it just didn't do it. And I'm like, was it the thinking version? What model did you use? It changes things when you use the thinking versions.

[01:00:55] The Move 37 Moment for Math

[01:00:55] Mike Kaput: Yep. All right. Next up. Less [01:01:00] than a year ago, Polish mathematician Bartosz Naskręcki was a vocal AI skeptic.

[01:01:06] He dismissed the technology as a very advanced calculator that could not understand deep mathematics. Today, however, he has declared that his, quote, personal singularity has arrived. The catalyst for this change in opinion was something called FrontierMath. This is an incredibly difficult benchmark created by a company called Epoch AI to test models on research-level mathematics.

[01:01:30] Naskręcki was one of 30 global experts invited to contribute to this benchmark. He designed the hardest problem tier possible, what they call a Tier 4 problem, based on 15 to 20 years of his own accumulated research. This problem was a beast. It featured a documented 13-page solution and required an answer that was a massive number to prevent lucky guessing; you can't just stumble on the answer.

[01:01:54] And it was specifically designed so [01:02:00] that a PhD-level mathematician would need at least a month just to figure out an approach. But during a recent evaluation, OpenAI's newly released GPT-5.4 Pro solved the problem. It became the first AI system to ever crack this particular problem.

[01:02:17] And what shocked Naskręcki wasn't just that the AI solved it, but rather how it solved it. It didn't brute-force its way to a correct answer. It successfully extrapolated a pattern to bypass more advanced mathematical machinery. Naskręcki called the solution very nice, clean, and, quote, feels almost human. He compared the breakthrough to something that should sound familiar to listeners of this show: AlphaGo's famous Move 37.

[01:02:42] A moment where the machine didn't just win, but demonstrated genuine creative insight. This is not an isolated incident, either. It represents a pretty big acceleration in AI's reasoning capabilities. In mid-2025, only three FrontierMath Tier 4 problems had ever been solved. [01:03:00] Today, 42% of those brutal problems have been cracked at least once. And rather than feeling displaced by AI, Naskręcki actually said he's kind of embracing it.

[01:03:09] Now, as a collaborator, he put it like this. He said, quote, at least I've gained a tool that understands my ideas on par with the top experts in the field. My singularity has just happened, and there is life on the other side. Off to infinity. That's a pretty powerful moment, Paul. And I realize most of us are not world-class mathematicians, but it's significant that someone of this caliber in this field is now doing an about-face on what's possible with AI.

[01:03:37] Paul Roetzer: Yeah, I liked seeing this, at least his response to it. Just as a reminder, if people haven't watched it yet, my keynote from our MAICON event in 2025 is available on YouTube. You can watch the whole thing. It was about the Move 37 moment for knowledge workers. So this is something I've thought a lot about.

[01:03:55] And I tend to focus on the human element of it. So there's the technological [01:04:00] capability part of it, but the human element I defined as the moment when you realize AI is better than you at what you do.

[01:04:05] Mike Kaput: Yeah.

[01:04:06] Paul Roetzer: So for many people it starts with individual tasks. Like, oh, it writes abstracts better than I do.

[01:04:10] And there's just that one thing. But then at some point it starts to add up, and you realize it's just kind of generally better at what I do, and I gotta figure out what else I'm gonna do and where I'm gonna find my fulfillment. So my talk was a very optimistic look, but it was also about the fact that we were all going to have these moments, as writers, as consultants, as accountants, as lawyers, as whatever, where it's

[01:04:35] just as good or better than you at the core thing you've done.

[01:04:39] Mike Kaput: Yeah.

[01:04:39] Paul Roetzer: And that's a weird moment that we have to prepare ourselves for. So I always like highlighting examples of it, especially when there's an optimistic view of: I had it, and life's still okay. I'm still gonna go on and solve other problems and do other things.

[01:04:53] Mike Kaput: And especially in these kinds of fields, when we're talking science and mathematics and, you know, [01:05:00] research, AI being able to crack the code on some of these problems starts to hint at the fact that AI may start discovering new knowledge for us and doing new math. And he kind of alludes to that in some of what he talked about as well.

[01:05:11] Paul Roetzer: Yeah, I mean, at the most fundamental level, the universe is mathematics, right? Everything can be broken down into mathematics. And so if AI is really good at mathematics, it actually bodes well for scientific discovery and the expansion of our human knowledge and understanding of the universe, in kind of a meta way. It's really exciting.

[01:05:32] Mike Kaput: Yeah.

[01:05:34] Paul Roetzer: Not meta the company, by the way,

[01:05:37] Mike Kaput: who we will, we will talk about.

[01:05:39] AI and Journalism Update

[01:05:39] Mike Kaput: So in our next topic, tension over AI and journalism has reached another boiling point this week, highlighting a massive divide between media executives pushing the technology and rank-and-file reporters dealing with its realities.

[01:05:53] Over at the Associated Press, Amy Reinhardt, who is the senior product manager for AI, sparked a [01:06:00] firestorm in an internal Slack channel when she told staff that, regarding AI in the newsroom, quote, resistance is futile. Referencing a recent editorial from cleveland.com about their use of AI, which, by the way, we covered on episode 198, Reinhardt suggested reporters should just be in the business of gathering quotes and let LLMs actually write the stories.

[01:06:22] She actually claimed that many editors would prefer an AI-written article over a human-written one. Understandably, the pushback here was fierce. One AP reporter called the comments, quote, insulting and abhorrent, defending human writing over AI-written slop. Another noted that AI managers seemed to live in a, quote, totally different reality than working journalists.

[01:06:44] The AP itself officially distanced itself from her remarks. So Paul, in episode 198 we talked about cleveland.com editor Chris Quinn publishing an editorial basically saying that AI was the future, and that journalism schools were letting down [01:07:00] students if they were not telling them to embrace AI.

[01:07:02] He even talked about how his own newsroom is basically doing what Reinhardt has outlined, which is letting AI generate the stories and keeping reporters focused on reporting. It just keeps getting messier and more confusing. There's a lot of powerful feelings around this.

[01:07:20] Paul Roetzer: Yeah. With the Associated Press stepping in, you know, even if it's not a formal Associated Press position and it's just

[01:07:26] one person within, that affects the industry greatly. For people who maybe don't understand the journalism industry, the Associated Press having a say in this is significant. And I can say firsthand, journalism schools are really struggling to figure out how to adapt their curricula, what the future of that industry looks like, how to prepare their students.

[01:07:52] We've talked about this on a few recent episodes. I don't know the resolution to this one, Mike. And you and I are as close to this one [01:08:00] as probably anyone, given that we both have a background in journalism. I don't know how this ends, but this industry is gonna be faced with some very significant challenges adapting to AI, because one of the biggest friction points we see in AI adoption is the willingness of the people within a company to do it.

[01:08:22] And many journalists, most journalists, didn't go into the profession to become independently wealthy. They went into the profession because they believed deeply in the stories that needed to be told and the impact those stories would have.


[01:08:38] Paul Roetzer: And to tell them, you're not gonna do that anymore.

[01:08:43] You're just gonna have AI do that. You're taking away the reason why they do the job.

[01:08:50] Mike Kaput: Right.

[01:08:50] Paul Roetzer: And so you're gonna just have massive resistance to that. And I don't know that the journalism [01:09:00] schools are going to create enough of these people. I don't even wanna use the term AI-forward, 'cause I'm not even sure AI-forward applies to what they're asking these people to be.

[01:09:08] Human assistants to the AI. It's almost a reversed thing. They're asking for the AI to be the thing, and the human is just there to assist the AI. And I can't imagine journalism schools are gonna be pumping out those kinds of people from their classrooms. So I don't know how this gets resolved, and I've thought about this one many times through the years.

[01:09:31] Mike Kaput: And it's telling to see some of the writing on the wall, so to speak.

[01:09:35] Again, not agreeing or disagreeing with it, but, okay, the Associated Press distances themselves from the comments. Fair. But the person saying this stuff is not a journalist. This person's job title is literally senior product manager for AI. That should tell you something about the Associated Press's priorities.

[01:09:53] Paul Roetzer: correct?

[01:09:54] Yeah. You can say they don't support it, but that doesn't mean it's not true.

[01:09:57] Mike Kaput: You're right.

[01:09:59] Paul Roetzer: It just means we're [01:10:00] trying to stop a shit storm for a while until we figure out like plan B here.

[01:10:03] NVIDIA CEO Calls OpenClaw “Most Important Software Release Ever”

[01:10:03] Mike Kaput: Yep. All right, so next up: NVIDIA CEO Jensen Huang made a big statement this week regarding the booming AI agent space.

[01:10:12] He publicly called the new open-source AI agent tool OpenClaw, quote, the most important software release probably ever. So we've talked about OpenClaw before. It kind of acts as an always-on digital employee running locally on your machine, and you can interact with it by messaging it. It's basically an AI agent that will go do whatever you ask it to and try to do stuff for you on its own.

[01:10:35] And the adoption of this tool has been pretty unprecedented. By early February, shortly after its release, it broke records by hitting over 145,000 GitHub stars and drawing 2 million visitors in a single week. It was created by an Austrian developer named Peter Steinberger, who built the initial prototype in literally like an hour because he wanted it to exist. It became this viral success.

[01:10:59] And he [01:11:00] became a highly sought-after developer in Silicon Valley and recently accepted a lucrative offer to join OpenAI. So Paul, we've talked a bit about OpenClaw's rise, Peter's story, him joining OpenAI, but it's pretty interesting that Jensen is now calling this the most important software release ever.

[01:11:20] Like, do you think he's right on that?

[01:11:22] Paul Roetzer: I don't know. I mean, I think it's gaining a lot of traction. I think the underlying potential of it, assuming it's secure and safe, and that anybody could do something like this and it can speed up work... if he's projecting that future, then yeah, I could see that maybe being true.

[01:11:36] It's hard to, like, it's a very abstract thing to try and wrap your head around and see what he's seeing. I did retweet something from Allie K. Miller. Let's see when this was. This was March 5th.


[01:11:48] Paul Roetzer: And she had tweeted: I went to a sold-out OpenClaw meetup in New York last night. Let me tell you what I learned.

[01:11:53] And then she goes point by point, and I'll just hit a couple of these. She said: not a single person thinks that their setup is 100% [01:12:00] secure. So these are advanced users, and they have no idea if it's secure or not. One OpenClaw expert said he has reviewed setups from cybersecurity experts and laughed.

His statement to me was: if you're not okay with all of your data being leaked onto the internet, you shouldn't use it. It's a black and white decision.

[01:12:19] Mike Kaput: Oh my God.

[01:12:19] Paul Roetzer: So again, as I've said with OpenClaw, I don't mess with this stuff. This is one of those things that just validates why I haven't accepted the risk yet: the experts who know the risks are just laughing and being like, dude, everything's accessible.

[01:12:35] Like, this could all be leaked on the internet, whatever you give it access to. So I'm in no hurry to personally test OpenClaw. I'm happy to read about other people doing it, but it's just one of those things where I will happily show up late, and when it's safe and secure and easy to understand, then I'll dip my toe in.

[01:12:54] But in the meantime, I'll trust Jensen on, on this one.

[01:12:58] Mike Kaput: Right. I wonder if [01:13:00] the litmus test is: has Jensen given OpenClaw access to one of his machines? Maybe. Yeah.

[01:13:07] Paul Roetzer: But he loves all the inference it's using, all the compute it's using to do inference. So keep in mind, when Jensen says something: he sells the chips that are used not only in the training of the models, but in the inference when you and I use them. And things like OpenClaw use a ton of tokens, which draws on compute power from his chips.

[01:13:28] So,

[01:13:28] Mike Kaput: yes.

[01:13:29] Paul Roetzer: So it doesn't mean it's not true. He has a stake in the game, basically, is what I'm saying.

[01:13:35] Mike Kaput: All right.

[01:13:35] AI Art Can’t Be Copyrighted

[01:13:35] Mike Kaput: Next up, the debate over who owns AI-generated art has a new development. The US Supreme Court has officially declined to hear a landmark case on the issue. This case centers on Missouri computer scientist Stephen Thaler, who has been on a years-long legal crusade to secure intellectual property rights for algorithms that he created.

[01:13:55] So this all started when Thaler tried to copyright an AI-generated [01:14:00] image. He was intentionally trying to test the legal limits of our existing copyright system by listing his AI system as the sole author of the work, with zero human creative input. At the time, the US Copyright Office rejected his claim.

[01:14:16] A district court judge upheld that rejection in 2023, famously ruling that human authorship is a bedrock requirement of copyright. After a federal appeals court affirmed the ruling in 2025, Thaler asked the Supreme Court to step in, arguing that denying the copyright creates a chilling effect on anyone else considering using AI creatively.

[01:14:38] So by declining to hear this appeal, the Supreme Court is letting the lower court rulings stand, which basically cements the Copyright Office's current guidance that purely AI-generated art based on text prompts cannot be copyrighted. So Paul, this doesn't solve AI copyright, but it is at least one step forward, [01:15:00] a little bit of clarity, I would guess,

[01:15:02] at least when it comes to AI-generated art specifically.

[01:15:06] Paul Roetzer: Yeah. I don't know. I mean, it's good that there are rulings starting to come. I just expect this is the very front end of this. There are so many unknowns still related to AI and creativity and intellectual property. We had Christa Laser, an IP attorney, create some content and some courses on our AI Academy specific to this, and we've had some of this covered at our summits.

[01:15:26] Samantha Jordan is another IP attorney that we had. Was it the AI for Agencies Summit? I think Samantha presented.

[01:15:32] Mike Kaput: Yes.

[01:15:32] Paul Roetzer: Yes. So this is an ongoing topic that we're constantly paying attention to, and this is one of the rare moments where it seems like something definitive happened.

[01:15:40] Mike Kaput: Yeah.

[01:15:40] Paul Roetzer: But I don't know. I'm no lawyer. I've just learned that it seems like there's never an actual end to these things as long as someone can challenge something. So we'll see. And the other thing I would factor in is the current government

[01:15:54] Mike Kaput: Yeah.

[01:15:55] Paul Roetzer: hates copyright. Yeah. So I don't know what that means.

[01:16:00] They always seem to find a way around laws when they don't like them. So maybe, I don't know where this goes, honestly.

[01:16:08] Meta Sued Over Smartglasses Privacy Concerns

[01:16:08] Mike Kaput: Well, some other legal-related news here. Meta is facing a massive new class action lawsuit over its AI smart glasses. Last year, Meta sold over 7 million pairs of its Ray-Ban smart glasses.

[01:16:21] They heavily marketed these to consumers with promises like, quote, designed for privacy, controlled by you, and, quote, built for your privacy. However, a recent joint investigation by Swedish newspapers exposed a disturbing reality: footage captured by the glasses is routinely routed to a subcontracting firm in Kenya for human review.

[01:16:43] The lawsuit alleges that overseas workers at this firm have been reviewing highly intimate and sensitive footage from users' daily lives, including people using the bathroom, undressing, and having sex. While Meta claims it uses algorithms to blur faces and identifying information before the footage [01:17:00] is reviewed,

[01:17:00] sources dispute that this consistently works. Worse, users cannot opt out of this data pipeline. The lawsuit is being filed by a public interest law firm and charges Meta and its manufacturing partner Luxottica with violating consumer protection and privacy laws, arguing that no reasonable consumer would expect their most private moments to be watched by human contractors.

[01:17:24] How concerned Paul, should people be about this?

[01:17:28] Paul Roetzer: If you didn't know this is how AI training works, I would imagine you're probably very concerned. But this is how AI training works. Humans look at things: conversations, videos, images. They have to have humans do it. So if you thought that everything on your Meta glasses was just for you, welcome to reality.

[01:17:50] Yeah. Like, I don't know. Again, I sometimes have this assumption that people are roughly aware of how [01:18:00] these things work and what is actually private. I would just say it's probably pretty safe to assume that a lot less is private than you think when it comes to all of this stuff.

[01:18:14] And it is disturbing stuff to read, especially if you've never seen how the sausage is made. Is that the saying? I guess.

[01:18:23] Mike Kaput: yeah,

[01:18:24] Paul Roetzer: There you go. Like, yes, humans look at this stuff that you record. They look at your photos, they look at your prompts, even your most intimate things. There's probably a human on the other end seeing that stuff.

[01:18:39] So if you trust the anonymity of that, then, I don't know what to tell you.

[01:18:46] Mike Kaput: And be careful of reading too much into those marketing taglines that Meta's putting out there, it sounds like.

[01:18:51] Paul Roetzer: Yes. Yeah. And again, as I've always said on this, I've never watched an episode of Black Mirror. People find that odd. I don't need to [01:19:00] watch Black Mirror.

[01:19:00] I know how technology works, and I would imagine there's probably a Black Mirror episode for everything. Everybody always says it to me: oh, did you see the Black Mirror episode where they did this? And it's like, no, I didn't, and I won't, because I don't need to see it. But yeah, assume that when you're watching these Black Mirror things and something's super creepy, it's probably actually how it works.

[01:19:21] Microsoft Copilot Cowork

[01:19:21] Mike Kaput: Alright, our last topic this week, which we got just before going on the air. Microsoft just announced a pretty big evolution to its enterprise offering. They're calling this Copilot Cowork, which is designed to help Microsoft Copilot take action by completing tasks and running full workflows on your behalf.

[01:19:38] So instead of just answering a question or drafting an email, Cowork acts more like an autonomous agent. You describe the outcome you want, and it builds a plan that continues in the background with clear checkpoints so you can confirm its progress. This is powered by a system called Work IQ. Using this system, Copilot Cowork draws on signals across Outlook, Teams, Excel, [01:20:00] and the rest of Microsoft 365.

[01:20:02] So it can ground its work in your specific emails, messages, and data. And Microsoft has highlighted several big use cases for this. For example, you can hand over calendar triage to Cowork. It will automatically review your Outlook schedule, flag conflicts or low-value meetings, and propose changes like declining meetings and adding protected focus blocks.

[01:20:22] You could also ask it, for instance, to prep for a client meeting, and it will automatically pull relevant inputs from your emails and files to generate a briefing document, supporting analysis, and even a shareable slide deck. The system is designed to keep you in the loop, though, by checking in if it needs clarification and allowing you to approve recommended actions before they are applied.
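For the technically curious, the plan, checkpoint, and approve flow described above is a common agent design pattern. Here is a minimal, hypothetical sketch of that human-in-the-loop loop in code. This is not Microsoft's actual Cowork API; the class, method names, and the hardcoded plan are all invented for illustration.

```python
# Hypothetical sketch of an agent that builds a plan for a goal, then
# pauses at a checkpoint before applying each step, so a human can
# approve or reject it. Invented names; not a real Copilot Cowork API.
from dataclasses import dataclass, field

@dataclass
class CheckpointAgent:
    applied: list = field(default_factory=list)

    def plan(self, goal: str):
        # A real agent would generate this plan with a model. We hardcode
        # the calendar-triage example from the show for illustration.
        return [
            "flag conflicting meetings",
            "decline low-value meetings",
            "add protected focus blocks",
        ]

    def run(self, goal: str, approve):
        """Execute the plan, calling approve(step) before each action."""
        for step in self.plan(goal):
            if approve(step):          # the human checkpoint
                self.applied.append(step)
        return self.applied

agent = CheckpointAgent()
# Approve everything except letting it decline meetings on our behalf:
result = agent.run("triage my calendar",
                   approve=lambda step: "decline" not in step)
print(result)
```

The key design choice is that the agent never applies a step directly; every action routes through the `approve` callback, which is where a UI would surface the "confirm its progress" checkpoints Mike describes.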

[01:20:44] So Copilot Cowork is currently being tested in a limited research preview and will roll out more broadly in late March 2026. Paul, it's a pretty big deal, but not unexpected. We've been waiting for basically every other [01:21:00] lab to create the Claude Cowork version of their particular offering, haven't we?

[01:21:04] Paul Roetzer: It seems like the most obvious thing, and everybody seems at least six weeks behind with Claude.

[01:21:09] So yeah, it'll be a big deal, you know, when it becomes functional for the average user. On Claude Cowork, we actually just had a Gen AI App Review come out on Friday from Katie Robert, who's an AI Academy instructor for SmarterX, and she did a 20-minute demo on Claude Cowork. It's one of those tools that just seems extremely valuable in a work environment.

[01:21:34] I think it'll be quite a while before we see enterprises really gravitating towards these things, just given how slow people are to just use Gen AI apps, period. But these are the kinds of things that could start to have a compounding effect within organizations on value creation when they become more readily available to everybody.

[01:21:54] Mike Kaput: Yeah. I would imagine, especially given the enterprise focus of Microsoft Copilot and how slowly some [01:22:00] of these companies can move, if this works anything like as well as Claude Cowork does, and people actually start turning it on and implementing it, it's gonna be maybe a pretty big wake-up call for your average knowledge worker who might not be as on top of this stuff.

[01:22:16] Right? Where you're like, maybe this is the moment we talked about. Instead of the data or the benchmarks, it's, wait a second, it can just do work for me.

[01:22:24] Paul Roetzer: Right. And just point it to a folder and it can just do stuff. Yeah. And it may actually be a key aspect of solving the lack of adoption that we talked about earlier, and the need for all these consulting firms.

[01:22:33] It's like, maybe they just need Cowork. Make it easy for people to use.

[01:22:37] Mike Kaput: Yep. All right, so that's a wrap for this week, Paul. Just one quick announcement: we are of course running this week's AI Pulse survey. We're gonna ask: has the Anthropic and Pentagon situation changed how you think about which AI company you use? And also, how would you describe AI access at your organization right now?

[01:22:56] So if you go to SmarterX dot ai slash pulse, you'll be [01:23:00] able to take this week's survey. We'd love to hear from you on this.

[01:23:04] Paul Roetzer: So far, on number one, I would say it's changing some people's minds, because Anthropic was reported to be hitting a $19 billion annual run rate right now.

[01:23:15] Mike Kaput: Yep.

[01:23:15] Paul Roetzer: And at last check, it was still the number one app in the App Store. So somebody's switching to Anthropic.

[01:23:21] Mike Kaput: Somebody, yeah. Like, millions of people might be switching to Anthropic. It'll be interesting to see if companies follow. I've heard some rumblings from people that their companies may actually be switching based on this, but I don't know.

[01:23:33] It'll be interesting to see if that actually happens.

[01:23:35] Paul Roetzer: Yeah, we'll see. Everything ebbs and flows.

[01:23:38] Mike Kaput: For sure. Well, Paul, you know, even though we said it's kind of a lighter week, we had plenty to talk about this week.

[01:23:43] Paul Roetzer: Heavy topics. Yeah. Like breaking news.

[01:23:47] Mike Kaput: Well, I appreciate you breaking down everything for us, as always. Thank you.

[01:23:51] And, until next time,

[01:23:52] Paul Roetzer: Yeah, and I think we will have a second podcast this week, Mike. It looks like we've got an AI Answers: AI for Departments special. We're gonna go through marketing, sales, and customer success questions from our departments webinar week. So it looks like we're gonna have an [01:24:00] episode dropping on Thursday as well.

[01:24:11] So you can join us twice this week, and then we'll be back for the regular weekly on St. Patrick's Day, March 17th. Alright, Mike, thanks again. Talk to everyone next week, or Thursday. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX dot AI to continue on your AI learning journey, and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events,

[01:24:43] taken online AI courses and earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community. Until next time, stay curious and explore AI.