Five companies are about to decide the future of the economy, geopolitics, and your career...and two of them have been at odds since a San Francisco group house in 2016.
This week, Paul and Mike unpack a bombshell Wall Street Journal investigation tracing the deeply personal OpenAI vs. Anthropic rivalry back nearly a decade, revealing the grudges, broken promises, and power plays that are now shaping the entire AI industry. They also break down the impact of Anthropic's accidentally leaked "Mythos" model, why OpenAI is renaming its product org "AGI Deployment," what Uber's CEO admitted about AI replacing human work, and why nearly $300 million in dark money is flooding into AI-focused political groups ahead of the midterms.
Listen or watch below and see the show notes and transcript that follow.
Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI.
If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.
Click here to take this week's AI Pulse.
00:00:00 — Intro
00:03:15 — AI Pulse Survey Results
00:05:30 — OpenAI vs. Anthropic
00:26:02 — Details Leak on Anthropic’s New Hyper-Powerful Model
00:35:34 — Brutally Honest CEO Perspectives on AI
00:46:38 — State of AI Business Survey
00:48:29 — Anthropic Granted Preliminary Injunction in Fight Against the Pentagon
00:51:06 — This Week in AI Politics
00:56:55 — AI Agent Nightmares
01:02:15 — Apple's AI Reboot
01:05:21 — SmarterX Use Case Spotlight
01:17:53 — AI Academy Spotlight
01:22:59 — AI Product and Funding Updates
This episode is brought to you by AI Academy by SmarterX.
AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. Learn more here.
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: It's just this like completely wild, unknown world we're heading into where basically these five companies are gonna decide everything when it comes to the economy, business, geopolitics. Welcome to The Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.
[00:00:22] My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week, I'm joined by my co-host and SmarterX Chief Content Officer, Mike Kaput. As we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career, join us as we accelerate AI literacy for all.
[00:00:50] Welcome to episode 207 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording on Monday, March 30th, [00:01:00] 2026, right before 10:00 AM Eastern time. I don't know if we're getting new models this week, but there's a lot of chatter going on about what's coming up from all the labs, Mike.
[00:01:11] So I would say this episode, we're gonna be setting the stage for what I think is gonna be a pretty busy spring. And in some ways we might see some pretty rapid advancements, I would say, from the models, and these labs are pushing out a lot of stuff. So we're gonna try and provide the context of what's going on and help people sort of frame what it means for what they've got going on in their careers and their businesses.
[00:01:40] And it's just, yeah, try and connect some dots. There's a lot happening. And as we were getting ready for this show, even just like two minutes before we came on, Mike and I were like, oh, wait a second, didn't this happen in 2024? And so we're gonna do our best to provide a little historical context as to what's happening.
[00:01:55] Alright, so this episode is brought to us by AI Academy, by SmarterX, [00:02:00] which helps individuals and businesses accelerate their AI literacy and transformation through personalized learning journeys and an AI powered learning platform. New educational content is added weekly, so you always stay up to date with the latest AI trends and technologies.
[00:02:15] The AI for Departments collection features five core series and certificates designed to jumpstart AI understanding and adoption. We have AI for marketing, AI for sales, AI for customer success, AI for HR, and AI for finance. And Mike is wrapping up AI for operations this week, right? So that one's gonna be coming soon.
[00:02:34] Well, fingers crossed as well. Yeah. All right, so we've got five already ready to go. If you join AI Academy or if you're already a member, those are all in there already, and operations is coming soon. So tell your peers in your organization if they're trying to figure this out. There's a department series for them.
[00:02:50] So these series are an ideal launchpad for organizations that want to level up their teams and accelerate AI adoption and impact. Mike teaches the AI for Sales series and is [00:03:00] going to be sharing some insights toward the end of today's episode, some takeaways from that series. So individual and business account plans are available now, or you can buy single courses and series for one-time fees.
[00:03:11] Visit academy.SmarterX.ai to learn more.
[00:03:15] Paul Roetzer: Alright, and then each episode, if you're new to this again, every week, we know we're getting lots of new listeners, so we'll give you a little rundown of how this works. We go through what we call our AI pulse, where we take an informal poll each week of our listeners on how they feel about topics we talk about in that episode.
[00:03:30] And then we'll go through three main topics and then rapid-fire items. So these questions are from episode 205, last week's episode, 'cause then we had an AI Answers episode, that was episode 206. So if you didn't catch that last Thursday, we dropped an AI Answers episode. So the first question was: OpenAI is building an enterprise deployment arm with private equity backing.
[00:03:51] What's your reaction? And this one, Mike, looks like a perfectly split pie, basically.
[00:03:56] Mike Kaput: Oh, yep.
[00:03:57] Paul Roetzer: So 25% say smart move. AI companies [00:04:00] need distribution, not just models. 26% said, I don't have an opinion. 28% said inevitable. Every AI company will do this within a year. And then 20% said it's concerning. It blurs the line between AI vendor and consulting firm.
[00:04:14] And then the second question we had was: Anthropic's 81,000-person study found the number one fear is hallucinations, not job loss. Does that match your experience? 43%, the largest percentage, said no, job displacement is still my top concern. 34% said yes, reliability is the biggest barrier to trusting AI at work.
[00:04:36] And then we had 13% say neither, my biggest concern is something else entirely, and 9% say, I'm not particularly worried about AI risks right now. That's interesting. And then we did ask one more: How many AI tools does your organization officially approve for employee use? 45% said one to two tools. 34% said three to five.
[00:04:58] Only 15% said [00:05:00] six or more. And then there was a small sliver that said none, AI is blocked or not addressed.
[00:05:05] Mike Kaput: Right?
[00:05:07] Paul Roetzer: Which, I would imagine, if you're listening to this show and you work for a company that's blocking everything, there's a decent chance you might not be at that company very long. You might be looking for a new career opportunity where you get to apply everything you're learning in AI.
[00:05:19] Alright, so that's the AI Pulse. We'll give you the new questions at the end, but it's just SmarterX.ai/pulse to participate in those each week.
[00:05:31] Paul Roetzer: Alright, so we are gonna start off today with our top story, which kind of spun out of OpenAI canceling their Sora app, the individual app.
[00:05:41] And then we zoomed out and said, okay, like let's talk about the bigger thing going on. 'cause we touched a little bit on this last week, Mike, about how OpenAI was refocusing their efforts. And I think we're starting to get a little bit more sense of why that's happening and kind of where this is going.
[00:05:56] And we wanted to frame it within the OpenAI versus [00:06:00] Anthropic topic. So let's kick things off there.
[00:06:02] Mike Kaput: Yeah, Paul. So we had this week a major Wall Street Journal investigation that actually traces this OpenAI-Anthropic rivalry way, way back, almost a decade ago, to a San Francisco group house that multiple players were living in in 2016.
[00:06:21] And it reveals that this feud, and it very much is a feud, is shaping the future of AI, and is as much about personal wounds and power struggles as it is about kind of these bigger-picture topics of philosophy or safety. So this piece from the Wall Street Journal is based on interviews with current and former employees at both companies.
There are a ton of details in here that were previously never actually reported. And Paul, you're gonna kind of dive into a lot of these moments more in depth. I'll kind of give a very surface-level view of some of the things they pointed out that started this rivalry between OpenAI and Anthropic.
[00:06:56] So tensions actually started very [00:07:00] early. After Dario Amodei, before he was CEO of Anthropic, joined OpenAI in 2016, he watched Elon Musk very quickly thereafter order layoffs in ways that he considered needlessly cruel. He also watched Greg Brockman, of all people, float the idea of selling AGI early on to the nuclear powers on the UN Security Council, as they're all kind of projecting out where is AI gonna go and what should we do about it.
[00:07:28] And Dario, as early as 2016, 2017, started considering that kind of proposal tantamount to treason, and nearly quit over it early on in his tenure at OpenAI. So when Sam Altman took over OpenAI after Musk exited in 2018, things apparently got more complicated. Altman made Dario a promise that Brockman and Ilya Sutskever would not be in charge, and then turned around and made conflicting promises to Ilya and Greg.
As [00:08:00] research into GPT took off, Dario blocked Brockman from working on the language model project. Daniela Amodei, Dario's sister, who was co-leading that project, offered to step down rather than let Brockman join. And apparently by 2020, relations had deteriorated to the point where Altman accused the Amodeis of plotting against him to the board.
[00:08:22] This all culminated in late 2020. Dario, Daniela, and nearly a dozen employees left to found Anthropic. Before leaving, Dario wrote a memo arguing the ideal AI company would be 75% public good and 25% good for the market. Now, five years later, both these companies are valued at hundreds of billions of dollars and racing towards an IPO.
[00:08:44] Now, one of the reasons we mention this is because in recent months, Amodei has escalated the conflict sharply. He compared the Altman and Musk legal battle to basically Hitler and Stalin fighting. He called Brockman's $25 million [00:09:00] pro-Trump super PAC donation, which you might talk a little bit more about later,
[00:09:03] he called that just straight-up evil, and he likened OpenAI to a tobacco company. Now, this is all happening as there are some very real competitive pressures reshaping both companies. So this week, Paul, like you mentioned, OpenAI shut down its Sora video app, which was burning at one point a million dollars per day and had dropped to just under half a million users.
[00:09:28] Fidji Simo, the head of applications at OpenAI, described Anthropic's gains in the enterprise market recently as a wake-up call and told staff the company cannot miss the moment because we are distracted by side projects. So Paul, the Wall Street Journal publishes this deep dive, and there's a lot of personal drama here.
[00:09:50] How much of this is just personal versus the bigger picture? Philosophical.
[00:09:55] Paul Roetzer: It definitely seems like there's just a lot of residual bad feelings, [00:10:00] I would say. So, you know, again, if you're relatively new to all of this, even if you've just listened to the podcast for the last four years, you've sort of heard this story unfold.
[00:10:10] Now, as Mike said, there are details within this that we didn't previously know. A lot of these elements, though, were relatively known, certainly the friction between them, but how it all kind of came to be, this is the most detailed unfolding of events that I've seen. The reason we wanna talk about it is because it's so relevant to all the other things that are going on right now. You have this battle over government contracts, where we've got Anthropic being designated a supply chain risk, and we'll talk about this in a couple topics here, but, you know, the judge sort of put an injunction in place to not allow that, but OpenAI steps in the day they're getting blackballed and is like, hey, we'll take the contracts.
[00:10:51] And so for Dario, this is just like daggers, basically. They have this long history, they're both racing to [00:11:00] IPO this year, they're both trying to beat each other to the market, basically, they're both being funded by a lot of the same people and companies, and they're now in a battle for the enterprise, where every day I'm talking to companies and leaders at companies who are moving to Anthropic.
[00:11:16] Like, it is a very, very common recurring theme I'm hearing. And so there's just a massive amount going on. So when you look back in retrospect, November 2015 is when OpenAI is created. So if you've been with us for a long time or you've followed the space, it was created intentionally as a counterbalance to Google.
[00:11:35] So in the early days it was Musk and Altman and Ilya Sutskever and Greg Brockman. And they wanted to be the alternative to Google, which they considered basically like the evil empire, and they didn't want them to get to AGI first. So they created this nonprofit to do this research out in the open. It quickly becomes not a nonprofit,
[00:11:53] which creates the friction between Musk and Altman that we're still seeing play out, that will go to trial, I think, in April. [00:12:00] It's like coming up fast. And so there's just all this drama going on, but the way that the Wall Street Journal tells this, a lot of it does stem from Dario Amodei not getting the kind of credit he thought he deserved for his contributions to
[00:12:15] really the whole transformational phase we find ourselves in with language models. So it talks about, you know, again, they founded it November 2015. Brockman tries to get Dario and Daniela to come join them. Greg is hanging out at the house, like this group house. They've got Greg, and Daniela, if I'm not mistaken, worked at Stripe together,
[00:12:37] 'cause Greg was the chief technology officer of Stripe and Daniela was an executive at Stripe. So I'm guessing that's probably how they got to know each other, or certainly that was going on around that same time. And then Dario was working as an AI researcher at Google. So 2016, Greg's trying to get them.
[00:12:56] Eventually they come over. They don't, you know, agree to come on as [00:13:00] founders, but they come over pretty soon thereafter, 'cause Greg's hanging out with everybody. And then there's one other name, we've probably mentioned this name, Mike, but I don't remember talking about it in great detail.
[00:13:11] So, Holden Karnofsky. This is probably an important element to this story. So Karnofsky was the founder of a philanthropy that promoted effective altruism, which is the antithesis of techno-optimism. So you have the Silicon Valley venture capital world that is pushing for acceleration at all costs, and you have effective altruism, which is kind of seen as the counterbalance to that.
[00:13:35] So Karnofsky, who is Daniela's fiancé, is a major player in this, and Brockman actually starts to take an interest in some of the ideas behind the effective altruism movement. And so they start having all these debates. Like, in 2016, they're having these debates around, okay, well, if we do end up building AGI, if it goes this direction, who should we be telling?
[00:13:59] Should we be telling [00:14:00] Americans about this, you know, 300 million people, that, hey, it's coming for your jobs? Or should we go talk to the government first? And so Dario argued that when it came to sensitive topics, like how fast AI was developing, it was actually better to go to the government first.
[00:14:13] So then by mid-2016, Dario joins the lab. He's up working late with Brockman. They're actually working on AI agents at that time; they're looking at video games and other things. This is when, you know, Musk is really heavily involved in OpenAI. Altman is not the CEO yet; like, they're just kind of building this nonprofit.
[00:14:31] Ilya is playing a major role in the research direction of the company. And then this is when the layoffs happen, you know, led by Musk, sort of, like, you know, consolidating things. At this time, fall 2017, Dario actually brings in an ethics and policy advisor, and they're talking about sort of what's going on with the future research direction and the impact it could have and the need to get the government involved.
[00:14:56] And this is when Brockman, you know, within the presentation, sees the [00:15:00] fundraising idea that OpenAI sell AGI to governments, including China and Russia. And Dario's like, this is treason, like, what are you talking about? So it starts to create all this friction. Then Musk exits in 2018. So now we've got the blowup between Musk and Altman that leads to what, you know, today is now going to trial.
[00:15:18] Altman steps in, takes over as CEO. They start really going down this path of, you know, the for-profit ideas. Karnofsky has since married Daniela, and he's actually on the OpenAI board. And then tensions really start to flare when OpenAI researcher Alec Radford, who we haven't talked a ton about, we probably should have.
[00:15:39] Mike, in retrospect, this is a name that matters. He had laid the groundwork individually for these large language models. So he was playing around with this stuff, building off of the Google paper about transformers, and he's developing generative pre-trained transformers, or GPTs. So they start seeing the language model direction, like, wow, this might [00:16:00] be something, starting in 2018, 2019.
[00:16:02] And so Brockman wants a piece of this. And Dario, who was research director at the time, is like, no way, don't let Greg anywhere near this. And Daniela, actually, who's co-leading the language model project with Radford, so Daniela Amodei, she tells Brockman, you cannot work on this. And she offered to step down as head of the project rather than allow Brockman on it.
[00:16:24] So you start to see the friction here is Brockman, like, over and over and over again. You hear this throughout these issues through the years. So when Brockman said that he and Altman were going to meet with former President Barack Obama, they're now in like the GPT-2, GPT-3 range.
[00:16:39] Dario is now playing a major role in the development of this and the scaling laws and all these things. And Dario gets cut out of a meeting with the president. And so now he's pissed, like, why am I not involved in this? That's when he gets a promotion. Altman does the thing, like, you know, we said, like, all right, you know, Brockman and Sutskever won't lead this.
[00:16:57] Like, you know, and so [00:17:00] eventually Dario is like, you know, he wants to leave, and he's like, I wanna report directly to the board or nothing. Like, I'm either out, or I'm reporting to the board; I want nothing more to do with all this drama. He's seeing the difference between, like, market companies and public-good companies.
[00:17:12] He's thinking they need to go in the direction of the public good. So it's just like it becomes this wild unraveling, and that eventually leads to them leaving. And then the thing I referenced earlier, Mike, that you and I were talking about right before we jumped on, is Brockman's role in all this. So if we go back to episodes of our podcast, episode 110 in August 2024, 117 in October 2024, and then 124 in November of 2024.
[00:17:39] We tell the story of Brockman taking a leave. And what we eventually find, at first it was just that he needed a break 'cause he hadn't had one in nine years. And then it came out on episode 117 that the Wall Street Journal revealed the sabbatical was actually a mutual agreement between Brockman and Altman, stemming from internal friction about Brockman's management style.
[00:17:59] [00:18:00] Now, just for a frame, that 2024 time period: September 2024 is when the o1 reasoning model comes out. So Brockman takes his leave in August; a month later, the breakthrough is released that they had been working on for a while, called Project Strawberry, which was the first reasoning model. Mira Murati, who's the CTO at the time, right, Mike?
[00:18:22]
[00:18:23] Paul Roetzer: She then leaves, like, the week that they announced the reasoning model, and then Greg comes back. So it's just wild drama, but it's all tied to what we're seeing play out today and the friction that exists between all these labs. They all know each other. Like, they all came up together, they were all working in the same direction, and then it just kind of started going in these different areas.
[00:18:48] The couple of notes I just wanna make here, just on the context of what's happening, and it sort of leads into our next topic, Mike, is each of these labs is working on what I would call these dimensions of AI progress. If you've ever heard me give, you know, my [00:19:00] State of AI for Business keynote, or sometimes I'll work this into my intro class, there's these different dimensions that the different labs are pushing on, and a few of the real important ones: AGI and agentic, which we're obviously hearing a ton about.
[00:19:12] I'm gonna drop a link in the show notes to a Lex Fridman podcast I actually just listened to yesterday; it's three hours long. Luckily I had to clean my garage out yesterday, so I had three hours to listen, but it's an interview with Peter Steinberger, who created OpenClaw. It's fascinating.
[00:19:27] So if you wanna understand, like, the moment we're in and what's happening with the AGI agentic stuff and how these labs are so bullish now, you listen to this whole thing, it's wild. So there's AGI and agentic in that realm. There's something called computer use, which allows the agents to use your computer.
[00:19:44] Continual learning is a big one. Memory is a really big one. Reasoning. Maybe the most important one is recursive self-improvement. It's this idea that as these labs automate AI researchers, those AI researcher agents [00:20:00] can then work 24/7. And they think from there we get to this recursive self-improvement moment where the labs, or the models, can actually continually improve themselves without human insight and oversight.
[00:20:10] And that then leads to, like, the fast takeoff moment. And then world models is another one we've been talking a lot about; like, Fei-Fei Li, Yann LeCun, DeepMind, they're all working on these things. And so where this leads us is you still have these, like, five frontier labs. So when you think about what makes a frontier lab: they need funding.
[00:20:28] They need data centers, they need energy infrastructure, they need compute capacity, like Nvidia chips, and they need the most powerful models. And so your tier one labs today are Google DeepMind, led by Demis Hassabis; OpenAI, led by Sam Altman; and Anthropic, led by Dario Amodei. Those are the three that matter the most at the moment.
[00:20:47] Your tier ones. Then what I would consider, like, tier twos would be Meta with Zuckerberg, and they're kind of the wild card. They've fallen off for the last 12 months. Maybe they get back in the game. And then you have xAI, led by Elon Musk, which will go [00:21:00] public as part of the SpaceX IPO later this year.
[00:21:03] And they're not relevant in enterprise right now, in business, but who knows where that goes. And then tier three is, like, maybe at some point Microsoft gets out of their own way and they figure this all out. But generally speaking, you have three major labs, and two of them are at war with each other.
[00:21:19] And then when you go into the tier two, the xAI, they're suing OpenAI. So it is just this, like, completely wild, unknown world we're heading into, where basically these five companies are gonna decide everything when it comes to the economy, business, geopolitics. And there's obviously labs overseas, especially in China, that, you know, should be part of the conversation, but I'm talking specifically about American AI labs. And so knowing these backstories and knowing these characters is actually extremely important, because, you know, even, I don't wanna make this political, Mike, but when you look at that list I just gave you: DeepMind, Google, they somehow managed to stay politically [00:22:00] neutral.
[00:22:00] You know, we'll talk about how Sergey's actually on this new council Trump's created. But overall, like, Google is trying to just play in the middle, because they know governments change and, like, they gotta be in the game no matter who's in office. And so they're just gonna play the game. OpenAI historically has been similar, but then Brockman shows up and gives, you know, 25 million or 50 million or whatever to the super PAC and becomes the biggest donor to Trump.
[00:22:27] So now, whether OpenAI wants to be perceived in that way or not, there's no avoiding the fact that their president is, like, the largest donor to the current administration.
[00:22:38] Mike Kaput: Mm.
[00:22:38] Paul Roetzer: Then you have Anthropic, who is, like, the enemy of the administration right now, and they're very much, like, left of center at this point.
[00:22:44] They're trying to, like, play the game. They're embedded in the tech, you know, the administration, everything they're doing, but they don't really believe in a lot of those things. And then you go to tier two, and Meta and xAI are, like, a hundred percent in with the Trump administration.
[00:23:00] Now, the reason I bring that up is because politics sways. And so, like, what happens if in two years, or even, hell, during the midterms,
[00:23:08] Mike Kaput: If
[00:23:08] Paul Roetzer: the power switches and then like the companies that have gone all in on one side or the other, what happens to those labs if all of a sudden the government doesn't award contracts to them and you see what's happening to philanthropic, what if it flips and somebody does the same thing to meta or X ai?
[00:23:25] Hmm. Then the only ones that are left are, like, the politically neutral ones that are just trying to make the world better, hopefully. So there's just so many layers to this, and I think for people who care deeply about this, and especially the downstream effects on the economy and the environment and things like that, it's really important to understand who's building this tech and what their goals are for it.
[00:23:46] And so you can kind of, like, pick and choose who you're cheering on, and, like, who you're following and who your company's investing in, or whose technology you're using. Like, it's not a binary [00:24:00] decision. Like, there are lots and lots of layers to all of this. So I'll stop there, Mike.
[00:24:04] I mean, I could honestly spend the whole episode just talking about this stuff. I just think it's really important for people to understand the complexity of what's happening.
[00:24:12] Mike Kaput: Yeah. Hopefully someone's writing a book about this whole background and history between these people, because one thing that just jumped out to me, and then we can kind of move on, is whether you agree with the AI hype or not,
all of these people have been taking seriously the prospect of AGI, or something beyond it, for 10 years. Ten years ago, before they even had a company, before they even had a business model or anything, they were taking seriously who should have control of this technology, who should be notified of what and when.
[00:24:47] And now we're starting to see some of the fruits of that labor play out, where we're suddenly in this mode where people are starting to seriously worry about how powerful the technology is. So yeah, who is behind the decisions really does [00:25:00] matter.
[00:25:00] Paul Roetzer: I will say, when you listen to the Fridman podcast with Peter Steinberger, it sure as hell sounds like he's selling to Meta.
[00:25:07] So, I mean, I was actually shocked he talked as much as he did, because Lex asked him point blank, like, everybody's gotta be coming after you; like, what are you gonna do? And he's like, man, I've scaled a company before with VC money; I don't wanna do that again. Like, I wanna just build some stuff.
[00:25:21] And he's like, well, who are you talking to? And he goes, well, you know, I've had great conversations with both Sam and Mark, and both have some positives, but I'm kind of excited about the idea of going and working at a big lab and just getting to build some stuff and having unlimited GPUs to access.
[00:25:33] And then he's like, yeah, Zuckerberg was just playing with OpenClaw, building stuff, and messaged me on WhatsApp, and next thing I know, it's like, yeah, I'd like to jump on a call in 10 minutes. It sure sounds like OpenClaw is gonna get acquired by Zuckerberg for billions of dollars, and Steinberger is gonna go there and become part of that superintelligence lab.
[00:25:53] I can't see an alternative at this point, unless Sam pulls a rabbit outta the hat and, like, convinces him to come to [00:26:00] OpenAI. It seems like it's one of those two places right now.
[00:26:02] Mike Kaput: Alright, so next up, we actually found out that Anthropic accidentally exposed details of an unreleased model, nicknamed Claude Mythos, through an unsecured content management system.
[00:26:15] This is being reported as a Fortune exclusive. Roughly 3,000 unpublished assets were apparently accessible for a time to anyone, without authentication, from Anthropic's website, even things that weren't published. These included draft blog posts, internal images, and documents about this unreleased model, and also about an invite-only CEO retreat in the UK that Anthropic was running.
That was not public knowledge. The leaked drafts of this material described Mythos, this new model, as a new tier above Opus, one which Anthropic says is larger and more intelligent than the Opus models and has dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity.
[00:26:58] Now, after [00:27:00] Fortune asked about this, Anthropic confirmed the model is real. They called it a step change over previous models and the most capable they've built to date. They also state Mythos is currently far ahead of any other AI model in cyber capabilities, and warn that it presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.
[00:27:24] So Anthropic actually plans to release it first to cyber defense organizations before making it more broadly available. Anthropic overall blames the leak here on human error in their CMS configuration, unrelated to their AI tools having vulnerabilities, according to them. Though it is important to note, their entire brand is built on being the responsible alternative, and here details are leaking out of this thing at the same time.
[00:27:51] OpenAI says it has finished pre-training its next major model, code-named, for the moment, Spud, and Altman told staff he [00:28:00] expects a very strong model within weeks, one that he said can really accelerate the economy. Paul, two big models incoming.
[00:28:09] Paul Roetzer: I don't think that means create more jobs.
[00:28:11] Mike Kaput: Yeah, that might be a very intentionally worded way of saying that: really accelerate the economy.
[00:28:18] So we've got maybe in the next few weeks, two huge models. Clearly at least one of them is a bit dangerous when it comes to cybersecurity. When do you expect these to drop?
[00:28:29] Paul Roetzer: Yeah, I mean, who knows. Things change when they're going through, like the red teaming to make sure they're safe. It sounds like Anthropic in particular actually already has it in the hands of some beta users.
[00:28:38] So part of it depends on that feedback loop and you know, when they're ready. But my guess is if they've got stuff queued up in a CMS that's unsecured,
[00:28:46] Mike Kaput: it's ready, you're ready to go to market. So these things probably finished training months ago, and they've been in post-training and red teaming, getting them ready.
[00:28:55] But again, this is why we always say, like, you [00:29:00] can't make plans based on your current experience with these models. There is always a more powerful model in training. The labs have already seen six to 12 months ahead of what you know to be true about reality. And so they know roughly what the capabilities are, and they're probably just trying to make them safe at this point.
[00:29:22] So, I don't know, this Anthropic story's crazy. Like, first, I feel for the marketing team, or whoever owns keeping this stuff in the CMS, possibly the ex-marketing team at this stage.
[00:29:32] Paul Roetzer: Yeah, I would imagine somebody lost their job over that. And I'm just speaking from experience, Mike, like, you know, we ran a marketing agency.
[00:29:39] Like, I can't even fathom being the person that allowed that to happen. So yeah, part of this is a story about accidental disclosure, kind of a warning, I guess, for other people. Like, think about this stuff. Part of it is about how much easier discovery of this sort of thing is going to be with agents, where, like, if you're a competitor, or if you're
[00:29:59] into [00:30:00] more of the black hat kind of stuff and you're trying to find vulnerabilities and exposures and things like that, you just run your agents 24/7 and go look for this kind of stuff. And then most importantly is this idea that there's gonna be a leap in model capability soon. So the exposure was 3,000 assets linked to this blog, which is crazy.
[00:30:20] The part that I found really interesting, Mike, is Fortune informed Anthropic. So Fortune finds this, they actually bring in cybersecurity researchers to assess it, but then they alert Anthropic to the fact that it's all there. And they, to my knowledge, have yet to publish any of the information other than, like, broadly saying a new model is coming and there's a CEO retreat.
[00:30:43] They didn't publish the blog posts. Like, they have access to all this stuff, images, documents, blog posts, and for whatever reason Fortune chose not to release the information. My guess is there's probably a quid pro quo here of, like, hmm, we will give you an exclusive [00:31:00] on whatever in the future, like, don't release it.
[00:31:03] And in exchange, you're gonna get, like, a first look at the actual Mythos. I don't know, like,
[00:31:08] Mike Kaput: yeah,
[00:31:08] Paul Roetzer: media relations works in funny ways, but there's gotta be some reason they did not do this. So yeah, it's just kind of wild. The bigger models worry me. I mean, they talk specifically about reasoning, coding, and cybersecurity.
[00:31:25] None of this is new. We've known all the models are getting better at these things, but just the fact of how unprepared people are for what already exists, and knowing we're very close to these, like, next-level models, is worrying. Wall Street reacted not great. So cybersecurity stocks, and it feels like Anthropic tanks the market, like, once a week, like, pick a category, so cybersecurity stocks slumped based on the news.
[00:31:49] We had CrowdStrike, Palo Alto Networks, and Zscaler drop about 6% each that day. SentinelOne tumbled 6%, while Okta and Netskope each fell more than [00:32:00] 7%. Tenable plummeted 9%. That was just on Friday. Like, just the idea that a new model is coming that's better in cybersecurity. Which is funny, because we knew this. Like, it wasn't like this was, oh wow,
[00:32:12] they figured out how, you know, to cause flaws in cybersecurity. It's like, we've predicted this for two years.
[00:32:18] Mike Kaput: Yeah.
[00:32:18] Paul Roetzer: But anyway, it's how Wall Street works. And then the Spud one, you know, we touched on last week, but it's, you know, the idea that Altman said the company would be renaming Fidji Simo's product organization to AGI Deployment.
[00:32:31] Like, we are entering the phase where they truly all think we are approaching whatever you want to call AGI. Like, we are there. And that led me to go back, Mike, to the stages of AI that we've talked about many times on the podcast. But again, I know we have new listeners every week, so I think it's good to frame this.
[00:32:50] So back in July 2024, Rachel Metz at Bloomberg did a story on OpenAI's scale that ranks progress toward human-level problem solving. [00:33:00] And in that, she had gotten access to OpenAI's internal stages, which they later confirmed were in fact true. And so they came up with these five levels to track progress toward building artificial intelligence capable of outperforming humans,
[00:33:15] the company believed at that time. So this gives you a sense of how fast we've moved in a year and a half. OpenAI executives told employees at that time that they thought they were at level one, which was chatbots, AI with conversational language. So summer of 2024, less than two years ago. But according to a spokesperson at that time, they were on the cusp of reaching the second level, which it calls reasoners.
[00:33:39] So level two is reasoners, which is human-level problem solving. That goes back to what we talked about in the first topic, which was September 2024. So a couple months after this comes out, we get our first reasoning model. So in a meeting that summer, they actually previewed the o1 model that would then be released in September, right [00:34:00] when Greg Brockman was on his quote-unquote leave
[00:34:02] and Murati was peacing out of OpenAI to go start her own lab. So level one, chatbots. Level two, reasoners. Level three, agents, systems that can take actions, which we are right in the midst of takeoff with. Level four are innovators, AI that can aid in invention, which we are seeing early signs of.
[00:34:20] And level five, which is why OpenClaw becomes so critical to this whole conversation, are organizations, or AI that can do the work of an organization. And that was a topic I didn't want to have to get into in 2026. Like, my hope was that we would have another year or two of runway before we were talking about level five being within reach.
[00:34:42] But I do think that throughout this year we're gonna have a lot more conversations around entering the early phases of level five. Innovators, I think we will clearly be at that stage by this fall. I think you can make an argument we're kind of already there in some [00:35:00] disciplines, but I think across most industries, we will be clearly into level four by the end of this year.
[00:35:06] And I do think that in some industries you'll be seeing very early signs of level five. And I honestly don't know, I would guess they probably have a level six internally. I don't know what it is. But just so people understand how fast we went from level one to emerging into level four in, basically, what is that, 20 months?
[00:35:29] Mike Kaput: That is a way shorter timeline than I would've thought.
[00:35:33] Paul Roetzer: Yeah.
[00:35:34] Mike Kaput: Alright. Our third big main topic this week is about a couple different comments from some CEOs that are a bit brutally honest, so to speak, about AI and its impact. So the first one comes from Uber CEO Dara Khosrowshahi, who broke what amounts to an unwritten rule in tech.
[00:35:53] This week he did an interview on the very popular podcast Diary of A CEO and he said he has personally [00:36:00] heard executives privately admit the true scale of AI disruption and then watch those same people go on TV and tell audiences that everything will work out fine, which is something we have talked about on this podcast.
[00:36:12] He said that he understands why they do it, because being honest about job displacement scares investors and dries up fundraising. However, he estimates that AI will eventually replace the work that 70 to 80% of humans do, including knowledge jobs within the decade and physical roles like driving within 15 to 20 years.
[00:36:32] Which begs the question he was asked: what do Uber's 9.5 million drivers and couriers do next? And he literally said, I don't know. Now, at the same time, we got some comments from PwC's US CEO, Paul Griggs, who told the Financial Times that partners who are, quote, not paranoid about being AI first will be replaced.
[00:36:53] And he said, quote, I don't think anyone gets a free pass here. Anyone. An employee who thinks they can opt out of AI is, quote, not [00:37:00] going to be here that long. Interestingly, PwC cut 5,600 staff last year. They're shifting tax and consulting services into certain AI-powered subscription tools that, at least in the first steps of operation, work without a PwC person in the loop.
[00:37:17] So Paul, I thought those were two pretty telling comments. I mean, we've talked quite a bit about what you're hearing behind closed doors, how people are not talking about this publicly. Is the dam starting to break here? Because six months ago we wouldn't have heard any of this, it feels like.
[00:37:32] Paul Roetzer: Yeah. I just don't feel like there's gonna be any way to avoid it.
[00:37:34] Like we've said before, every three months these CEOs have to get on earnings calls, and it's getting really hard to not say out loud what they've been saying privately. So it does echo what I've been, you know, trying to create urgency around these last couple years, which is: what executives are telling me privately they're gonna do
[00:37:51] and what they're saying publicly have been two completely different things for, like, a year and a half, two years now.
[00:37:57] Paul Roetzer: And so this is, you know, really where I'm spending a lot of my [00:38:00] time. And, you know, I'm going on a trip with my family coming up here, and I have long plane rides, and I think I'm gonna use that time to just try and unplug and think more deeply about this. Because I've shared a little bit with you, Mike, of the direction I'm going here, and I've actually had
[00:38:20] some conversations with some listeners at some big enterprises who are thinking about these things as well, and, you know, no names and things, but I appreciate their perspectives on this. It's very helpful for me to think this through. Where I'm currently at is: I think AI-forward managers and above, directors, VPs, C-suite, who have a deep understanding of AI capabilities plus domain expertise and institutional knowledge, are gonna be in good shape in the near term.
[00:38:46] Like, I think if you go all in on this stuff, you figure it out, and you can help design workflows and systems and integrate agents, you're gonna be worth way more money today than [00:39:00] you were yesterday, and your companies will figure that out. So I think your career prospects are good if you fit into that AI-forward, manager-and-above group; companies are going to be looking for that talent.
[00:39:10] I think professionals across all levels who are resistant to learning AI and evolving are gonna have a very difficult time remaining employed where they are, as you highlighted with the PwC example, and finding employment once we get outside of the next one to three years. Like, it is the brutal reality. I don't like it, but I just really feel like, across industries, across jobs, people who just resist this, for whatever their reasons are, and some of them are very good reasons, and I empathize with those reasons,
[00:39:40] I don't know what else to tell you. Like, you won't be employable. It's a very, very brutal reality. And then my biggest concern is entry-level work. Like, I just keep coming back to this. I don't know what you're gonna hire those people for. And I have some theories, like, I have some [00:40:00] ideas I'm at least working toward, to try and crystallize in my own mind, and that's why I need time to think more about this.
[00:40:06] But I don't know the answer to what those people do when the layer above can do all the tactical work by simply prompting a system. It can do all the things they would've done to learn, the administrative work, like, all of it's gonna be easily done by these models. And that's before we have the step change that's coming, you know, apparently this spring.
[00:40:28] So I don't know. And then you mentioned, Mike, this National Bureau of Economic Research paper that was related. I dug into this one a little bit. I had not seen this yet. Great use for NotebookLM, you know, you and I both, Mike, you have a whole podcast on all of our episodes in NotebookLM, but it's a great summary thing for me.
[00:40:46] Yeah. So I'll take these, like, dense research papers, and I'll just read you the summary that NotebookLM wrote on this. I thought it was really helpful. So, a 2026 National Bureau of Economic Research working paper examines how AI is transforming corporate productivity and labor markets [00:41:00] through a survey of approximately 750 financial executives.
[00:41:03] The authors identify a productivity paradox, where executives perceive high performance gains from AI that have yet to fully materialize in official revenue data, which we see all the time. We talk about that. Sometimes you gotta look at, like, leading indicators, but it's not gonna show up yet in, like, GDP or revenue within the organization.
[00:41:20] And then it says: while adoption is widespread, investment intensity and motivations differ significantly between large and small firms, with larger companies focusing on labor cost reduction, which we've talked about. Despite concerns regarding automation, the study finds minimal aggregate employment declines, suggesting that AI currently functions more as a tool for task enhancement than for total job replacement.
[00:41:42] However, a significant reallocation of labor is underway as demand shifts away from routine clerical roles toward skilled technical positions. Ultimately, the research suggests that AI-driven growth is primarily fueled by innovation and product development rather than simple capital deepening.
[00:41:59] And then the [00:42:00] one thing I'll note related to this, Mike, is that the people they interviewed were surveyed in November and December 2025. So relatively new data, given kind of the moment. But they are interviewing CFOs.
[00:42:13] Mike Kaput: Yeah.
[00:42:13] Paul Roetzer: And while there are exceptions to the rule, the CFO is generally not the person I've been meeting with in enterprises who has the greatest comprehension of the moment.
[00:42:24] we find ourselves in, from a technology perspective, of what these things are capable of doing. So they don't always have the highest degree of AI literacy and capabilities themselves. They're not pushing the models every day and finding business cases. It's not usually the CFO doing that. And therefore, those CFOs you're asking about this might not even be aware of, like, the reasoning capabilities or the agentic advancements that are happening.
[00:42:44] And then when you think, like the research was done in December, that was before like the Claude Code moment that
[00:42:50] Mike Kaput: Yeah.
[00:42:50] Paul Roetzer: basically changed everything, and before OpenClaw, and, like, how much has changed just in three months. And so while it's always good to look at this kind of data, you do have to frame it with: okay, who [00:43:00] were they asking?
[00:43:00] What is the AI literacy and competency level of those people? Not that they're not super smart and super accomplished, but it's just not their job to be the one that's, like, staying up on all the latest model news. And so, again, it's just information. It's good. Put it in the filing system in your brain of, like, trying to understand the context of where we are and how to talk to people in your organization.
[00:43:23] But it's not an end-all, be-all. It doesn't mean that all of this is exactly true within your company or industry. It's a very dynamic place, and we need all these different perspectives, but you have to piece together your own story, I guess is what I'm saying.
[00:43:38] Mike Kaput: Yeah. You know, one other thing that jumped out to me that reinforced a lot of what we've discussed over the past year at this stage is
[00:43:48] the PwC US CEO, Griggs, who said basically an employee who thinks they have the opportunity to opt out of AI is not going to be here that long. I assume that's been [00:44:00] very clearly communicated internally. I hope it has, if he's telling the Financial Times that. If it's not, that's probably a good memo to get out this week. And I know that can read as harsh to a fair amount of people.
[00:44:12] But I really appreciate the honesty, because behind closed doors I know of organizations where leaders are already complaining about employees who are not embracing this stuff, because they know they have to. Yeah. And if it's been expressed to them clearly that this is a condition of their employment, whether you agree with it or not, at least the expectations are very clear, and I think that's more important than ever.
[00:44:37] Paul Roetzer: Yeah. It's like anything else in life. I mean, think about kids and stuff. Like, sometimes you gotta tell people the hard truth, and they may not get it yet. And it might be, you know, in the case of your kids, it might be five years until they grow up and go, oh my God, my parents were right. Like,
[00:44:54] Mike Kaput: right.
[00:44:54] Paul Roetzer: And I feel like this is kind of one of those situations where people don't want to hear this. And I totally [00:45:00] understand that. Again, complete empathy for how hard this is and, honestly, for how unfair it's going to be. But we have no control over that. Like, this is happening. The models are gonna get smarter, they're gonna get more generally capable.
[00:45:17] They're increasingly gonna do the tactical things you and I do for our work every day. And pretending like that's not happening, or that it's not gonna affect you or your family or your peers, is just not a winning scenario. And I agree, as harsh as it seems to say what the CEOs are now increasingly publicly saying, I would much rather they just said it
[00:45:43] than pretend like it's not gonna happen. Right. And I know plenty of enterprises and leaders who know what's going to happen and just refuse to publicly say it, or to say it to their own people. And I just, I don't know. Like, I [00:46:00] would really rather we just dealt with the hard stuff now and had time to be proactive about doing something about it than pretending like it's just gonna be okay
[00:46:10] because it always has been before, when general purpose technologies came into the world, like, we just figured it out. That is either choosing to lie or being ignorant of how different this transformation is versus previous general purpose technologies.
[00:46:32] Mike Kaput: Yeah.
[00:46:32] Paul Roetzer: And there's not much room in between that it's one or the other largely.
[00:46:38] Mike Kaput: All right, Paul, before we dive into rapid fire, just a reminder, this episode is also brought to us this week by our 2026 State of AI for Business Report. We are currently in survey mode collecting data for this report. This is an expansion of our popular state of marketing AI report that we've done every year.
[00:46:54] So this year we are going beyond marketing to collect tons of data on how AI is being [00:47:00] adopted and used across companies in every function. We are already surveying thousands of business professionals across all industries and functions. The survey period is in its final 10 days or so here. So if you have not taken the survey and you're part of the podcast audience, we'd really appreciate getting your perspective.
[00:47:18] If you go to SmarterX dot ai slash survey, you can take the survey. It literally takes like five minutes to complete. In return for completing it, we will send you a copy of the full report when it drops, plus you get entered to win a chance to get or extend a 12-month SmarterX AI Academy membership.
[00:47:38] So go to SmarterX dot ai slash survey. We are just about to wrap this survey up, and we'd love for your voice to be included.
[00:47:46] Paul Roetzer: And I'll throw in, like, a personal ask on this one. I know we have so many incredible people listening to this podcast every week, in all kinds of different disciplines, roles, and industries, and it would be extremely appreciated to just get your perspectives [00:48:00] on this.
[00:48:00] Yeah. Like Mike said, it takes like five to seven minutes and we want as diverse of backgrounds and roles and industries and departments as possible to make this data as valuable as possible for all of us. you know, we talk a lot about research and all these different ways of doing it, and we wanna make this like, you know, an example of what can be done in an industry, make it as real time as possible, get this turned around as quick as we can to give you all that information.
[00:48:23] So yeah, if you can take those five to seven minutes, it would mean a lot to me and Mike in particular, and the rest of our team.
[00:48:29] Mike Kaput: All right, let's dive into some rapid fire topics this week, Paul. First up, we have an update in the Anthropic versus Pentagon saga yet again. So this past Thursday, federal judge Rita Lin issued a preliminary injunction blocking the government's supply chain risk designation against Anthropic.
[00:48:45] So this is in response to lawsuits Anthropic has filed challenging that designation. And in the ruling, Lin wrote that nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary [00:49:00] and saboteur of the US for expressing disagreement with the government.
[00:49:04] And she also found that Anthropic is likely to succeed on the merits of its lawsuits. So this injunction blocks 17 federal agencies from enforcing this ban on using Anthropic, including the original February 27th order from Secretary of War Pete Hegseth and also President Trump's social media directive to not use Anthropic.
[00:49:25] The Pentagon is not backing down, at least publicly. So hours after the ruling, CTO Emil Michael, who we've talked about in the past couple episodes, called this a disgrace. He claimed it contained dozens of factual errors and argued that one of the two supply chain risk designations they have put into effect remains in full force under a separate statute.
[00:49:48] The government has seven days to appeal this. So Dean Ball, a commentator we've talked about quite a bit here as well, who previously served in the Trump administration, called this a devastating [00:50:00] ruling for the government. Paul, where does this actually leave us? I mean, I think we're hoping to see a resolution here, but it seems like this is just the next battlefront.
[00:50:10] Paul Roetzer: Yeah. I guess we're just waiting for the appeal. I mean, the judge said everything that everybody was thinking, basically. Right. It sure seemed like this was just a vendetta. Like I've said on the podcast before, it's just egos and vendettas and politics. It's not really about the technology or Anthropic.
[00:50:28] We touched a little bit earlier on, again, part of the reason to go into the main topics upfront was to frame this for people, about how politics does, unfortunately, play a role in this increasingly. Yep. And so I think that's really the key issue here, that it's becoming increasingly political. And, I don't know,
[00:50:50] I hope they eventually negotiate it. Like, that's what I keep thinking is gonna happen, that they'll eventually come down. But each side keeps digging in, so we'll wait and see what happens with the appeal. I'm sure this will take [00:51:00] forever to actually come out the other end, but it seems in the meantime the government's gonna keep using their tech anyway, so.
[00:51:06] Mike Kaput: Yep. Alright, so next up, some more political news. We have some more AI political moves this week. So first up, Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez introduced an AI data center moratorium, which would pause all new data center construction nationwide until Congress passes federal AI legislation that has protections for workers, consumers, the environment, and civil rights.
[00:51:31] It is one of the most aggressive AI policy positions staked out this Congress. It is worth noting over a hundred local communities have enacted their own data center moratoriums. According to the bill, basically, this ban would only be lifted after passing federal AI legislation that would have those kinds of protections.
[00:51:53] So once the ban's in effect, they gotta pass a law that actually satisfies the conditions here to get the ban lifted. [00:52:00] Now, this bill is unlikely to advance, but it does reflect some very real political pressure and, like we've talked about, shows how perspectives on AI are scrambling party lines ahead of the midterms.
[00:52:12] Now second, in basically the opposite direction, President Trump has appointed Mark Zuckerberg, Jensen Huang, Marc Andreessen, Sergey Brin, and other major tech leaders to a new President's Council of Advisors on Science and Technology focused on AI that is co-chaired by David Sacks and Michael Kratsios.
[00:52:31] Notable absences from this council so far include people like Sam Altman, Elon Musk, people from Google, from Microsoft rather. As a note, co-chairing this council is going to be David Sacks's new role within the administration, because he very recently stepped down as AI and Crypto Czar. Sacks also told Bloomberg that Congress could pass bipartisan AI legislation within months, creating a national framework that would override the patchwork of state laws.
[00:52:59] So we've [00:53:00] talked about how the White House recently released their kind of legislative blueprint, or wishlist, for AI, which calls for child safety protections, streamlined data center permitting, IP protections, and more. It is an open question whether or not the two parties can cooperate to actually pass bipartisan AI legislation, especially before the midterms.
[00:53:20] So Paul, I'm curious what you make of these two recent developments, at the very least symbolically, that data center moratorium seems to be trying to tap into some populist anger about data centers.
[00:53:33] Paul Roetzer: Yeah, so as I said in the previous one, AI's becoming more political, which we, you know, assumed. A pause is not gonna happen.
[00:53:40] So their efforts to raise awareness about the issues are good. It's gonna get citizens more educated and involved, hopefully without playing the, you know, fearmongering card. But the point is definitely not to get the legislation passed. Like, that stuff's not happening. I also would not hold my breath on any federal legislation around AI.[00:54:00]
[00:54:00] Like, I think it's just a stall tactic to even be pretending like they care to do that. I don't know when that changes, but I would be really surprised if there was actually any federal AI legislation before the end of the year. This council, there's almost nothing known about it.
[00:54:18] Like, yeah, I mean, the White House's own announcement about it was like three paragraphs long, and it pretty much just said that these are the people who've agreed to be on it, and that it could be up to 24 members. That's pretty much all we know about it. So it's not really worth talking about much, other than there's some big names on it, and some names that aren't on it, which is noteworthy.
[00:54:40] And then the related thing is another pro-AI PAC popping up. So Axios had this: a new pro-AI political operation is jumping into this year's midterms with a plan to spend more than $100 million, the latest push by a big-money group to promote a deregulation agenda. So the group, dubbed Innovation Council [00:55:00] Action, has the blessing of Sacks, who we talk about a lot lately.
[00:55:04] It's distinct from other pro-industry groups in that it's focused on boosting President Trump's priorities. The new group is led by Taylor Budowich, a former White House deputy chief of staff for Trump. He also formerly led the pro-Trump group MAGA Inc., a super PAC, and the Securing American Greatness political outfit.
[00:55:24] He was a top official on Trump's 2024 campaign. The group compiled a scorecard assessing how supportive lawmakers are of Trump's AI agenda, which will be used to determine who the group supports or opposes on either side. Probably, I mean, it's mostly gonna be, you know, Republicans, but at this point they're gonna fund anybody that, you know, is deregulation, techno-acceleration.
[00:55:45] And because the organization is a nonprofit, it's not required to disclose its donors. A dark money organization is generally what that's called. So, this is from Sacks: Innovation Council will play a critical role in advancing the innovation agenda championed by President Trump and [00:56:00] this administration.
[00:56:00] We welcome its support at this important juncture. Other AI-focused political groups include Leading the Future, which has raised $50 million. That group's listed donors include Greg Brockman, Joe Lonsdale, who's a co-founder of Palantir, if I'm not mistaken, Mike, and Marc Andreessen of Andreessen Horowitz. And Meta has launched a pro-AI super PAC effort that is expected to spend around $65 million for the midterms, with plans to focus on state-level races.
[00:56:28] So, quick math, just those three alone is almost $300 million in ads about AI deregulation and trying to elect people who want to accelerate at all costs. That is why I would not hold my breath on any federal legislation. And you're gonna see more AI ads than you could ever want to see.
[00:56:51] so yeah, it's gonna be, it's gonna be interesting.
[00:56:55] Mike Kaput: All right, so next up. As AI tools get more [00:57:00] powerful and more people rely on them for real work, the security risks are scaling up just as fast. So we got a case study this week in what that can look like. So most AI tools are actually built on top of layers of software packages, often open source software packages, that developers install and end up trusting to power their software.
[00:57:21] And Andrej Karpathy, who we've talked about many times, former director of AI at Tesla and OpenAI co-founder, flagged what he called a software horror, which was an attack on one of these open source packages that millions of people, and thus programs, depend on. So he outlined this in a post on X. This package, called LiteLLM, has 97 million downloads per month.
[00:57:45] It is widely used across the AI ecosystem. And this past week, attackers slipped malicious code into a routine update. What it meant was anyone who had it installed had their passwords, cloud credentials, API [00:58:00] keys, and other sensitive data silently stolen and sent to the attackers. This spread far and wide because a lot of tools depend on LiteLLM behind the scenes.
[00:58:10] So the poisoned version of this program was live for less than an hour, but it was only discovered because it had a bug that crashed a developer's machine. Karpathy noted that if the attacker hadn't made that mistake, this could have gone undetected for days or weeks, and the attack was part of a broader campaign that hit five different software ecosystems.
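(For listeners who want a concrete picture: one common defense against this class of supply chain attack is pinning dependencies to known-good hashes, so a silently swapped update fails verification before it ever runs. A minimal sketch in Python; the artifact bytes and recorded pin below are illustrative placeholders, not real LiteLLM values.)

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the recorded pin."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Pin recorded when the dependency was last reviewed (placeholder bytes).
trusted_release = b"package contents reviewed at version 1.0"
pin = hashlib.sha256(trusted_release).hexdigest()

print(verify_artifact(trusted_release, pin))    # unchanged artifact passes
print(verify_artifact(b"poisoned update", pin))  # tampered update is rejected
```

Package managers support the same idea natively, e.g. pip's `--require-hashes` mode, which refuses to install any download whose hash differs from the lockfile.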
[00:58:29] So the point here, the reason we're talking about this now, is because AI agents are about to make risks like these much, much worse. OpenAI actually just backed a startup called Isara at a $650 million valuation. They're building software to coordinate thousands of AI agents working together.
[00:58:47] That sounds great. We're moving into this AI agent era, but as agents start installing software, making decisions, managing systems on their own, this kind of thing, where your agents are [00:59:00] going to download open source software that has been poisoned or compromised, is going to grow dramatically in frequency.
[00:59:08] So Paul, I was curious, you know, Karpathy highlighting this is a big deal. This is an enormous open source software package being used by people. I mean, if you have agents running for you, how can you be sure you're not off downloading something that's handing over your personal information because it's been exploited or poisoned?
[00:59:30] Paul Roetzer: I have no idea. And I'm pretty convinced that most people using these things have no idea. Right? There's just so many unknown risks. And that's why, like, people I keep talking to, they're like, I can't believe you're not doing this and that. And it's like, dude, I don't even understand the risk associated with that stuff.
[00:59:45] Like, so I'm just in no hurry to find out. And, you know, partly jokingly, I've said for the last couple years that IP attorney is, like, one of the safest professions to go into for the next decade, because of [01:00:00] all the issues tied up in IP, in AI, and the use of copyrighted materials and all these things. Cybersecurity, that's a safe profession.
[01:00:09] I mean, the surface areas where you can be attacked, and the complexities that are gonna need to be solved for to use this kind of stuff within enterprises, it's endless. So, I don't know. I mean, that's the thing: we're gonna race ahead and have these really advanced models and these agentic capabilities, and then the risks just compound when you start doing this stuff.
[01:00:36] And that's going to create a lot of friction for adoption within organizations. Which, honestly, at the end of the day, is probably gonna be a good thing. Like
[01:00:43] Mike Kaput: right,
[01:00:44] Paul Roetzer: the model companies aren't gonna slow down. And so I think, like, enterprise and human friction might be the only thing that saves us here. It's just gonna take a while for us to figure all this out and integrate it into what we do.
[01:00:59] [01:01:00] And just 'cause the models are capable of replacing some human labor doesn't mean that they're going to right away. And in the end, that's a good thing, I think.
[01:01:10] Mike Kaput: You know, it's almost the flip side of what we talk about as the benefits of some of these tools, where, you know, we talk about vibe coding or vibe marketing or whatever.
[01:01:18] AI gives non-specialists this ability to do specialist things, but there's a danger there, because now I'm suddenly exposed to all sorts of decisions in domains I have no experience in. Yeah. So, like, if I'm gonna go vibe code something and an agent recommends, hey, we're gonna go download these three open source packages to facilitate what you wanna vibe code.
[01:01:42] Okay, great.
[01:01:43] Paul Roetzer: Right.
[01:01:44] Mike Kaput: But like there's probably 18 different questions that a software developer would have that I don't even know to ask
[01:01:50] Paul Roetzer: Totally.
[01:01:51] Mike Kaput: That are very dangerous.
[01:01:52] Paul Roetzer: Yeah. I mean, we talk about this, but, like, I can build apps all day now. Like, I can just play around in Claude and build some stuff, and it's amazing.
[01:01:59] But [01:02:00] to move it into production and put stuff publicly live, that's not my area of expertise and something I'm trying to solve for. And, like, we'll figure it out, but I'm in no rush to put things out before I understand what we're doing.
[01:02:15] Mike Kaput: Yeah. All right, our next topic: Apple is planning to open up Siri to rival AI assistants in iOS 27.
[01:02:24] They are ending ChatGPT's exclusive role inside Apple software. So, according to Bloomberg, users who have Google Gemini, Anthropic's Claude, or other AI apps installed will eventually be able to route Siri queries directly to those services through a new extensions system in Settings. Apple plans to announce these changes at WWDC on June 8th.
[01:02:47] So this basically eliminates the need for one-off integration deals like the original OpenAI partnership. Any AI app in the App Store could potentially plug into Siri, and Apple is actually going to take a cut of paid [01:03:00] subscriptions through its payment system. Separately, Apple is building a standalone Siri app, like we've talked about, with a full chatbot-like conversation interface
[01:03:10] and a unified search system. The goal is to transform Siri from a voice assistant into an actual system-wide AI agent. But a lot of these updates were first announced in 2024 and have been delayed multiple times. Lastly, behind the scenes, The Information reports that Apple's partnership with Google is a bit deeper than previously known.
[01:03:29] So Apple has complete access to Google's Gemini model in its own data centers and is actually able to distill it into smaller models that run directly on Apple devices. So Paul, some interesting updates here, most notably that Apple is trying to extend or expand the types of AI that can be used with Siri
[01:03:49] in the meantime, while they apparently get their act together.
[01:03:53] Paul Roetzer: Continue the waiting game. Like, at some point Apple's gonna figure it all out, and, you know, they'll show up, and it could change everything [01:04:00] from an adoption perspective and from a usage perspective of AI, because they have trust and they have access to everything.
[01:04:06] Like all the apps, all your data. I've talked about, like, if they solved the health side, I would totally, you know, rely on Siri more than I would anybody else because they already have all that health data in my phone. So they're the wild card here, and it seems like a smart strategy to just, you know, let everybody else spend the hundreds of billions of dollars building data centers and energy infrastructure and frontier models, and they'll just, they'll serve 'em up to the billions of people that use their devices and not try and compete in that game.
[01:04:35] So, in the end, I mean, it may work out in their favor that they just missed the game up front and they're gonna kind of show up late and figure it out. The one thing, and I don't know if I'm thinking in the right direction here, but doesn't this make Perplexity just, like, irrelevant? I mean, we don't talk about Perplexity much anymore anyway.
[01:04:52] Yeah. Isn't that, like, their whole thing? You can just choose whatever model you want and connect whatever you want, and, like,
[01:04:58] Mike Kaput: Yeah. That's a [01:05:00] big selling point of Perplexity and some other tools.
[01:05:02] Paul Roetzer: Yeah. It's like, if I can just do that through Apple, through my Mac devices and through my iOS devices, like, what would I ever need
[01:05:08] something like a Perplexity for?
[01:05:10] Mike Kaput: Yeah. Just another chapter in "Perplexity needs to get acquired, quick."
[01:05:15] Paul Roetzer: Yeah. Yeah. Sell the top, which would've been 18 months ago.
[01:05:18] Mike Kaput: Sell the top. Right.
[01:05:21] Mike Kaput: Alright, so next we have kind of a new segment we're trying to do every week here. We hear from listeners all the time that one of their favorite parts of the show is when we talk about how we're actually using AI at SmarterX.
[01:05:33] And we do that in a bunch of different contexts as part of different topics. But we wanted to try to make this kind of a regular segment. So every week we're going to attempt to give you a quick dedicated look under the hood at real AI use cases that we are exploring, building, or deploying in our own work.
[01:05:50] So Paul, obviously, you know, you're working a lot on leadership and strategy items, as well as just overall organizational design. I'm working a lot on [01:06:00] content marketing, sales enablement, productivity, performance. So between the two of us, we're definitely covering a lot of the different types of knowledge work you might be doing if you're a listener.
[01:06:10] So we are going to share, direct from us, kind of what we're doing week to week. So to kick us off, Paul, you have been working on some stuff related to AI learning journeys. Maybe tell us about that, and after that, I can share what I've been doing with some AI-powered slide creation.
[01:06:27] Paul Roetzer: Yeah, sounds good. So, like I've said before, some of the stuff, honestly, like I just traditionally wouldn't even talk about publicly before we just did the whole thing. But I would say like more and more we're just trying to kind of build in public to a degree and share what we're learning as an AI native company and try and just help other people along.
[01:06:44] So, yeah, I mean, I'll share a little bit about kind of what I've been working on. So I spend a lot of my time more on, like, the vision and innovation side of the company, trying to think about how this technology empowers us to innovate in new ways, build new things, create more value faster for, [01:07:00] you know, our customers.
[01:07:02] And so on our AI Academy side, which is where a lot of my time goes, I always tell the team, like, we're not in the business of selling courses, we're trying to power personal and business AI transformation. So, you know, you can go to LinkedIn Learning and get amazing courses. You can go to Coursera, you can go to Udemy, you can go directly to OpenAI or Anthropic or Google.
[01:07:20] They all got great stuff, and we would recommend those courses. Like, we're not trying to compete with any of those companies. As a matter of fact, we would do deals with those companies, like collaborate, partner, things like that. And we have some partnerships in the works with a number of those companies.
[01:07:33] So I think more broadly about, well, what does it take to actually drive a transformation, either individually, like for me as an individual leader or practitioner, or for my organization, you know? And then what role specifically do the courses and certificates we talk about on the podcast play in that bigger transformation?
[01:07:51] But I think more broadly, because we talk to these companies every day, you realize: listen, you can buy courses from us and get access to all these things, and it's [01:08:00] gonna be great, but that is just one part of the transformation story. You need to think, you know, more holistically from a change management perspective.
[01:08:08] So we need assessments, employee surveys, executive briefings so that they're on the same page, employee communications plans to roll this stuff out and tell them jobs are changing and the future of work looks different. You need the learning management system and the courses, which is where our AI Academy plugs in.
[01:08:23] You need personalized learning journeys, like personalized use cases and tech based on departments and roles, and things like workshops. So I've basically been devising what I'm calling an AI transformation system. And this is something I'll share more publicly and kind of publish some stuff on.
[01:08:36] But generally speaking, I look at it holistically and say, what are all the components you would need to actually drive this transformation, and then how can we help people visualize those things? And so I've been working on this for a couple years, on different elements of it, and I made a lot of advancements in the last two weeks in particular.
[01:08:51] But the design and the visualization of it is just not my area of expertise. And so I have sketches. Like, I literally lost a sketch at a hotel in [01:09:00] Arizona. I left it behind, this was actually probably spring of last year. I left this thing at a hotel, and I hadn't taken a picture, but I remembered a friend who was a designer had taken a picture, and he sent it to me. Thank goodness I was able to retrieve it.
[01:09:13] But I can't get there. Like, I just kept running into these barriers where I couldn't figure out a way to visualize this thing. So then last week I thought, well, wait a second, what if I just wrote, as I would a project brief for a designer or a developer, the whole story of what I'm trying to do?
[01:09:30] And so, like, last Tuesday or Wednesday, I spent three hours writing a prompt, and I'll just read an element of this thing. The whole prompt is 1,100 words and 7,200 characters, so it's not an insignificant prompt. But I said: I want to create an interactive visualization representing paths of AI transformation across our AI transformation system.
[01:09:48] It's a collection of resources and systems that accelerate literacy and success. The core component of our AI transformation system is our AI Academy. We see literacy as a fundamental part of personal and business AI [01:10:00] transformation, and personalized learning journeys are at the heart of what differentiates
[01:10:03] our approach. We want to show learning journeys that are made possible by our courses and experiences, but we also want to convey how those are just part of the overall process. The visualization should convey a sense of time and progress. Now again, there's another thousand words to this thing. So then I just put it into Gemini, Claude, and ChatGPT.
[01:10:20] Gemini gave me an infographic, so that was useless. Claude gave me a solid v1 with a drag-and-drop capability for building these custom journeys and timelines, where you could go by month, and it was amazing. And then GPT-5.4 Thinking gave me a really solid prototype that was similar in style to Claude's, where I could interact with it and actually build these custom journeys.
[01:10:42] So both of those were far beyond anything I had conceived of with my sketches. And I was like, okay, my sketches were just obsolete. They were more like what I got out of Gemini; I basically got an illustrated version of my sketches from Gemini, and the other two gave me a totally interactive thing.
[01:10:57] So, you know, I think it's a really interesting [01:11:00] example, a good example of the need to test multiple models when you're doing these high-value use cases, and to think about this project brief approach. Like, if you really wanna do something high value, take the time to write a prompt as though you were giving it to an outsourced person or an internal person who's going to run with that project.
[01:11:19] Think the whole thing through. And I did it in depth. I thought through every element of the transformation system I'm designing, I gave descriptions for every one of 'em, I built our entire course catalog into this thing. Like, it was very extensive. And then it's one of those where you're just like, okay, that's as good as I can do.
[01:11:36] Hit go. And then you just sit back and, like, pray and wait. And then seven minutes later you're like, holy shit, I can't believe it just did this. And then you start moving things around and using the filters, and you're just like, oh my God. I mean, just months and months of work.
[01:11:53] And what would've easily cost me tens of thousands of dollars to work with a developer to build, I had in seven [01:12:00] minutes. And yeah, it was just shock, but in an amazing way.
[01:12:04] Mike Kaput: That is incredible. Yeah. And yeah, definitely seems like some very different outputs based on the model.
[01:12:09] Paul Roetzer: Yeah, for sure.
[01:12:12] Mike Kaput: so I will just quickly also share something I've been working on.
[01:12:15] So, you know, I am obviously creating quite a few of our course series for AI Academy, and there's this big problem I run into every time I sit down to do a course series, which is, you know, I spend weeks on research, synthesis, scripting, outlining, and basically wrap up the course, except it's still not really wrapped up.
[01:12:35] I face a final slog, which is I have to literally create hundreds of slides before I can record anything. Each department series we do, for instance, has four courses; that's hundreds of slides per series. We have tons of templates, we've streamlined this process quite a bit, but it still takes hours of work to do.
[01:12:52] And unfortunately, it's not even the hours, it's that it's not intellectually rewarding work, let's say, to be nice. [01:13:00] But what I've been trying to do for months and months is get AI to create slides for me. There are generic AI slide tools that have been decent for a while, but we have really specific branding and templates, for better or worse, that we have to follow.
[01:13:15] So I can't just say, hey, create a deck for me from scratch. You can do that, you've been able to do that for months, but I needed something that's a little more bespoke. So finally, I was actually able to get Claude Code to do this with a pretty high degree of fidelity for the specific stuff I'm working on.
[01:13:31] Your results and your mileage may vary, but what was really cool about this is, you know, basically taking the time upfront, from what I've learned about what works and doesn't with Claude Code, to really pull together an excellent set of example files and guidance, and actually put it into planning mode
[01:13:48] before creating anything. So, being like: here's what we're trying to plan out, here's all the nuance and context, here's what's gone wrong in the past, and by the way, here's a folder with all the examples. And after some wrangling back [01:14:00] and forth, I actually got to a point where we now have a skill where Claude Code can take some scripts, put them into your presenter notes in the right places for each slide, and actually build the slides for you with some placeholders.
[01:14:12] It's not perfect at everything, but my gosh, last Friday, I think it was, it got to a place where instead of hours and hours, this process took maybe 20 minutes of back and forth. Obviously hours and hours before that to make it actually work, but my gosh, I was so happy that I finally got to this point.
[01:14:32] And I don't know, I think it's a combo of things. You know, I tried this a couple months ago with Opus 4.6 and Claude Code, but now I took a more diligent approach to the context. I think also, because Claude has gotten better with PowerPoint, this might have been the unlock. And maybe, you know, sometimes even with the same approach as before, you just need a few cracks at it before it actually takes.
[01:14:55] So, really cool stuff. Highly recommend trying it out.
[01:14:59] Paul Roetzer: Yeah. That's awesome. [01:15:00] And that does go back to, like, 2023, when we were getting these early previews from Microsoft and Google of what was to come. And, like, all of your productivity apps are just gonna have AI infused in, and they're gonna do these things.
[01:15:10] And it's like, okay, cool. So like PowerPoint.
[01:15:12] Mike Kaput: Yeah,
[01:15:14] Paul Roetzer: We'll get to that. And then it ends up Claude builds a better way to do PowerPoint than Microsoft does. So it'll be interesting, Mike. Because, so, Mike and I, you know, obviously I build courses also and do public speaking, and I build slides first. Mike's a script guy.
[01:15:29] Mike develops the scripts first. I actually don't script things, and so I'll often do, like, an outline of what I want it to be. But I generally build best when I just start putting slides together, and then I'll form my thoughts from there. And then sometimes I'll put speaker notes in.
[01:15:47] But most of the time for me, when I do presentations or courses, I don't have scripts at all.
[01:15:52] Mike Kaput: Yeah.
[01:15:52] Paul Roetzer: And so it's like, you know, I don't even know that Mike's approach will work for me, because our workflows are just different. But it's [01:16:00] awesome, and it's like, wow, it almost makes me wanna try scripting the next time I do it, to see if I can figure it out. Or at least say, like, here's my deck, and let's make this better.
[01:16:10] but yeah, it's interesting. Everybody's got different workflows for how they do these things.
[01:16:13] Mike Kaput: No, that's a super important point too. And that's why I kind of dissuade people. Like, look, I could give you the skill if you wanted, like, the Claude skill, but it's gonna be useless to you.
[01:16:24] It's so bespoke to what I do and how I work. Plus, with something like Claude Code, it's referencing other skills and preferences and memory. It's about what I like and don't like. Yeah. It also requires, like, eight other skills that are required for course creation. So the point here is just: know what's possible and then go experiment.
[01:16:40] Doing it on your own, in your particular context, I think is really the most valuable takeaway for me.
[01:16:46] Paul Roetzer: Yeah. And it goes back to the whole AI transformation system idea I was sharing. It's like, personalized use cases are so critical. And if you just approached this broadly and said, all right, let's automate the creation of PowerPoints or, you know, Apple Keynotes or whatever, [01:17:00] Google Slides, that's not uniform.
[01:17:03] Because you have different workflows, you have different ways people think. And so you really have to drill in and create these very specific personalized use cases. And when you do that right, and you take the time upfront, that's when you can unlock dozens or hundreds of hours of productivity or efficiency, by just doing a little extra. Like I did: take the three hours, write the thorough prompt, think it through like you're gonna give that project brief to somebody.
[01:17:30] Yeah. So, yeah. And it's cool. Like, Mike and I see each other all the time, but we're, you know, all busy doing whatever. Sometimes we don't even hear about what each other's working on, unless, like, in passing, grabbing a coffee: oh dude, did you see this thing I did?
[01:17:43] And I'll show 'em real quick on my computer, and it's like, oh, we should talk about that on the podcast. All right. And then we don't talk about it again until we get on these episodes, and it's like, oh, sweet, I didn't even know we'd figured out how to do that internally. That's cool.
[01:17:53] Mike Kaput: Right. Alright, Paul, so for this next segment, we did this for the first time really formally [01:18:00] last week, where we're kind of spotlighting what we're working on with AI Academy.
[01:18:06] So each week we're going to start spotlighting one of the courses in AI Academy that is currently live. And the real point of this is, like, Paul, you had kind of teed up for me which course we're talking about this week. And, you know, it won't always be me, but with me being the instructor of today's course, we'll either have me or another instructor give you a peek behind the curtain of what's actually in these courses, and give a value-driven takeaway from the course that you can use right now, whether or not you ever take the course or do anything with AI Academy.
[01:18:39] We'll kind of bring some of the value we're creating in AI Academy to the wider audience. So,
[01:18:46] Paul Roetzer: yeah, I think we got AI for sales this week, right?
[01:18:49] Mike Kaput: So this week we are talking AI for sales, which is our four course certificate series built specifically for sales professionals. So I was gonna maybe just run through a couple big takeaways that [01:19:00] came away from that one for me.
[01:19:01] If you're a sales professional out there, these I think could be pretty helpful in getting you started or taking you further with AI. So first up, what really jumped out to me as I was putting together this course and doing research for it is that sales reps really only spend about 30% of their time actually selling.
[01:19:20] And that number has not changed in several years, according to research from Salesforce, I believe their State of Sales AI report. And basically, you just spend way too much time on stuff that is either leading up to the sale, or that is admin, or distractions from the sale. And so what we do in the course is we approach this very practically, helping you find those immediate use cases that can actually free up your time so you can sell more.
[01:19:50] That's kind of goal number one of this course. There are plenty of other bigger strategic-level considerations that are gonna take you A to Z through your AI journey, [01:20:00] but it's really about making you more productive, freeing up your time so you don't have to do all this stuff that's a distraction.
[01:20:06] And kind of the way we do this is, first, we start out with advice for when you're looking for your own AI use cases. We go through tons of strategies to do this in the course, but there's one that's really helpful, which is just this simple filter to think about. So, number one, run what we call the checklist test: if you're thinking about all the stuff you do in a day, and you can write the steps out for something, like if you could teach it to a new team member pretty easily and they could follow it without needing to ask too many questions, guess what?
[01:20:37] That is something you should highlight as a candidate for AI automation or augmentation. So any sales rep could sit down right now and in 10 minutes kind of walk away with a few ideas of, like, what are you doing in a day that has the same steps every time? You do not need to be doing that yourself. Now, takeaway number two here for sales pros is:
[01:20:58] when you go to [01:21:00] think about, okay, well, what AI can do that stuff for me? This sounds simple, we've said it on the podcast before, but it could not be more important for salespeople specifically: audit your existing tech stack before you buy anything new. Now, all the new shiny AI stuff we talk about is incredible, but sales does so much in existing systems. You can pursue longer-term technology projects and new tech that you wanna integrate into your CRM, but look to your existing CRM and systems first, because things like Salesforce Einstein, HubSpot, and Microsoft all have really powerful AI increasingly baked in.
[01:21:39] And even if the AI is not perfect, using the thing you already have, that's already approved, makes your life so much easier. And then finally, we've said this before about knowledge workers in general, but especially salespeople: you're on the go so much, you have so many things to do.
[01:21:56] You need to be focused on quota. It's really important to remember this basic but powerful [01:22:00] advice: if you're still prompting AI of any type, whether it's in your CRM or a separate tool, like you'd prompt a search engine, you need to maybe evolve your approach. So we actually show this side by side in the course.
[01:22:13] You know, a one-sentence generic prompt is going to give you a very simple, very generic output. However, if you really structure your prompt, giving the AI a role, telling it its task, giving it context and examples, telling it what format you want, that is the way you get truly exceptional results from AI.
[01:22:34] And I hope my previous example about slide creation, for instance, can communicate that the more context you give these tools, like what Paul was saying with the extensive structured prompt, this is the way to get real value out of these tools if you have not already.
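(A structured prompt of that shape might look something like this; a hypothetical sales example for illustration, not pulled from the course.)

```
Role: You are a senior B2B sales development rep at a marketing software company.
Task: Draft a 120-word follow-up email to a prospect who went quiet after a demo.
Context: The demo focused on our reporting dashboard. The prospect's main pain
point was manual weekly reporting. Last contact was 10 days ago.
Example: Match the tone of the attached past email that got a reply.
Format: A subject line, two short paragraphs, and a one-line call to action.
```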
[01:22:49] Paul Roetzer: And so many of those micro takeaways, you know, we're spotlighting sales here, but those three steps are applicable to whatever department you're in or whatever your role is.
[01:22:59] Mike Kaput: [01:23:00] Absolutely. Alright, Paul, to wrap up this week, we've got some AI product and funding updates. So I'm gonna run through these and if anything is comment worthy, feel free to chime in.
[01:23:09] Paul Roetzer: Sounds good.
[01:23:09] Mike Kaput: All right. So first up, Harvey is the AI platform for legal work that is used by over a hundred thousand lawyers across 1,300 organizations.
[01:23:18] They just raised $200 million at an $11 billion valuation, so their total funding now exceeds $1 billion. Next up, the OpenAI Foundation announced it will invest at least a billion dollars in 2026 across life sciences, jobs and economic impact, AI resilience, and community programs.
[01:23:41] So the foundation actually received a 26% equity stake in OpenAI as part of the company's restructuring; it's worth about $130 billion on paper. Also related to OpenAI, they have shelved plans for their adult mode indefinitely. That follows pushback from staff and [01:24:00] investors about the effect of sexualized AI content on society.
[01:24:03] So this joins Sora on the list of side quests being dropped as OpenAI refocuses on its core business. Anthropic has launched computer use and a feature called Dispatch for Claude Pro and Max subscribers on macOS. So computer use lets Claude control your mouse, keyboard, and screen to complete tasks across applications.
[01:24:24] Dispatch enables continuous conversations across devices, so you can assign Claude a task from your phone and pick up the results on your desktop. Google has set a 2029 deadline for migrating its systems to what they call post-quantum cryptography. They warn that quantum computers are going to pose a really significant threat to current encryption standards, and it might happen a little earlier than they expected.
[01:24:50] So Android 17 is already integrating quantum-resistant protections. SpaceX is preparing to file its IPO prospectus with regulators. They're [01:25:00] targeting a June public listing. Advisors predict the company could raise more than $75 billion, which would actually surpass all the money raised by US IPOs last year combined.
[01:25:09] They were last valued at $1.5 trillion. Microsoft has told managers at its Azure cloud and North American sales divisions to suspend new hiring, citing the need to restrain costs and improve margins. So this freeze covers tens of thousands of employees. Microsoft stock is down significantly this year.
[01:25:28] It's one of the worst performers in big tech. And finally, a cluster of news about Meta this week. Mark Zuckerberg, first up, is building a personal AI agent to help him be CEO. So it helps him retrieve information he'd normally go through layers of people to get. Meta employees are now using personal agent tools like My Claw and Second Brain, as they're called internally, to talk to colleagues and their agents on their behalf.
[01:25:53] And apparently AI tool usage is now a factor in employee performance reviews. CTO Andrew Bosworth is taking over [01:26:00] Meta's AI for work initiative, overseeing the push to make the 78,000-person company as nimble as AI-native startups. Meanwhile, Meta has launched a new executive incentive program that, to fully pay out, would require them to have a $9 trillion market cap by 2031.
[01:26:17] That's a 500% increase from the current $1.5 trillion. And finally, on the research side, Meta introduced something called TRIBE v2, a trimodal brain encoder foundation model that is trained on 500-plus hours of fMRI recordings from 700-plus people. This creates a digital twin of neural activity and enables predictions for how the human brain responds to sights and sounds.
[01:26:43] That last one sounds a little sinister, Paul.
[01:26:47] Paul Roetzer: I hate ending podcasts like this, but anybody but Meta, and I would've liked to have seen this research. Like, what is a social network gonna do with that? Like, predicting how [01:27:00] human brains respond to sights and sounds. I can't come up with a positive use of that.
[01:27:05] Mike Kaput: Yeah. What are they gonna do? Can you answer what positives? I don't know.
[01:27:10] Paul Roetzer: I know what they're gonna do with it. I'm trying to figure out, like, what is the good that could come out of that,
[01:27:16] Mike Kaput: right?
[01:27:16] Paul Roetzer: Yeah. I mean, when I saw that, it was like, oh God. Yeah.
[01:27:19] Mike Kaput: Stop.
[01:27:20] Paul Roetzer: Yeah. They don't have the best track record of doing things like that for the good of humanity.
[01:27:24] Mike Kaput: Not exactly.
[01:27:27] Paul Roetzer: Maybe, maybe they'll turn it in a positive way though.
[01:27:30] Mike Kaput: That would be nice. You know, we can see some positive news. One final reminder here. We mentioned at the top of the episode our AI Pulse survey. This week's is gonna be in the field at SmarterX.ai/pulse. In this week's survey, we're gonna ask about your perspective on some of this company messaging about AI and jobs.
[01:27:50] We're also gonna ask your perspective on the new data center construction in the US and get your thoughts on all of that. So if you could, please go [01:28:00] take the pulse at SmarterX.ai/pulse. We'd love to hear from you. Paul, thanks for breaking down a busy week in AI for us.
[01:28:07] Paul Roetzer: Yeah, and a quick note. So next Tuesday, which is April 3rd.
[01:28:14] You know what the date is?
[01:28:15] Mike Kaput: I believe. So
[01:28:16] Paul Roetzer: Our regular weekly is gonna be replaced because I will not be available to record it. So Mike and I are gonna do something different. We're actually gonna do a quarterly trends briefing, which we have to find time, Mike, in the next two days to record.
[01:28:30] Mike Kaput: Yes.
[01:28:30] Paul Roetzer: So we are gonna drop an episode next Tuesday, but it's gonna be a Q1 trends briefing.
[01:28:37] So we're gonna look at everything that's kind of happened over the last quarter. We usually do this as part of our AI Academy. We're thinking about moving this to where our Academy members may actually be able to join live in the future, not for this one, but as like a value add for our members. We're thinking about moving the trends briefing to a regular podcast episode because it's so valuable and it's so helpful to frame this for everybody.
[01:28:58] So, [01:29:00] just something to look forward to next week. Again, no weekly. We're gonna do our best to catch up on all of it when we get back. April 14th, I guess, would be the next weekly we'll do. Yeah, but we will have an episode for you next week while I'm away, and it'll be a Q1 AI trends briefing for business.
[01:29:14] So, keep an eye out for that, and, yeah, have a great week and a half or so before we talk to you again, and we appreciate it. Have a great week. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.AI to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community.
[01:29:49] Until next time, stay curious and explore AI.