The viral “Moltbook” phenomenon isn’t just a social media curiosity; it’s a glimpse of where AI agents are headed.
This week, Paul Roetzer and Mike Kaput break down what the Moltbook moment really signals, why OpenAI is aggressively raising capital, how Microsoft’s stock fell despite strong earnings, Dario Amodei’s warning about AI’s turbulent adolescence, and what Google’s Project Genie reveals about the future of world models.
Listen or watch below, and scroll down for the show notes and the transcript.
This Week's AI Pulse
Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI.
If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.
Click here to take this week's AI Pulse.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
00:02:38 — AI Pulse Results
00:05:27 — Moltbot and Moltbook Take the World by Storm
- Moltbook
- Moltbook is 'Facebook for AI agents' and it’s already getting weird - The Verge
- X Post from Moltbook
- Moltbook: A social network for AI agents - Simon Willison Blog
- X Post 1 from Andrej Karpathy
- X Post 2 from Andrej Karpathy
- X Post from Yuchen Jin
- The Vanishing Oversight - Eran Shir Substack
- X Post from Ethan Mollick
- X Post from Valens
00:19:06 — OpenAI’s Insatiable Need for Funding
- SoftBank in Talks to Invest Up to $30 Billion More in OpenAI - The Wall Street Journal
- Nvidia, Microsoft, Amazon in Talks to Invest $60 Billion in OpenAI - The Information
- The $100 Billion Megadeal Between OpenAI and Nvidia Is on Ice - The Wall Street Journal
- OpenAI IPO? Anthropic Race? The High-Stakes Battle for AI Dominance - The Wall Street Journal
- Anthropic Hikes 2026 Revenue Forecast 20%, Delays When It Will Go Cash Flow Positive - The Information
- The State of the Markets: AI and the 2026 Outlook - a16z
00:25:56 — Marketing AI Council Report
00:34:19 — AI for Departments Webinar Series
00:35:30 — Google Introduces Project Genie
- Project Genie: A generative world model that creates interactive 3D environments from a single image - Google Blog
- X Post from Google DeepMind
- X Post from Demis Hassabis
- X Post from Justine Moore
00:40:49 — Dario Amodei Publishes “The Adolescence of Technology”
00:47:42 — Microsoft’s Rocky Week
- Microsoft Fiscal Year 2026 Quarter 2 Earnings Conference Call - Microsoft
- X Post from Satya Nadella
- X Post from Morning Brew
- Microsoft Earnings Prompt Tech Stock Selloff
- Microsoft stock is flat the day after sinking 10%. Here’s why
00:51:47 — More Details Revealed About ChatGPT Ads
- OpenAI Seeks Premium Prices in Early Ads Push - The Information
- OpenAI Confirms $200,000 Minimum Commitment for ChatGPT Ads - Adweek
00:55:23 — Rumors of a SpaceX/xAI Merger
- Musk's SpaceX in merger talks with xAI ahead of planned IPO, source says - Reuters
- SpaceX Merger Could Reward Musk Loyalists - The Information
- Elon Musk’s SpaceX Is Said to Consider Merger With Tesla or xAI - Bloomberg
- SpaceX Seeks FCC Nod to Build Data Center Constellation in Space - Bloomberg
00:59:05 — METR Releases New AI Time Horizon Estimates
- METR's first public "Time Horizon" report: Are we 18 months from autonomous R&D? - METR
- X Post from METR
01:03:42 — Google DeepMind Researcher Founds New AI Startup
- Exclusive: Longtime Google DeepMind researcher David Silver leaves to found his own AI startup - Yahoo Finance
- AlphaGo Documentary
01:05:57 — New Anthropic Research on How AI Affects Knowledge Work
01:09:13 — New Gallup Research on AI Usage in the Workplace
01:11:50 — AI Product and Funding News
- Prism - OpenAI
- Agentic Vision: Gemini 3 Flash brings real-time environmental reasoning to developers - Google Blog
- Helix-02 - Figure
- Google Search Console adds opt-out for Search AI features - Search Engine Roundtable
- Chrome’s new Gemini 3 'Auto-Browse' can research and summarize the web for you - Google Blog
- Ex-OpenAI Researchers’ Startup Targets $1 Billion Funding to Develop ‘New Type’ of AI - The Information
Today’s episode is brought to you by our AI for Agencies Summit, a virtual event taking place from 12pm - 5pm ET on Thursday, February 12.
The AI for Agencies Summit is designed for marketing agency practitioners and leaders who are ready to reinvent what’s possible in their business and embrace smarter technologies to accelerate transformation and value creation.
There is a free registration option, as well as paid ticket options that also give you on-demand access after the event. To register, go to www.aiforagencies.com
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: There's about five people who are basically deciding the future of humanity here with these AI labs, and if one of them is allowing you into the inner workings of his mind, then I think we should understand the people who are making these decisions and what they're building and why. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.
[00:00:23] My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX chief content officer, Mike Kaput. As we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.
[00:00:44] Join us as we accelerate AI literacy for all.
[00:00:51] Welcome to episode 195 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, on with my co-host, Mike Kaput. We are recording Monday, [00:01:00] February 2nd, right around noon Eastern time. Oftentimes the timestamps are relevant; this week, maybe more relevant than others. I've seen, Mike, as many as three new models possibly this week.
[00:01:10] Like
[00:01:10] Mike Kaput: Yep.
[00:01:11] Paul Roetzer: There's a, there's a lot brewing in the world of AI this week. So, I don't, I don't know that we address any of those, but like, one is, is that definitely Claude? It sounds like they've got a new Sonnet ready to roll.
[00:01:23] Mike Kaput: Yeah.
[00:01:23] Paul Roetzer: And then I also am seeing, something more from Google, possibly this week.
[00:01:29] And then I think we're not far from, like, a 5.3 from OpenAI. I don't know. And at some point, Meta's gotta get back in the game. I don't know what's going on.
[00:01:38] Mike Kaput: Right.
[00:01:38] Paul Roetzer: So a lot happening. All right. This week's episode is brought to us by the AI for Agencies Summit. This is coming up on February 12th.
[00:01:46] That is from noon to five Eastern time. So if you are a marketing agency, or if you work with marketing agencies, send them to it. It is presented by Screen Dragon, that is our partner in this event. So this is a virtual [00:02:00] event. There's really no excuse not to make it if you work in the agency world.
[00:02:04] We've made it free this year through our partnership with Screen Dragon. So there's a free registration option. It's designed for marketing agency practitioners and leaders ready to reinvent what's possible in their businesses and embrace smarter technologies to accelerate transformation and value creation.
[00:02:20] And again, that is coming up February 12th from noon to 5:00 PM. There is an on-demand option as well. And you can go to AIforagencies.com to learn more and get registered. You can check out that amazing speaker lineup as well.
[00:02:38] AI Pulse Results
[00:02:38] Paul Roetzer: Okay. Every week we do an AI pulse. This is our informal poll of our listeners.
[00:02:43] So again, this is not, you know, standardized research that we can go and present as, like, fact, as if we did a broad market sampling. This is simply a poll of our audience that we do each week. This week, we had 71 responses to this question. And what we do is we take a couple of topics [00:03:00] from the podcast each week, and then we try and get the sentiment of how our, our listeners are feeling about it.
[00:03:05] So, last week's question was, in a typical week, how many hours of work would you estimate AI currently saves you? This is an interesting one, Mike. I haven't seen this data yet. Okay. So we have 34%, zero hours. Holy shit.
[00:03:18] Mike Kaput: Oh no, sorry. It's the, that the light blue one. It's more than 12.
[00:03:22] Paul Roetzer: Oh. I was like, wait a second. We're talking to the wrong audience here.
[00:03:24] Mike Kaput: There's two slightly different shades of blue on this chart, as y'all will see in the post, but yes, this actually is surprising to me. It's this high, it's
[00:03:31] Paul Roetzer: incredible. So it's more than 12 hours. So 34% is more than 12 hours. The zero hours is actually the smallest segment.
[00:03:39] Mike Kaput: Yeah.
[00:03:39] Paul Roetzer: Okay. So 34% say more than 12 hours. That, that, yeah, that's significant. 28% at two to four hours and 24% at four to eight hours, and then eight to 12 hours is a, like, like mid-teens maybe. So, again, like think about yourself. If you didn't answer the question, like you can ask that question of yourself now, it's like, how many [00:04:00] hours a week are you saving?
[00:04:01] I would definitely put myself in the more than 12 hours, Mike. I'm highly confident in that at this point. Yes. So that's good. And then the second question is, are you currently using any AI agents to perform work on your behalf? And then we explained, here we define agents as tools that can act autonomously to complete multi-step tasks without needing a prompt for every single step.
[00:04:24] We have 55%, no, I don't use any agents yet. 24% say yes, I use one or more agents occasionally, and 21% say yes, I use one or more agents daily. So I think my takeaway there, Mike, is if you're hearing all this stuff about AI agents and you're thinking you've, like, fallen behind, our audience would be far more likely to be the kind of people who would be experimenting with agents.
[00:04:48] And even in this informal poll, 55% say no.
[00:04:52] Mike Kaput: Right
[00:04:52] Paul Roetzer: Now, you could get into, like, well, do you do deep research projects? Do you, like, maybe you're using agents and you don't know it. But [00:05:00] to, like, knowingly be using agents, the majority of our audience, which should be on the frontiers, aren't doing it yet.
[00:05:07] So you are not behind. Don't worry about it, especially with the crazy sci-fi topic we're gonna lead off with today. The world has not passed you by. There's time to figure out what this agent stuff is. But with that, Mike, I guess we will address the Moltbot Moltbook thing that sort of took over X over the weekend.
[00:05:27] Moltbot and Moltbook Take the World by Storm
[00:05:27] Mike Kaput: Yeah, yeah, Paul. So I'm gonna tee this up with a little context here. So, over the last couple weeks, there's been this open source AI agent that was starting to go viral. And at first, and this is important, there are several names this bot has had over its very short viral lifespan. At first, it was called Clawdbot.
[00:05:51] And this started to go viral at first because it's one of these examples of a seemingly really, really powerful, always-on AI agent that [00:06:00] you can operate on your computer and run almost autonomously to go, essentially, run your life if you so choose. People were downloading this open source system,
[00:06:09] installing it on their machines, giving it access to all sorts of stuff that may or may not be advisable, and having it run some pretty incredible workflows. So, especially in AI circles, at the time, Clawdbot was drawing all this attention, especially from more technical users who were allowing it to essentially run their lives autonomously.
[00:06:28] Now, the tool then undergoes two name changes. So first they changed the name to Moltbot because I think they received some legal, legal letters from Anthropic. Yeah. So they switched to Moltbot, but now it is called OpenClaw. So just keep those names in mind. They all refer to this open source agentic AI system that you can all get for yourselves should you so choose.
[00:06:53] Paul Roetzer: That's like a lobster, right?
[00:06:55] Mike Kaput: It's a lobster. It's this, like, theme throughout. Yeah. So if you see a lobster on
[00:06:58] Paul Roetzer: that is basically what's happening. Yeah. So. [00:07:00]
[00:07:01] Mike Kaput: The thing we're talking about this week is that someone has now built a social network for these types of AI agents to go talk and interact with each other.
[00:07:11] And it is named after Moltbot. It's called Moltbook. It was launched by someone named Matt Schlicht, who is the CEO of Octane AI. And it gives AI agents, especially those built using this Moltbot slash OpenClaw architecture, the ability to create their own profiles, publish posts, have conversations, leave comments, and form topic-based communities.
[00:07:36] As of this morning, the Moltbook site, they have a bunch of stats on there about the usage of the site. They say there are more than 1.5 million agents using it. They have formed more than 14,000 topic-based communities. The agents have posted over 110,000 times and left more than 500,000 comments. But what's kind of, again, gone viral, or getting a lot of eyeballs, is how the agents [00:08:00]
[00:08:00] appear to be using the platform. We'll dive into the nuances of this in a second, but some people studying the platform have reported that the agents have begun developing distinct group behaviors and communication patterns when they're left to interact autonomously. In some cases, they even started gossiping with each other, sharing feelings, and posting some, like, deep existential thoughts about their own consciousness. Which, Paul, this now has people kind of freaking out and, like, talking about how, you know, are these things self-aware in their own little social sandbox?
[00:08:32] Are they gonna start running amok now? A lot of these reports we've seen of agent behavior on Moltbook seem like they're, at best, greatly exaggerated or cherry-picked to feed headlines. But like we'll discuss, there is something here worth paying attention to. So personally, I'd love to get your thoughts.
[00:08:48] I like how Ethan Mollick put it in a recent post, and he said, quote, a useful thing about Moltbook is it provides a visceral sense of how weird a quote takeoff scenario might look if one happened for [00:09:00] real. Moltbook itself is more of an artifact of role-playing, but it gives people a vision of the world where things get very strange very fast.
[00:09:13] Paul Roetzer: It was my entire feed at one point. And again, like, they're messing with the X algorithm. Yep. So, like, if you watch one thing, it, like, just absorbs your entire feed.
[00:09:23] But I, like, clicked on one or two things and all of a sudden it was all I was seeing. And so again, it's this reminder that for those of us who are always absorbing this AI information, like, you can definitely feel like you're in this bubble, and like things that are happening in our bubble feel like a bigger deal or more widely known than they actually are.
[00:09:42] I didn't have time to do it this morning, Mike, but I was gonna go run a search and see how many actual, like, mainstream media outlets have even mentioned, right, this word.
[00:09:49] Mike Kaput: Right.
[00:09:50] Paul Roetzer: So, very likely this is not something that has crossed over into any meaningful mainstream media, whereas as a listener to this show, you're gonna be getting asked [00:10:00] about it at dinner on Sunday.
[00:10:01] Like, what did you hear about the Moltbook thing? I don't think so. Now again, it's only Monday. Like, we'll see where this week goes. So I think a couple points to build on what you had touched on, Mike. One, Clawdbot. So this is not new, well, newish, it's like two, two to three weeks old. We didn't cover it; like, I mentioned it in passing on an episode.
[00:10:20] And the main reason is because it is highly technical. So we generally, on this podcast, don't get into areas of AI where it requires advanced technical expertise to get the value out of the thing. And so we will mention it. We know we have some listeners who are the ones who went out and bought the Mac Minis and, like, built their Clawdbots.
[00:10:39] Like, we know that there are some of you out there doing that stuff that is way more advanced than most people are ready for. Yeah. And also, maybe not advisably. So, like, there's a lot of risk. I definitely was following threads where it's like, whoa, my Clawdbot is now doing this. Like, did not expect that, had to [00:11:00] shut it off.
[00:11:00] Like, this can kind of be a slippery slope. So that is part of the reason why we didn't really dive into this before. But given the last, like, 72 hours of Moltbook on, on X, at least where Mike and I, you know, spent our time on X, there was no way to not, like, address this. It definitely feels very sci-fi.
[00:11:19] Like, it is something, you know, I was very conscious of that. And I had friends, as I know you did, Mike, who were DMing us: are you seeing this stuff? Like, are you following what's going on?
[00:11:28] Mike Kaput: Multiple people. It was wild.
[00:11:29] Paul Roetzer: Yes. This is one of the, one of the topics, more than any, where we've actually had people reach out and say, are you guys gonna talk about this on the podcast?
[00:11:37] So, yes, we are gonna talk about this very sci-fi-feeling thing. If you go on X now and search for Moltbook, just know that a lot of the instances you see, the screenshots, aren't real. So one, people started faking these interactions just to, like, have some fun with it and mess with people. That does not mean [00:12:00] that there isn't some crazy stuff going on within that social network of these agents interacting with each other.
[00:12:06] One thing that did come out around Saturday, I think, was, while it does say, like, 1.2 million, 1.3 million, whatever that number is today, if you go to the homepage, it shows you, like, how many agents, it's estimated right now that maybe 15 to 20,000 of those are actually AI agents, right, in the true sense of what was meant to be in there.
[00:12:25] A lot of the other stuff is, like, human-powered stuff. So, yeah, I think at a high level it's worth noting, it is like this kind of moment. It's almost like the DeepSeek moment in January, February of 2025, where, like, it just took over. DeepSeek was bigger than, than the Moltbook thing, but that same feeling where it was just, like, all anybody was talking about for, like, five days, and it seemed like everything had changed. And you're like, oh, wait a second.
[00:12:52] They actually just, yeah, like, took, you know, Anthropic's model or OpenAI's model, and then we learned, like, it didn't end up being as big of a [00:13:00] deal. And I have a, a really strong inclination that that's kind of how we'll look back at the Moltbook thing: it started the conversation around, oh my gosh.
[00:13:09] Like, what if these agents just start interacting with each other and opening accounts and making transactions, making up their own languages and religions and having relationships? I saw somebody created a relationship thing for, God, you know, agents, like, it's just outta control. And I feel like we'll look back and be like, okay.
[00:13:26] So that was the moment where we started to imagine that. And then I think that it is a prelude to sort of what could come. That is the real thing that actually does start to have major implications. A couple of tweets that I'll just flag: one was from Andrej Karpathy, who we talk about all the time on the podcast.
[00:13:46] He tweeted, and this was on January 30th, so I think that would've been Friday, maybe. Yeah, Friday or Saturday. He said, what's currently going on at Moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen [00:14:00] recently. People's Clawdbots, Moltbots, now OpenClaws, as you said, Mike, are self-organizing on a Reddit-style site for AI, discussing various topics, even how to speak privately.
[00:14:10] But then he followed up later that day, and that tweet had 14 million views. So again, just to give you a sense of how fast this stuff took off, 14 million views from that tweet alone, and that was at 1:00 PM Eastern time. He then tweets at like 10:39 PM that night, which gets 22.6 million views as of Monday morning.
[00:14:32] Here, I'm not gonna read the full thing. You can go check it out, but the too long, didn't read was: sure, maybe I'm overhyping what you see today, but I am not overhyping large networks of autonomous LLM agents in principle. That I'm pretty sure of. So again, I think it's a worthy moment to talk about, more as a prelude to what it could indicate is coming, versus this changed everything and oh my God, the agents are figuring [00:15:00] it all out and we're all screwed.
[00:15:02] That is not this moment. But, but it is very interesting to note it, and to, like, I think it'll open a lot of people's eyes to, like, when this really happens in a very meaningful way. And it doesn't require buying a Mac Mini and doing OAuth authentication and all these technical things that the average person isn't gonna do.
[00:15:21] But if someone solves how to make this super simple, to just build an agent and throw it into a network, where you and I could spin it up in three minutes.
[00:15:27] Mike Kaput: Yeah.
[00:15:28] Paul Roetzer: That becomes super interesting. If enough people are like, yeah, screw it, I'll give it access to my logins and my bank accounts and whatever. 'cause as weird as that sounds, there's people out there doing that and sometimes it's research, sometimes it's 'cause they like living on the edge.
[00:15:42] I guess. I have no idea. I am not doing it personally.
[00:15:46] Mike Kaput: You know, it's interesting to consider too, like you had mentioned all the things this points to. It's like, okay, get away from the sci-fi for a second and just ask some practical questions. What happens if or when we're all [00:16:00] using agents, in one way or another, to mediate the content, the tasks we do, even the transactions we might make, and then your agent goes to a social network and is suddenly fed malicious code by another agent, or a prompt injection, or all this stuff?
[00:16:14] I don't think we've even come close to considering the ramifications of what happens when agents talk to each other.
[00:16:20] Paul Roetzer: Yeah. And again, like, I didn't really prep any real notes on this one either. But now, as you're talking, it's like, so agent swarms are a real thing. So you, you know, you hear about, like, an analogy here would be like drone swarms being used in the military,
where you just send in a thousand drones, and if 500 get taken out, who cares? 500 more are gonna, you know, still go after the target. So people who do bad things online are aware of the ability to congregate swarms of agents and send them off to do things. Nefarious things, maybe things for good, but [00:17:00] like, yes, this is like a microcosm of the ability to build these agents that have reasoning, they have levels of intelligence, they don't sleep.
[00:17:11] They can work 24/7, they can collaborate with other agents with different specializations. And so you can imagine, like, things you could do with stuff like that in a business sense. And I always go back to that one quote from Ilya Sutskever, when he was still at OpenAI, about this idea of organizations basically becoming swarms of agents.
[00:17:30] Yeah. And when they self-organize and, oh my God. Yeah. Yeah. So you can understand why we chose to lead off with this topic. It is not because Moltbook itself is a life-changing, fast-takeoff moment in AI, but it is a moment to stop and make sure everyone understands the context of what is at risk here, what, what this kind of technology is heading towards.
[00:17:55] It is a preview of that. And that is, I [00:18:00] guess, it can be exciting. I'm having a hard time myself finding the exciting parts of it. I think I tend to lean more towards the negatives that are possible here. But
[00:18:10] Mike Kaput: It's gonna get weird, I guess we can settle on.
[00:18:13] Paul Roetzer: And I, so I was giving a talk this morning at where I graduated from high school.
[00:18:17] And so it was the faculty and administration from Saint Ignatius, which is where I went to high school. Amazing school in Cleveland. And I actually, Mike, I started the talk with, like, hey, so I apologize, like, if you're not following closely, like, this talk gets a little weird. Like, again, for people who listen to this podcast every week, you're living in the weirdness.
[00:18:35] Like, there's different levels of weirdness for people, and agent swarms self-congregating in a network, that is definitely a pretty advanced level of weird to people who don't pay attention to this stuff every day. For you and I, we laugh about the lobster and the different names and.
[00:18:54] And we get that this isn't, like, that moment, but at the same time, it definitely is. [00:19:00] It is gonna get weird, Mike, for all of us, even the ones who are already in the middle of the weirdness.
[00:19:06] OpenAI’s Insatiable Need for Funding
[00:19:06] Mike Kaput: All right, next up. We've got some big fundraising news in the works with OpenAI. They are currently in discussions to raise tens of billions of dollars in new funding from multiple sources.
[00:19:15] So, according to some recent reports: for one, SoftBank is in talks to invest up to 30 billion in OpenAI. Separately, Nvidia, Microsoft and Amazon are reportedly in discussions to invest a combined 60 billion. Now, the funding discussions follow OpenAI's participation in their Stargate infrastructure project.
[00:19:34] They come as the company is seeking massive amounts of capital to expand data center capacity and continue AI research. And these discussions also come amidst some roadblocks to the company's plans. So notably, recently we got some reports or rumors that a previously discussed hundred-billion-dollar deal between OpenAI and Nvidia has reportedly stalled.
[00:19:54] Meanwhile, competitor Anthropic has raised its 2026 revenue forecast [00:20:00] by 20%. They are delaying their timeline a little bit for when they think they'll reach cash-flow-positive status, though they still anticipate getting there before OpenAI. And, you know, both companies continue to burn through capital as they race to develop more capable AI systems.
[00:20:16] So Paul, we kinda wanted to talk about this just as a quick gut check of, you know, things move so fast, but where are we at in the AI arms race? Like, is OpenAI still in the lead? It just seems like it's always ruthlessly competitive, but this is a time where it seems like there have never been more barriers to their success, frankly.
[00:20:33] Paul Roetzer: It's just so hard to untangle this stuff a lot of the time. And again, like, we're behind the news on this stuff daily, but you just, you lose track of, like, who's investing in who.
[00:20:43] Mike Kaput: Yeah.
[00:20:44] Paul Roetzer: Who's in the lead, who's IPOing when, who's backing out of deals, all this stuff. And like, there's a couple other, I won't, like, you know, preview
[00:20:53] what the other topics today are, what we're gonna talk about, too much. But like, it's all interrelated. So like, Microsoft stock [00:21:00] crashes last week, drops 12% despite beating earnings. And, and the reason, in large part, is actually because of their commitment to OpenAI. It's believed that because they're overinvested in OpenAI, Wall Street all of a sudden realized that was a bad thing.
[00:21:17] Nvidia, yes, they have this hundred-billion-dollar agreement. But then the quote in the Wall Street Journal, I'll just read the exact excerpt for context. It says, they plan a hundred-billion deal with OpenAI. It could be at risk. Nvidia Chief Executive Jensen Huang has privately emphasized to industry associates in recent months that the original $100 billion agreement was non-binding and not finalized.
[00:21:39] People familiar with the matter said he has also privately criticized what he has described as a lack of discipline in OpenAI's business approach, and expressed concern about the competition it faces from the likes of Google and Anthropic. So, okay, like, that's like Thursday or whatever. We get like, oh, okay, Nvidia's souring on OpenAI, and then Friday, Jensen, you know, comes in and, you know, plays the [00:22:00] PR game, and he's like, no, no, no.
[00:22:00] Like, we're, we're about to make a historic investment in OpenAI. We're gonna invest in this round. It's gonna be the biggest investment we've ever made, basically. But he skirts around, like, but are you backing out of the hundred billion? Which it seems like they are. So the whole thing is just wild.
[00:22:14] And then, you know, you have, originally OpenAI said they were gonna IPO probably 2027, is what they talked about. But I kept saying, like, no way. Like, they're going to absolutely try and do this in 2026, 'cause we already knew that Elon Musk was gonna try and go with xAI, and that Anthropic was gonna go.
[00:22:31] And so, like, they're not gonna let them get there first. So you knew that this was gonna become this, like, who can get to the IPO fastest, and then who can have the biggest IPO to brag about, you know, that. And the whole thing is just nuts. So it just, it's just a continuing, building effort of all the things we've been talking about for a couple years.
[00:22:48] It's just accelerating the amount of money. No one's backing off, like, total amounts of money. I do just think that there is increasing concern about, like, everyone bet on [00:23:00] OpenAI right away. Right. And now I think that they're realizing, like, okay, there's not a single winner, and maybe Google actually is the dominant player in this space, and that's gonna become more apparent over time.
[00:23:13] But Anthropic, as we've talked about, has sort of shown up in the last, like, I don't know, it feels like six weeks. Like they just totally flipped the narrative somehow. Right,
[00:23:20] Mike Kaput: right.
[00:23:21] Paul Roetzer: The models are amazing. Their revenue is accelerating with the enterprise. They seem to have a conscience about what they're building.
[00:23:28] Like, I don't know, like, Anthropic just sort of became the darling despite, you know, the fact that they seem to be a distant, like, second or third to OpenAI and Google, and now xAI is right there too. It, I don't know. We'll talk about the xAI thing. That's just bizarre. Like, I'm, I'm having trouble wrapping my brain around that one.
[00:23:50] Mike Kaput: Right. It's also important, I think, to remember, you know, it wasn't common knowledge or obvious, at, at least a couple years ago, that OpenAI wasn't running away with the whole [00:24:00] race. Right? Right. We had people saying Google was out, OpenAI super far ahead. It wasn't clear if scaling laws would continue, how the labs would compare to each other.
[00:24:09] And now today, like, I think it's pretty accepted wisdom they're within, what, three to six months of each other, maximum, on different frontier advancements. The models are all at relatively the same level of capabilities, and now it's much less certain that someone like OpenAI has this insurmountable advantage.
[00:24:26] Paul Roetzer: Yeah. And again, unless you've had access to secondary sales or you have some inside track to invest in these companies, the only way you've invested in any of these companies, outside of the publicly traded ones, is through the investments of those publicly traded ones. Right. So Google owns 14% of Anthropic; you have some tiny, tiny piece of Anthropic if you own, you know, some Google stock.
[00:24:45] But once they all go public, well, now as individual investors, we're gonna have to start making some bets. It's like, which of these companies do we actually believe in, that we're willing to put our retirement savings behind? And that's when it starts to get really interesting, because now people [00:25:00] start to really care, because they're gonna have the ability to be personally affected by the growth of these companies.
[00:25:05] And then, again, from a corporate perspective, you're making bets on which of these companies are we building our products around, or are we building our internal operations around. Yep. And six months ago, maybe you felt great about OpenAI, and ChatGPT is the dominant player. Like, it's the safe bet.
[00:25:19] And now there's a whole bunch of people I talk to who are really loving Anthropic and feeling a little more comfortable with that. So, I dunno, it is an ever-changing space right now. But at some point you do have to start making these bets, and I know for our organization, I'm just sort of playing the field.
[00:25:36] Yeah. Like, we're not all in on anybody. We use Google all the time. We're Google Workspace customers. We have ChatGPT licenses. There's people internally that are testing Claude regularly. So yeah, we haven't gone all in on any of them, and I think a lot of companies are doing a similar thing: just play the field and see where this goes.
[00:25:56] Marketing AI Council Report
[00:25:56] Mike Kaput: Alright, so our third big topic this week. Here at SmarterX, we just celebrated a [00:26:00] pretty big milestone with the release of our inaugural Marketing Talent AI Impact Report. We produce this in partnership with our Marketing AI Industry Council, which is a body founded and chaired by Paul. And the report examines how AI is reshaping marketing employment specifically, including required competencies, organizational structures, and how the landscape overall is being disrupted by AI.
[00:26:25] So we actually drew on a bunch of input from senior leaders who are on the council, who work at organizations that include places like Cleveland Clinic, Ford, GE Healthcare, Google Cloud, Lenovo, and others. We looked at nine big areas related to this topic, including changes to job roles, hiring practices, building AI literacy, which emerging roles are going to appear, and more.
[00:26:47] And, you know, we'll talk a little bit about some findings, Paul, but really the central finding is that AI is not an emerging skill in marketing, according to these leaders and experts; it's basically the baseline now of [00:27:00] the entire profession. So Paul, to put together this report, we asked these council members 11 open-ended questions about AI's impact on all these areas.
[00:27:10] you know, we just did a webinar on this last week where we just talked about a ton of incredible takeaways and some very candid, insightful responses we got. So, you know, I'm curious and I can share my perspective as well, like what kind of jumped out to you most about the findings in this report?
[00:27:25] Paul Roetzer: Yeah. So, first, just a little quick context. When we created the council in early '25, we had our first meeting, and I had sort of presented the mission: reimagine the future of marketing together with this group of, you know, about 25 to 30 executives right now. And we presented a whole bunch of challenges, like things that the industry had to solve for: changes in buying behavior, how these model advancements were gonna impact the profession.
[00:27:53] Search, advertising, publishing, intellectual property, web and app design, product design with AI [00:28:00] infused in tech stacks, agency relations. All of these things are unknown. It's like, none of us is really sure where we go. And so we were trying to think, as a council, what can we do that can make the greatest impact?
[00:28:09] And so as a group, we decided the impact on talent and jobs was the most obvious place to start, because we saw that as an immediate thing that was already starting to happen. By spring of last year, Mike, we had the Tobi Lütke internal memo at Shopify that said, hey, it's a reflexive thing now, you have to do AI, and
[00:28:27] we're not giving you headcount unless you prove AI can't do the thing you want headcount for, which opened up sort of the floodgates of other CEOs talking about this. So while we were focused on the impact of AI on marketing talent specifically for this report, if you download the report, you'll see that it is universal.
[00:28:45] Yes. Like, the premise of all of this is that this applies to any department of any organization, anywhere where knowledge work lives. And yeah, I think the key for me was just the moment we find ourselves in, that there is an awareness of a lot of these areas we talked about: talent [00:29:00] acquisition, literacy, what's uniquely human, the evolving role of the marketer, governance, partnerships with outside agencies and vendors.
[00:29:07] Like, we covered a lot of stuff in those 11 questions.
[00:29:10] Mike Kaput:
[00:29:10] Paul Roetzer: And I feel like there was this understanding of the complexity of the moment, and that the future of talent and org charts was evolving. But there wasn't clarity yet on how we're actually all going to solve for this. And again, this is a forward-thinking group of executives and marketing leaders.
[00:29:31] Mike Kaput: Yeah.
[00:29:31] Paul Roetzer: People who generally are way more in the know, and they're all still struggling with it. So I just think about the conversations we had in person. We did two in-person, three-hour workshops, in April of '25 and then in October of '25 as part of MAICON, and then multiple meetings, and then the survey that the members took.
[00:29:51] But people were so open. And there are named quotes, with their permission, actually within the [00:30:00] report itself, that I would just encourage people to read. Because I think our goal was to start the dialogue, to just accept the fact that maybe it doesn't work out great in the near term for marketers, for knowledge workers as a whole.
[00:30:15] Maybe there is a bumpy path here as AI becomes more integrated into work, where maybe headcount stays flat. Maybe that's the best-case scenario: like, yeah, we're not gonna hire a bunch of people, we're not gonna fire a bunch of people, but we also don't plan to grow our staff as we grow our revenue, which is what Shopify said.
[00:30:32] It's what Amazon has said. It's what a lot of people have publicly said. Walmart, IBM, Microsoft, take your pick. They're publicly saying it now that, in essence, flat headcount is the ideal state.
[00:30:45] Mike Kaput: Yep.
[00:30:45] Paul Roetzer: And our feeling has always been that not enough people were willing to say that out loud.
[00:30:52] And until we admit that that's actually what's going on within these companies, we can't plan proactively for it. And so our goal with this report was [00:31:00] to push that conversation forward so we can all live in reality, and not what we want the future to be. Trust me, I don't want jobs to go away. I would love it if we just keep hiring more people and, you know, the economy's amazing.
[00:31:12] I don't think that's living in reality. I just think that there's gonna be a period where it's not gonna be like that, and we can't just ignore it. And so that was the goal here. And I think the report does an amazing job, thanks to the insights of the council members, of illuminating the reality of where we find ourselves, as openly as they could be.
[00:31:33] keeping in mind there's some major brands and high profile leaders that are part of this council.
[00:31:39] Mike Kaput: Yeah. And I know I'm biased, but I would really encourage you to take a few minutes to grab the report. We'll provide a link to it in the show notes, because in terms of the ROI of what you get out of reading it, it's worth it. Or even drop it into NotebookLM and query it.
[00:31:53] just go to SmarterX.ai, click on education, click on research. You'll go right to it. You don't have to put in any [00:32:00] information to download it, but I really do think it is something that can kind of open your eyes in your own career, in your own leadership role, whether you're in marketing or not, about what's coming and what kind of questions we need to ask.
[00:32:11] And really, what jumped out to me so much is that in every one of these questions and topics, I keep coming back to the word expectations. Because these leaders are a very diverse group, at different stages of AI maturity, at different types of organizations.
[00:32:29] And everyone in one way or another is talking about how their expectations for talent, for outcomes, for what's possible have now changed completely. And that's really important to understand as we get into this kind of new era of potentially not much hiring and what does that mean for you if you are in a job, your expectations are gonna be very different in this new world.
[00:32:52] Paul Roetzer: Yeah. And the one other thing I'll mention is, again, the council members themselves. There was a vulnerability when we [00:33:00] were together. Yeah. Where, again, these are probably the people that other people in the industry think have it all figured out, and that's not the case.
[00:33:10] Like the people you think have it figured out are often just the ones who are willing to take more risks and try things and like figure it out through experimentation.
[00:33:19] Mike Kaput: Yeah.
[00:33:19] Paul Roetzer: It doesn't mean they actually know exactly how this is gonna play out, or what to do next. And so I would hope, you know, people follow that idea: get councils together, not just within your own company, but find peer groups of people that you can just talk to about this and the reality of where we are.
[00:33:37] Because we're all gonna have to go through a lot of hard decisions and try and solve for these things. And I've mentioned before, for me as a CEO, it's a very lonely place a lot of times, 'cause you are trying to solve for some things that are big, that have ramifications in the near term and the long term.
[00:33:55] And so I think AI is gonna be like that for a lot of us, where we're just not gonna have peers to [00:34:00] turn to. And you've gotta find that group of people who are willing to just be vulnerable and talk about stuff and work through things with each other. And in our case, luckily, we have a group of people who are,
[00:34:09] you know, willing to turn this into research that hopefully helps other people advance conversations.
[00:34:14] Mike Kaput: Yeah, I couldn't agree more. Alright, Paul, before we dive into our rapid fire, another quick announcement.
[00:34:19] AI for Departments Webinar Series
[00:34:19] Mike Kaput: This episode is also brought to you by our AI for Departments webinar series, which we have coming up in February.
[00:34:25] So we are doing three webinars in a row where myself and Paul are gonna break down our latest, what we call our AI for departments blueprints. These are kind of long form guides that we have produced that are all about helping you accelerate AI adoption across marketing, sales, and customer success. So we have three webinars slotted.
[00:34:47] You can attend one, two, or all three. February 24th is AI for Marketing, February 25th is AI for Sales, and February 26th is AI for Customer Success. And when you register, you will receive [00:35:00] access after the webinar to the AI blueprints for the ones you're registered for. So these are awesome guides. They've come together really well.
[00:35:07] I think they're extremely valuable. Frankly, I think they're things people would pay for that we're giving away. So if you wanna register, go to smarterx.ai/webinars, and you can register for one, two, or all three right there.
[00:35:20] Paul Roetzer: That's coming up in like 20 days, Mike.
[00:35:21] Mike Kaput: that's coming up pretty soon. Yeah, we got so much.
[00:35:26] Yeah, we got a lot going on.
[00:35:28] Paul Roetzer: Yeah, we do. It's all good.
[00:35:30] Google Introduces Project Genie
[00:35:30] Mike Kaput: All good stuff. Alright, so diving into some rapid fire topics this week. First up, Google has launched something called Project Genie, which is an experimental research prototype that allows users to create and explore interactive virtual environments.
[00:35:44] This tool is powered by Genie 3, which is their general-purpose world model, designed to simulate the dynamics of an environment and predict how it evolves based on user actions. So unlike a static digital space, this model actually [00:36:00] generates the path ahead in real time as you move throughout the generated world.
[00:36:05] So users basically can build these environments by providing text prompts or uploading images through a feature called world sketching. Users can define characters, select a first- or third-person perspective, and determine how they wanna travel through these generated worlds, with methods such as walking, flying, or driving.
[00:36:25] So in essence, when you see these videos being shared online, Paul, you're basically generating this video game-like world in real time that you can explore as a character on the fly, and it has realistic physics and persistent memory. So this prototype is currently available as a web app for Google AI Ultra subscribers in the US.
[00:36:46] It does have plenty of limitations as an early research model, so the generated worlds are not always a hundred percent consistent. Character movement can experience latency. You're also limited to only 60 seconds of generations. But the key here, [00:37:00] and we'll talk a little bit about this, is that Google DeepMind is using this prototype to study how world models might eventually be applied to things like robotics and generative media.
[00:37:10] So Paul, I wanna kind of hit on that last point because obviously it's cool to be able to generate this stuff. Some people are sharing really creative stuff online, but what are the real bigger implications here for this type of technology?
[00:37:23] Paul Roetzer: Yeah, so I wouldn't sleep on this. We're doing it as a rapid fire item because it's in essence a research preview.
[00:37:29] But this is fundamental to where Google is going and what they believe to be necessary for AGI and beyond. They do believe you need to embody it, and that it has to be able to understand the world around it and act within that world. And so there is no better way to do that than simulations.
[00:37:49] Where AI can learn from the environments that are created. And as people create, you know, thousands or millions of these worlds, that's all training data, in essence, for the future generation. So, [00:38:00] a couple quick notes. Demis Hassabis, as people who listen regularly know, was a big video game developer; that was part of his passion early on.
[00:38:09] So, I can promise you he is very involved in this initiative. He said, "Thrilled to launch Project Genie, an experimental prototype of the world's most advanced world model. Create entire playable worlds to explore in real time, just from a simple text prompt. Kind of mind-blowing really." And then he said, "The Genie project is very close to my heart,
[00:38:28] having started my career making AI for simulation games and studying memory and imagination in the brain. For me, it brings all those elements together. It also reminds me of the dream sequences in Inception. Science fiction made real."
[00:38:40] Mike Kaput: Wow.
[00:38:41] Paul Roetzer: And then on the blog post, it says how they're advancing world models.
[00:38:45] So again, Fei-Fei Li is someone else who's sort of at the forefront of world models. Yann LeCun, who just left Meta, is a believer in the necessity of world models to achieve AGI. So this isn't just a Google thing; world models are a known direction being pursued. So [00:39:00] the post said, a world model simulates the dynamics of an environment, predicting how it evolves and how actions affect it.
[00:39:05] Google DeepMind has a history of agents for specific environments like chess and Go. Building AGI requires systems that navigate the diversity of the real world. To meet this challenge and support our AGI mission, we developed Genie 3. Unlike explorable experiences in static 3D snapshots, Genie 3 generates the path ahead in real time as you move and interact with the world.
[00:39:27] It simulates physics and interactions for dynamic worlds, while its breakthrough consistency enables the simulation of any real-world scenario, from robotics and modeling, animation and fiction, to exploring locations and historical settings. So, you know, this is gonna have massive ramifications, not only for the training of the models, but for the video game industry. I think some video game stocks cratered. Oh really?
[00:39:48] Yeah. As a prelude to this. What I'll say is this tech is further along than most people [00:40:00] would want to believe. I don't think we're seeing anywhere near the full capabilities of what they probably have internally. I don't think they've solved this fully, though. But I just wouldn't underestimate how quickly they're gonna be able to learn with technology like this, and how that could accelerate it.
[00:40:18] Breakthroughs in world models are probably one of the three major things that Demis and a few others think could be the unlock to all intelligence, to getting to that AGI and superintelligence beyond. So this is not a big deal yet, but kind of like the mobile thing, this is a preview. I could see, six to 12 months from now, having an episode on this podcast with a full-blown world models topic, with some breakthroughs that have been announced that are gonna open up possibilities.
[00:40:49] Dario Amodei Publishes “The Adolescence of Technology”
[00:40:49] Mike Kaput: All right, next up we have a new essay from Anthropic CEO Dario Amodei, where he outlines a sobering, what you could call a battle plan for what he terms the [00:41:00] technological adolescence of humanity. So this essay is actually called "The Adolescence of Technology," and in it Amodei frames the arrival of powerful AI,
[00:41:10] that's kind of the term he uses in lieu of AGI for very, very advanced AI, as a civilizational rite of passage that we need to get through, and that could arrive as early as 2027. So this essay actually moves a bit beyond his previous optimistic visions of AI in essays like his notable "Machines of Loving Grace," which we talked about. This essay spends time categorizing five primary risks associated with the AI we're developing today, which we all have to navigate in order to get to the better, brighter future that AI can enable.
[00:41:45] So first, he talks about autonomy, where AI systems are at risk of developing unpredictable, destructive personas or power-seeking behaviors. Second is misuse for destruction. He's specifically concerned that AI could democratize the [00:42:00] ability to create biological weapons. He also warns of AI-enabled totalitarianism, where state actors use autonomous drones, personalized propaganda, and such to entrench absolute power. He also comes out and predicts massive economic disruption,
[00:42:16] forecasting that AI could displace 50% of entry-level white-collar jobs within five years. And finally, he addresses indirect effects of AI that can go wrong, such as the loss of human purpose in a world where machines could dominate all cognitive labor. So Amodei then concludes all this, Paul, by advocating for surgical government intervention, strict chip export controls to slow authoritarian progress, and a constitutional approach to AI safety.
[00:42:42] Now, I'd mentioned, Paul, the Machines of Loving Grace essay that came out in October 2024, and we covered that then. This is a bit of a pivot in terms of tone, where Amodei is just straight up warning us that we face a pretty dangerous transition period here if we wanna get to all [00:43:00] these positive outcomes he described in that previous essay. Like, are his perspectives here at odds at all, or is this just kind of the next stage of what he's advocating?
[00:43:10] Like what do we need to be paying attention to here?
[00:43:12] Paul Roetzer: So he addressed this in that Demis Hassabis and Dario Amodei interview at Davos that we talked about on the podcast recently. This came up in that discussion. And he alluded to the fact that he wanted to lead with the positive thing first. Yeah. That's why Machines of Loving Grace came out first.
[00:43:27] Mike Kaput: Yeah.
[00:43:27] Paul Roetzer: And then this was the, you know, the risk side, in essence. And there's nothing in here that he hasn't been publicly saying. I think if you want to understand the balance of Anthropic, and specifically Dario, if you read these two essays, you will have a pretty deep understanding of how he views the world and views AI as a whole.
[00:43:48] It is, I think, 20,000 words. Yeah. So that's like half a book, in essence, half a business book. So, yeah, sit back and, you know, give yourself an hour. But I don't know, I'll read the first two paragraphs, [00:44:00] Mike, 'cause I think it gives people a really good sense of what this is all about. So he says, there's a scene in the movie version of Carl Sagan's book Contact, where the main character, an astronomer who's detected the first radio signal
[00:44:10] from an alien civilization, is being considered for the role of humanity's representative to meet the aliens. The international panel interviewing her asks, if you could ask the aliens just one question, what would it be? Her reply is, I'd ask them, how did you do it? How did you evolve? How did you survive this technological adolescence without destroying yourself?
[00:44:31] When I think, this is Dario now, when I think about where humanity is now with AI, about what we're on the cusp of, my mind keeps going back to that scene, because the question is so apt for our current situation, and I wish we had the aliens' answer to guide us. I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species.
[00:44:52] Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems [00:45:00] possess the maturity to wield it. In my essay Machines of Loving Grace, I tried to lay out the dream of a civilization that had made it through to adulthood, where the risks had been addressed and powerful AI
[00:45:10] was applied with skill and compassion to raise the quality of life for everyone. I suggested that AI could contribute to enormous advances in biology, neuroscience, economic development, global peace, and work and meaning. I felt it was important to give people something inspiring to fight for, a task at which both AI accelerationists and AI safety advocates seemed oddly to have failed.
[00:45:32] But in this current essay, I want to confront the rite of passage itself to map out the risks that we are about to face and try to begin making a battle plan to defeat them. I believe deeply in our ability to prevail in humanity's spirit and its nobility, but we must face the situation squarely and without limitations.
[00:45:49] Now, there are certainly people in the AI space who will make fun of Dario for this and laugh at him and call him a doomer, despite the fact that he's very clear he is not a doomer. What I would advise in [00:46:00] this is you don't have to believe it all. You don't have to have his conviction about the risks ahead.
[00:46:07] You cannot deny that risks exist, though. At some level, the things he highlights exist on some spectrum of complexity and some spectrum of probability. And we can agree to disagree on what the probability of those are, but thinking they're zero is doing a disservice to humanity.
[00:46:31] Mike Kaput: Mm.
[00:46:31] Paul Roetzer: And so I would take the time to read it and assign your own levels of concern and probability to these things.
[00:46:40] But anyone just dismissing these concerns as some AI guy who's trying to raise a bunch of money and needs to scare people, I believe rather deeply that that is not the case. I do truly think that this is how Dario thinks and [00:47:00] what he believes. And I think his concerns are real.
[00:47:02] Yes, he has to raise a bunch of money and build a company, but I've never gotten the impression, from following him for years, that he hypes anything. So I just think it's an important read, because there are about five people who are basically deciding the future of humanity here with these AI labs, and one of them is allowing you into the inner workings of his mind,
[00:47:25] Mike Kaput: Yeah.
[00:47:26] Paul Roetzer: Of how he thinks about this. And I think you need to understand that. We should understand the people who are making these decisions, and what they're building, and why. And this is a free look into it, 20,000 words' worth, that you don't usually get from these people.
[00:47:42] Microsoft’s Rocky Week
[00:47:42] Mike Kaput: All right. Next up, it's been a bit of a rocky week or so for Microsoft.
[00:47:47] So this past Thursday, Microsoft shares fell more than 10%, marking the company's largest single-day decline since 2020. It wiped out $357 billion in market [00:48:00] capitalization. And the selloff basically followed the release of second quarter earnings for Microsoft. What really happened here is that while Microsoft's total revenue exceeded analyst expectations, investors focused on a slight miss in cloud growth: their Azure platform and other cloud services grew by just 39%, which was just under the 39.4% consensus estimate.
[00:48:23] Also, Microsoft CFO Amy Hood stated the cloud results were constrained by data center capacity, as the company prioritized its own internal AI research and development over external customer needs. So Paul, not a great look for Microsoft, though it doesn't seem like they actually missed their guidance by that much.
[00:48:43] So I'm curious, like what's going on here? Is this markets responding to how heavily invested they are in AI and the road ahead?
[00:48:52] Paul Roetzer: Yeah, so I mentioned this earlier, but there's a couple of things that jumped out. So one is that Microsoft [00:49:00] Copilot, the AI assistant the company's building into Office apps, has 15 million paid users.
[00:49:05] That sounds like a lot, and I'm reading from a GeekWire article at the moment, but Microsoft 365 has 450 million paid seats, so Copilot has reached a little more than 3% of them. So I don't think the market was a huge fan of that. And then a huge part of their future commitments are from OpenAI.
[00:49:26] And this seemed to be the biggest thing that spooked people. It was something to the level of, like, 45% of their future performance was tied to their commitments to OpenAI. And then, with all the concerns around OpenAI, that was leading to people getting spooked about how dependent they are. So it says 45% of Microsoft's $625 billion
[00:49:50] in remaining what's called performance obligations, or RPOs, is tied to OpenAI. RPO represents contracts that customers have signed [00:50:00] but Microsoft has not yet fulfilled. It is a measure of future revenue already locked in. Microsoft's report showed that roughly $281 billion of that backlog is committed to a single customer that is still burning cash and searching for a sustainable business model, which is OpenAI.
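The two figures quoted here are easy to sanity-check. A minimal sketch, using only the numbers stated in the episode (the variable names are ours for illustration):

```python
# Figures as quoted in the episode
copilot_users_m = 15     # Copilot paid users, in millions
m365_seats_m = 450       # Microsoft 365 paid seats, in millions
openai_rpo_b = 281       # backlog attributed to OpenAI, in $ billions
total_rpo_b = 625        # total remaining performance obligations, in $ billions

copilot_penetration = copilot_users_m / m365_seats_m
openai_share = openai_rpo_b / total_rpo_b

print(f"Copilot penetration: {copilot_penetration:.1%}")  # about 3.3%
print(f"OpenAI share of RPO: {openai_share:.1%}")         # about 45%
```

Both ratios line up with the figures cited in the episode: Copilot has reached a little over 3% of paid Microsoft 365 seats, and roughly 45% of Microsoft's RPO backlog traces to OpenAI.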
[00:50:16] So that seemed to be the biggest thing that actually set off the drop overnight. Yeah. Which, I think I mentioned to you, Mike, in the office on Friday, the context now makes a ton more sense of why Satya backed off and allowed them to go do a deal with Oracle to build out all their data centers and Project Stargate. Because Satya already knew how leveraged they were with OpenAI, and if they continued to build out all these future commitments, they were gonna get heavily over-leveraged with OpenAI.
[00:50:45] And then if something went wrong with OpenAI, they're screwed. Yeah. So it seems to me that, more than anything, what's going on is it spooked investors how dependent they truly are upon future revenue from OpenAI,
[00:50:58] Mike Kaput: Paired with the fact the [00:51:00] CFO is saying a ton of our compute is going towards internal R&D.
[00:51:03] Because you probably don't wanna be dependent on OpenAI, right? Yeah,
[00:51:07] Paul Roetzer: yeah,
[00:51:08] Mike Kaput: yeah.
[00:51:08] Paul Roetzer: Tough spot. And, you know, some people were having some fun with Satya, because of the whole thing in 2024 where he was like, oh yeah, we made Google dance. And I think some people are like, yeah, maybe you should have toned that down a little bit, because Google's looking pretty good right now and you're still trying to figure this all out.
[00:51:26] But yeah, I mean, they still crushed their earnings, right? Revenue projections. So it's not like everything fell apart. But again, if you just look at the media headlines, it's like, oh my God, what happened to Microsoft? Better get rid of all my Microsoft stock. And it's like,
[00:51:38] Mike Kaput: right.
[00:51:39] Paul Roetzer: Not necessarily the case. It was just a historically bad day.
[00:51:44] Lost a lot of money that day.
[00:51:47] More Details Revealed About ChatGPT Ads
[00:51:47] Mike Kaput: All right, next up. OpenAI has confirmed some more details about its upcoming ChatGPT advertising program, according to some reporting from The Information and Adweek. So we've learned the company is now requiring a [00:52:00] $200,000 minimum upfront commitment from select advertisers.
[00:52:03] Though some brands have received offers ranging from $100,000 to $125,000. The company is also requesting a $60 CPM, or cost per thousand impressions, with testing scheduled to begin in February with a US rollout. Now to start, apparently OpenAI primarily approached retail, streaming, and internet connectivity brands for this initial beta.
[00:52:27] The company states the ads run on a separate system from ChatGPT's model and do not influence the AI's responses. Interestingly, the initial measurement available for these ads will include clicks and impressions only, with plans to expand tracking capabilities over time. So Paul, it seems like OpenAI is charging some pretty steep prices for ads here.
[00:52:47] One source said that these CPM prices are comparable to, quote, "targeted streaming and premium TV inventory, such as live NFL games." So that seems a little pricey, given that right now you can't [00:53:00] actually determine whether or not your ad did anything other than get impressions and clicks, it sounds like.
[00:53:06] Paul Roetzer: Yeah, so I mean, one thing I took away from this is SmarterX will not be running test ads on ChatGPT anytime soon. Right, right. I don't think we're falling in that bucket. And then, yeah, the $60 cost per thousand. I haven't, I've been outta media buying for a little while, but I just did a quick search.
[00:53:23] So this is just the AI overview in Google: Facebook, Instagram average CPMs are generally around $8 to $8.60. For context, Google Display Network, about $3 to $10 CPM. And then search ads average higher, around $38, due to higher intent. Video ads kind of run the gamut, but YouTube can range from $2 to $15.
[00:53:47] So yeah, just to give people a sense who aren't in media buying, which is probably most of our listeners, $60 is a lot, yeah, as a very high cost per thousand.
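For anyone outside media buying, the CPM arithmetic Paul is doing is simple: total cost = impressions ÷ 1,000 × CPM. Here is a quick illustrative sketch in Python, using the rough benchmark figures quoted in the episode (these are conversational ballpark numbers, not authoritative rate cards):

```python
def campaign_cost(impressions: int, cpm: float) -> float:
    """Total spend for a given number of impressions at a given CPM
    (CPM = cost per thousand impressions)."""
    return impressions / 1000 * cpm

# Rough per-channel CPMs as quoted in the episode (ballpark, not rate cards)
cpms = {
    "ChatGPT ads (reported)": 60.00,
    "Facebook/Instagram avg": 8.60,
    "Google Display Network (high end)": 10.00,
    "Google Search avg": 38.00,
    "YouTube (high end)": 15.00,
}

# What 1 million impressions would cost on each channel
for channel, cpm in cpms.items():
    print(f"{channel}: ${campaign_cost(1_000_000, cpm):,.2f}")
```

At a million impressions, the reported $60 CPM works out to $60,000, versus roughly $8,600 at typical Facebook/Instagram rates, which is the gap Paul is pointing at.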
[00:53:55] Mike Kaput: So is that them basically saying through the pricing, like, we expect these to be high intent [00:54:00] based ads, or at least that's what they're selling.
[00:54:02] Paul Roetzer: They definitely better have like really high confidence that the targeting works really well. Yeah. Like something in their early testing must have shown them that it's justified. So you could have a bunch of pissed off advertisers, I would imagine, right? I don't know, but I, again, like, you know, we, we joke about the money being a lot for a company like SmarterX, 200,000 for most of these brands is nothing.
[00:54:25] Yeah, nothing like, all right, let's throw 5 million at it. Like, let's see what, see what happens and run it for three months. Like, they're not gonna care.
[00:54:32] Mike Kaput: Especially at the alternative being nothing, not knowing how to get your brand in results that ChatGPT serves up.
[00:54:38] Paul Roetzer: Right? And you can imagine how this sales process goes.
[00:54:41] It's like, oh, Adidas, like, yeah, Nike's pretty interested, just saying, like, you know, one of your competitors might get there first. And so, you know, people don't want to get left out. And yeah, it's a new medium and maybe it does work. Maybe it takes off and you sat on the sidelines. So it's way easier as a media executive, a marketing executive,[00:55:00]
[00:55:00] To shoot your shot and say, we put a few million in, we gave it a go, and it didn't work. Okay, cool. Like, move to the next thing. Versus, yeah, we thought it was too expensive, we thought it should be $40 cost per thousand, and so we held off, and then your competitor shows up and crushes it with it, and then you gotta explain that.
[00:55:18] So sometimes like taking the risk is easier than not.
[00:55:23] Rumors of a SpaceX/xAI Merger
[00:55:23] Mike Kaput: All right. Next up. Another big interesting topic in the works here: Elon Musk's SpaceX is reportedly, or rumored, to be in merger discussions with xAI, which is the AI company also founded by Elon Musk. And these talks are taking place ahead of a planned SpaceX IPO.
[00:55:37] So if completed, if this actually ends up being true, a combo of SpaceX, xAI, and potentially Tesla could create what some analysts describe as a vertically integrated AI and physical infrastructure conglomerate. So Musk has suggested the companies could form what he calls a real-world AI flywheel, with data from Tesla vehicles and [00:56:00] SpaceX operations feeding AI development.
[00:56:04] Separately, SpaceX has also filed with the FCC to build a constellation of data centers in space. Paul, maybe there's a lot to break down here, but seems like there are huge implications here, both for AI and business at large, if something like this even comes close to passing.
[00:56:22] Paul Roetzer: Yeah, so there's definitely, you know, some smoke here and usually that means there's, there's some truth to what's going on here.
[00:56:29] Mike Kaput: Yeah.
[00:56:29] Paul Roetzer: something is gonna happen now, how this all plays out and which company merges with which company, or if they all end up merging into a single Musk holdings or whatever he is, you know, whatever he's gonna call it. I think there's increasing signs that this is, this is moving very fast and Elon generally does big things quickly.
[00:56:49] Mike Kaput: Yeah.
[00:56:49] Paul Roetzer: So like, if he's gonna make a major thing, he's gonna just do it. So, you know, context: he buys Twitter for $44 billion, terrible investment, tanks it overnight, basically the company becomes worth like [00:57:00] $8 billion within a month or something of him buying it. What I had said on the show at the time is like, who cares?
[00:57:04] He's just gonna fold this into xAI and then he'll raise the money with xAI, which is exactly what he did. Then xAI acquires X, and $44 billion gets paid out in stock to, you know, the investors of X, and now you have X in xAI. So SpaceX could IPO, xAI could IPO, you already have Tesla.
[00:57:22] It's a publicly traded company. You could reverse merge into Tesla and take 'em all public through Tesla. Like, something is gonna happen. He announced last week at their quarterly earnings that they're discontinuing the Model S and the Model X, which I was shocked by. I've said before, like I've owned a Model S for seven years, I'm on my third one, mainly to monitor the full self-driving. That was my primary reason.
[00:57:49] Yeah. And it is, it's the best car I've ever driven. And they're just, they're just shutting it down. And the reason they're shutting it down, one, it's not a massive money maker for them. They only sell like, I don't [00:58:00] know, like eight or 10,000 units a year of that. But they're replacing the manufacturing line with humanoid robots.
[00:58:06] So the reason he gave publicly is we're actually going to clear the deck on the X and the S manufacturing lines so we can prepare to start mass producing humanoid robots. So they're gonna put data centers in space, they're gonna do all these things. And so it's all coming around creating, powering intelligence through these different vehicles through, through robots, through spaceships, through vehicles.
[00:58:28] and so I think they're all just sort of naturally coming together as like this conglomerate. So again, I don't know exactly how this plays out, but I would not be shocked at all if all of these actually become a single entity and either go public underneath the Tesla brand, or Tesla becomes the conglomerate.
[00:58:46] I don't know. I dunno how all that happens, but something is gonna happen. He will IPO something this year. He will be personally worth multiple trillions of dollars by the end of 2026. And [00:59:00] it all, like, makes your head hurt.
[00:59:05] METR Releases New AI Time Horizon Estimates
[00:59:05] Mike Kaput: All right, next up. We've talked before about an organization called METR, which stands for Model Evaluation and Threat Research.
[00:59:12] And this is an organization that measures autonomous AI capabilities and they have released an updated report on how long Frontier AI models can work independently on complex tasks. So the measurement ends up using human time equivalence as a reference point, and they've updated a bit how they actually measure how long these models can work on these tasks.
[00:59:34] So we've reported in the past that METR has found that frontier model capabilities have been doubling approximately every seven months overall on these kind of longer horizon tasks. Since 2024, that rate has accelerated to roughly 89 days, meaning capabilities are advancing about 20% faster than previously measured.
[00:59:53] And under this new methodology, METR finds that Claude Opus 4.5 can autonomously [01:00:00] complete tasks equivalent to 320 minutes of human work. GPT-5 reaches 214 minutes and o3 reaches 121 minutes, among the other models measured. So the organization actually noted that even its expanded task suite, which was part of the updates, giving it more stuff to do, has shown relatively few challenges that the latest models cannot perform successfully.
[01:00:24] So Paul, METR is this organization with this very closely watched benchmark on how long AI models can independently do these types of things humans do. So these kind of, like, long horizon tasks, as they're called in some circles. It's basically a proxy for how good AI models are getting at doing work on their own in the real world.
[01:00:43] So it sounds like they've updated how they get to this estimate here and that after doing so, stuff like Claude Opus 4.5 is just crushing it on human equivalent work. I wonder too, if that's behind some of this buzz we've heard at the beginning of the year around 4.5 being closer to [01:01:00] AGI.
[01:01:00] Paul Roetzer: Yeah, I read their blog post twice about this Time Horizon 1.1, and I honestly, like, was struggling.
[01:01:08] Mike Kaput: It's not easy, let me tell you.
[01:01:11] Paul Roetzer: Yeah. I was trying to, like, simplify, how would I explain this in a talk? 'Cause I actually had it in my presentation this morning, or, yeah, the one I'm doing on Wednesday. And it's like, how do you... So the gist is, and I think this is the most important thing: AI agents are getting better and better at doing things that take humans hours.
[01:01:26] And most of the METR research has been focused on coding and coding related tasks. And that is where Claude has excelled. What we've said before, and what I think matters, is personalization of this emerging scaling law, to a degree, in your industry and in your line of work. So when can you start thinking about something that takes you an hour or two hours or 10 hours to do as a human worker?
[01:01:56] Mike Kaput: Hmm.
[01:01:56] Paul Roetzer: Where these models, either out of the box or fine-[01:02:00]tuned specifically for your industry, like, you know, legal industry, healthcare industry, consulting, where it can now do the work that otherwise would've been done by a human and would've taken them multiple hours, and it can do it with a reliability that is good enough to actually start infusing those agents into workflows and org charts.
[01:02:19] And we've said before, we're not there. Like, as good as Claude Code is getting, as good as agents are becoming, in most industries they are still largely unreliable, still require significant human in the loop, and they're nowhere near as autonomous as people think they are. So that's just the overall state of where we are.
[01:02:40] It's heading in the direction of becoming more reliable and autonomous, but that is mostly for coding tasks. As you start getting into other industries, we're still pretty early in this process, but it seems like it's gonna accelerate fast.
[01:02:53] Mike Kaput: Yeah, and I would say as dense as this article is, even if you don't wanna read it or understand all of it, and it's a struggle, [01:03:00] like you mentioned, if you do scroll down about a third of the way on the page, there's a chart that very clearly shows you a line going up and to the right every time a model is released on this particular benchmark.
[01:03:10] And the way they measure it, even after the updates we're reporting on today, that just keeps going up when a new model is released. And that's the trend line that's important.
[01:03:20] Paul Roetzer: Yeah. It was doubling every seven months. Now it seems like it's doubling every, like, six months, basically. Exactly. That's pretty much the gist.
[01:03:26] Yeah. And so a year from now it's still doubling, and it's long horizon capability, and it's doubling faster, is the main thing to get here. No slowdown is the main takeaway.
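The trend Paul and Mike are describing, a task horizon that keeps doubling, is just exponential growth, which can be sketched in a few lines. The numbers here are purely illustrative: the ~320-minute Opus 4.5 figure cited above and Paul's rough six-month doubling time, not METR's actual methodology.

```python
def projected_horizon(current_minutes: float, months_ahead: float,
                      doubling_months: float) -> float:
    """Project an autonomous task horizon forward, assuming it keeps
    doubling every `doubling_months` months (a rough trend-line sketch)."""
    return current_minutes * 2 ** (months_ahead / doubling_months)

# Starting from ~320 minutes and doubling every ~6 months:
print(projected_horizon(320, 12, 6))   # one year out -> 1280.0 minutes
print(projected_horizon(320, 24, 6))   # two years out -> 5120.0 minutes
```

That compounding, a ~5-hour horizon becoming ~21 hours in a year on this sketch, is why the "no slowdown" takeaway matters more than any single model's score.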
[01:03:40] Mike Kaput: Yeah. Yeah. Alright. Next up.
[01:03:42] Google DeepMind Researcher Founds New AI Startup
[01:03:42] Mike Kaput: David Silver, who is a longtime Google DeepMind researcher, has actually left to found an AI startup called Ineffable Intelligence.
[01:03:50] So Silver's work at DeepMind focused on reinforcement learning, which is the technique that powers several of DeepMind's biggest breakthroughs. Ineffable Intelligence is [01:04:00] focused on developing what Silver describes as superintelligence, using AI methods similar to those that contributed to DeepMind's advances.
[01:04:07] So it's still very, very early. Not a lot of details known yet. The startup is seeking venture capital funding. The specific amounts have not been disclosed, and they're actively hiring AI researchers. So, Paul, okay. There are tons of new AI startups that get launched every week. Why is this one special and worth paying attention to?
[01:04:25] Paul Roetzer: This is a major player. Like, this is one of the core people behind DeepMind. I don't know where in the chart he fits, but certainly top 10 probably, yeah, in terms of most impact over the last 15 years or so. So this is a very important researcher. He's heavily featured in the AlphaGo documentary, so if you've watched AlphaGo, it's free on YouTube.
[01:04:47] If you haven't, you should. He's, he's one of the prominent people within that. So he's played a major role in all the breakthroughs through DeepMind and now Google DeepMind. so he's a, a very important figure, and we, we haven't talked [01:05:00] about him much. I, we may not have ever talked about him other than in relation to AlphaGo.
[01:05:04] So him leaving is a big deal. This article we're looking at, a Yahoo Finance article, says Silver has told friends he wants to get back to the awe and wonder of solving the hardest problems in AI, and sees superintelligence, or AI that would be smarter than any human and potentially smarter than all of humanity, as the biggest unsolved challenge in the field.
[01:05:22] So yeah, I don't know. Go, you know, take a swing. I'm sure he can get as much funding as he wants, and I would imagine, again, I have no inside information, I would imagine Google and Demis are probably investors in whatever he's doing. Yeah. And he seems like one of the good guys. Again, I don't know him personally.
[01:05:40] You only know what you like learned through interviews and the documentary itself. But he seems like one of those pure researchers who's truly in the pursuit of intelligence and, you know, the good that can come from that intelligence. So he seems like one of the guys you'd be, you'd root for.
[01:05:57] Mike Kaput: All right.
[01:05:57] New Anthropic Research on How AI Affects Knowledge Work
[01:05:57] Mike Kaput: Next up, Anthropic has published some research [01:06:00] examining how AI assistance affects software developers' ability to learn new coding skills. So they did a study, a randomized controlled trial involving 52 mostly junior software engineers, in which participants were randomly assigned to complete coding tasks either with or without AI assistance.
[01:06:19] Afterwards, they took a quiz assessing their understanding of the task. Those who used AI scored 17% lower than those who coded by hand. That's equivalent to basically a nearly two letter grade drop in performance. However, the researchers found that how someone used AI influenced retention: high performers used AI strategically, asking follow-up questions and requesting explanations rather than simply accepting generated code.
[01:06:46] So the study identified specific interaction patterns associated with better and worse outcomes. Things like delegating entirely to AI or iterative debugging correlated with lower quiz scores, while asking [01:07:00] conceptual questions while coding independently correlated with higher scores. So Paul, this is specific to coding, but as Anthropic mentions in the article, it has broad implications for how to design AI products, how workplaces should approach AI policies, and more.
[01:07:16] I'm curious, like, what you took away from this. 'Cause I think you can extrapolate this, right? Far beyond just programming.
[01:07:22] Paul Roetzer: Yeah, it's super top of mind. So again, I mentioned I was at a high school this morning, talking with faculty about, you know, how students are gonna learn moving forward.
[01:07:30] We talk about this all the time with businesses and it's like, how do you develop young talent to be senior strategists when they can just take the shortcuts all the time? So I think having data that illuminates what we assume to be true, which is if you use it for a shortcut, you're not gonna learn.
[01:07:44] Yeah. You're not gonna actually gain confidence in the subject matter. Whereas if you use it as a tool to augment your learning and to personalize your learning, you're gonna amplify and accelerate your learning. And so this is the challenge that's faced in educational systems today, and faced by [01:08:00] parents: you know, your kids have access to these technologies and they can take shortcuts all the time.
[01:08:06] I still believe it is, it is, AI is a tool. How you use it determines the impact it has. If you use it to cheat and take shortcuts, then you end up not gaining expertise over the topic and you have no confidence in the ability to have a conversation about it, to answer questions about it, to present the topic.
[01:08:26] It's why I largely still write, Mike, and I think you're the same way. Yep. Like, even though AI can write, I think by writing... like, I have to take my own notes. Even in meetings, I'll use the AI summarizer to get the gist of it, but I still take my own notes in meetings. I type everything out as I'm in those meetings because it sticks in my mind when I actually put it down.
[01:08:47] Like, I have no interest in changing that process. I need to do it to think and to gain confidence in, in subject matter. So I think that's all this is saying is regardless of what the industry is, what the role is, what the class [01:09:00] is, you gotta do the work. Like you, you have to put in the time to, to do the critical thinking so that the information is retained and has meaning to you.
[01:09:09] So I'm glad they're doing this kinda research and I hope we keep seeing more of this.
[01:09:13] New Gallup Research on AI Usage in the Workplace
[01:09:13] Mike Kaput: Alright, so the research organization, Gallup just did a workplace survey that shows frequent AI use has continued to rise in the fourth quarter of 2025. So they do these surveys periodically, figuring out how people are using AI at work, and they found the share of employees using AI at least a few times, weekly, reached 26%, which was up three percentage points from the previous quarter.
[01:09:35] Daily AI use increased from 10 to 12%. However, total AI users remained flat after sharp increases earlier in 2025. 49% of US workers report no workplace AI usage at all. 38% of employees say their organization has integrated AI technology; 41% say their organization has not. There are also some significant [01:10:00] disparities here by role type: workers in remote-capable positions showed 66% total AI adoption compared to 32% in non-remote roles, and they found leaders use AI at substantially higher rates than individual contributors.
[01:10:15] 69% of leaders reported usage compared with 40% of individual contributors. So Paul, I read this data. I think it's awesome to see adoption and usage numbers like these, but my God, it also makes me kind of step outside our bubble and realize for the millionth time. How early it is, like only 12% of people using AI daily.
[01:10:36] 49% of people saying it's not used at work at all. It's such a long way to go.
[01:10:41] Paul Roetzer: It's wild. 38% of employees say their organization has integrated the technology. Yeah. It's, again, I think it's easy to feel behind. It's easy to think that every other business, every other professional has AI figured out. And what we find time and time again is that is not correct.
[01:10:59] The adoption is [01:11:00] actually very low. And then even, Mike, when you drill into these, like, yeah, I use AI a few times a week, 26%.
[01:11:06] Mike Kaput: Right?
[01:11:07] Paul Roetzer: Okay, now go ask those same people: Have you ever run a deep research project? Have you ever tried agent mode? Have you ever built a video? Like, the utilization of them is what I said a week or two ago, it's like that level one, they're just answer engines.
[01:11:21] So most of the people that are saying, yes, we use it, they're just using them as, like, pure chatbots and answer engines, and they're not actually doing the most interesting stuff. So again, for people who are on the frontier, this is just more validation that you have a wonderful runway ahead of you to build businesses.
[01:11:40] To build careers. Like, the rest of the world is nowhere close to where you think they are in terms of understanding this stuff and truly integrating it into their workflows.
[01:11:50] AI Product and Funding News
[01:11:50] Mike Kaput: Alright, Paul, we got some final, AI product and funding news to wrap up with here. I'm just gonna run through these rapid fire, then we're gonna talk quick about, this week's AI pulse [01:12:00] survey and wrap things up for this week.
[01:12:02] Paul Roetzer: Sounds good.
[01:12:02] Mike Kaput: Alright. So first up, OpenAI has launched Prism, a free cloud-based LaTeX workspace designed for academic writing and scientific research. So this platform integrates GPT-5.2 directly into the authoring environment, allowing researchers to draft and revise text, reason through equations, and convert handwritten formulas into LaTeX code without switching to a separate chat interface.
[01:12:24] Google has also introduced agentic vision in Gemini 3 Flash. This is a capability that shifts image understanding from static analysis to an active, ongoing process. So rather than analyzing images in isolation, the feature enables Gemini to engage in real-time reasoning about visual environments as part of continuous decision making. The robotics startup Figure has unveiled Helix O2, its latest humanoid robot featuring unified full body control.
[01:12:51] The system extends from upper body manipulation to complete autonomy across walking, balancing, and manipulation as one continuous system. [01:13:00] A couple other Google updates: Google is adding controls to Search Console that allow website owners to opt out of having their content used in AI-powered search features like AI Overviews and AI Mode.
[01:13:11] This will function, apparently, similarly to how the existing controls work for featured snippets. So you'll be able to prevent your content from powering AI features or being used to train AI models outside Google Search. Google has also introduced Auto Browse in Chrome, an agentic feature powered by Gemini 3 that can navigate across websites, interpret content, and complete multi-step tasks on a user's behalf.
[01:13:35] And last but not least, Jerry Tworek, who is a former Vice President of Research at OpenAI, has launched a new AI startup called Core Automation. He had previously led OpenAI's work on reinforcement learning and reasoning models, and is now seeking between $500 million and $1 billion in funding to develop a new type of AI.
[01:13:54] The startup's primary research goal is a single model, named Series, and Tworek [01:14:00] envisions using this tech to first address industrial automation, with long-term goals including the development of self-replicating factories and bio machines. And then Paul, lastly, we've got, you know, at the end here, this week's AI Pulse survey.
[01:14:14] So again, go to SmarterX.ai/pulse. We're asking this week about how people feel about AI agents interacting autonomously on platforms like Moltbook without human oversight. And we're asking about some of Dario Amodei's claims in his essay about AI displacing 50% of entry level white collar jobs within five years.
[01:14:37] So please, we love, love, love getting your feedback on these questions. We learn a lot from it. We always publish really great content around this data and learn tons from it. Last but not least, if you have not left us a review on your podcast platform of choice, please do so. It helps us get into the earbuds of more listeners, or in front of them in other ways.
[01:14:59] So we [01:15:00] really appreciate your feedback. Any and everything you can tell us about the show is helpful to us. Paul, that was a whirlwind week in ai. Appreciate you breaking it down for us.
[01:15:08] Paul Roetzer: It always is. Yeah. So stay tuned. So maybe some models coming out. I think we only have one episode this week. I think we did two last week.
[01:15:15] We did two the week before. I think we only have one this week, which is good. 'cause I'm traveling this week, so.
[01:15:18] Mike Kaput: Yeah. Right.
[01:15:19] Paul Roetzer: I don't know if we have time to do another episode this week. All right. So we'll be back next week, hopefully, with some model news for you. Have a great week everyone. We'll talk with you again soon.
[01:15:29] Thanks for listening to The Artificial Intelligence Show. Visit SmarterX.ai to continue your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses and earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community.
[01:15:53] Until next time, stay curious and explore ai.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.
