A Google principal engineer claims Claude Opus 4.5 completed a year's worth of work in a single hour. Now, the industry is grappling with a sudden, massive leap in coding capabilities that has experts warning that everything is about to change.
In this week’s episode, Paul and Mike dissect the sudden acceleration in model performance and explore Yann LeCun’s claims that Meta "fudged" benchmarks, Sal Khan’s proposal for a "1% solution" to fund worker retraining, NVIDIA’s strategic deal with Groq, and more.
Listen or watch below, and scroll down for the show notes and transcript.
This Week's AI Pulse
Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI.
If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.
Click here to take this week's AI Pulse.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
00:04:14 — AI Pulse
00:05:41 — How Close Are We to AGI?
- Measuring AI Ability to Complete Long Tasks - METR
- X Post from METR on Opus 4.5
- X Post 1 from Jackson Kernion
- X Post 2 from Jackson Kernion
- X Post from Jaana Dogan
- X Post from David Holz
- AI Capabilities Progress Has Sped Up - Epoch AI
- SmarterX Co-CEO Webinar
- SmarterX Co-CEO Tool
00:31:48 — AI Change Management
00:38:18 — OpenAI Is Hiring a “Head of Preparedness”
00:41:59 — Khan Academy Creator Calls for Job Displacement Fund
- A 1 Percent Solution to the Looming A.I. Job Apocalypse - The New York Times
- X Post from William Isaac
00:47:30 — Jevons Paradox in AI
00:55:20 — The Rise of Vibe Revenue
00:57:57 — Salesforce Says Trust in LLMs Is Declining
- Salesforce Executives Say Trust in Large Language Models Has Declined - The Information
- Why Our Story on Salesforce’s Declining Trust in LLMs Hit a Nerve - The Information
01:03:25 — Nvidia Does Landmark Deal with Groq
- Groq And Nvidia Enter Non Exclusive Inference Technology Licensing Agreement To Accelerate AI Inference At Global Scale - Groq
- Nvidia Buying AI Chip Startup Groq For About 20 Billion Biggest Deal - CNBC
- Nvidia Licenses AI Inference Technology From Chip Startup Groq - The Wall Street Journal
- Nvidia Struck 20 Billion Megadeal Groq - The Information
- X Post from Gavin Baker
01:06:21 — Meta Acquires Manus
- Manus Joins Meta For Next Era Of Innovation - Manus
- X Post from Alexandr Wang
- Meta Buys AI Startup Manus for More Than $2 Billion - The Wall Street Journal
- X Post from Chris McGuire
- X Post from Greg Isenberg
01:08:34 — Yann LeCun Speaks Out
- Computer scientist Yann LeCun: ‘Intelligence really is about learning’ - The Financial Times
- X Post from Stefan Schubert
- X Post from Paul Roetzer
01:14:14 — OpenAI Preps for Largely Audio-Based AI Device
01:17:39 — AI Predictions for 2026
01:20:35 — OpenAI Releases Prompt Packs for ChatGPT
This episode is brought to you by AI Academy by SmarterX.
AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. You can get $100 off an individual purchase or a membership by using code POD100 at academy.smarterx.ai.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: What jobs are they training for? If you know that their technology is basically designed to replace all cognitive labor, what are they training for? What are they providing $10 billion to prepare them for when no one seems to know what the jobs three, five years from now look like?
Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.
[00:00:27] My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all.
[00:00:55] Welcome to episode 189 of the Artificial Intelligence Show. I'm your host, Paul [00:01:00] Roetzer. I'm with my co-host Mike Kaput. We are back after, I guess, a week away. We were technically kind of away for two weeks, but we did drop an episode right before Christmas, so we are recording this on Monday, January 5th.
[00:01:13] It is good to be back. I tried real hard not to work a ton over break, Mike, but then you wake up. On days we do the pod, I'll get up at like 5:00 AM and start prepping for the podcast, and it was just like, all right, we're back in it. Yep. But there was so much I wanted to talk about while we were away, even though, you know, new models weren't dropping and stuff in the 10 days or so in between these episodes.
[00:01:36] But a lot happened, and there were a lot of, like, conversations online in the AI circles that we want to get into. So we're gonna do a variation of our usual weekly format here. If you're new to the podcast, I'll explain kind of the format. If you're not new, you're familiar with what we do.
[00:01:55] Basically each week we do three main topics, the big three things that we want to talk [00:02:00] about, and then we do rapid fire items, and usually it's about seven to 10 of those. So today is gonna be a bit of a hybrid. We were gonna do just all rapid fire to try and get through everything from the break.
[00:02:12] But there's one topic up front that just sort of took on a life of its own in some ways as I was prepping today. So we're gonna have kind of one main topic, but you'll see throughout the episode how everything else sort of connects to this lead-off topic, I guess. So that's gonna be the format today.
[00:02:29] We'll have one main topic, and then a couple of the other ones will go a little bit longer, but mostly we're just gonna do rapid fire style here, and it probably will run a little bit longer than our usual, you know, hour ten, hour fifteen, I think, is what they usually run. Alright, so this episode is brought to you by AI Academy by SmarterX.
[00:02:45] AI Academy helps individuals and businesses accelerate their AI literacy and transformation through personalized learning journeys and an AI-powered learning platform. There are nine professional certificate course series available on demand right [00:03:00] now, with more being added each month. There are also over 20 gen AI app reviews.
[00:03:04] We drop a new app review every Friday, so definitely check those out. If you're already a member, make sure you're checking out those new ones as they drop each week. This week we wanna spotlight the AI for Industries collection. Each collection is made up of series and certificates, and there are three currently available.
[00:03:23] AI for Professional Services, which again is a certificate series, AI for Healthcare, and AI for Software and Technology, and then coming very soon, AI for Insurance. So again, the idea is each month we'll be dropping one or two AI for Industries collection series and certificates. So check those out as we add more.
[00:03:42] If any of those sound great to you and you are already an AI Mastery member, they're on demand for you right now. If you are not, you can buy an individual series, so you can just go get AI for Professional Services as a standalone series, or become an AI Mastery member, or, you know, buy memberships for your team.
[00:03:59] And [00:04:00] all of these certificates and series are right there. You can learn more about AI Academy and our AI Mastery membership program at academy.smarterx.ai. Again, that is academy.smarterx.ai.
[00:04:14] AI Pulse Survey
[00:04:14] Paul Roetzer: Okay. Each week we also do an AI Pulse survey where we ask questions of our audience.
[00:04:20] These are informal polls to see how our audience feels about topics that we talk about, and the questions are a prelude to a couple of the things we're gonna get into today. There are two questions this week. You can go to smarterx.ai/pulse to participate in this poll. The first question is: to what extent are people problems, which we're gonna talk about, fear, resistance to change, lack of buy-in, hindering your organization's AI adoption?
[00:04:47] So again, how much are people problems affecting your organization's AI adoption? And then the second question is: how concerned are you about a fast takeoff, in which AI begins to significantly [00:05:00] impact jobs before society and the economy have time to adapt? That is gonna be a big focus of today's conversation, Mike.
[00:05:07] So again, go to smarterx.ai/pulse if you wanna participate in that poll, and then join us for episode 190, where at the start we will give you a breakdown of how that poll plays out. Alright, Mike. So, like I said, not a lot of breaking news. I mean, a couple of crazy acquisitions that no one was predicting at the end of the year, including one on Christmas Eve where I'm sitting there like, oh geez, I was not ready for that one.
[00:05:36] But again, just big picture topics. So let's get into the first main topic of the day.
[00:05:41] How Close Are We to AGI?
[00:05:41] Mike Kaput: Yeah, for sure. Paul, it's good to be back. We're starting off with something there was a bunch of chatter about over the break, and we'll get into it: certain model capabilities and how close we're getting to what some might call AGI.
[00:05:56] So a few different stories here and some [00:06:00] commentary around this topic. First, the evaluation group METR estimated that the new Claude Opus 4.5 model has what they call a time horizon of nearly five hours. We've talked about this before, but this metric measures the maximum duration of a task, measured by how long it takes a human expert, [00:06:19] that AI can complete successfully at least half the time. Researchers noted that their current testing suite is actually reaching its limit for measuring these upper bounds, and the result is the highest time horizon the group has published to date. So Claude Opus 4.5 is essentially miles ahead of their previous benchmarks in doing these long time horizon tasks.
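To make that metric concrete, here is a rough, back-of-the-envelope sketch of the idea in Python: fit how an agent's success rate falls off with human task length, then find the length where the fitted curve crosses 50%. The data points below are made up for illustration; this is not METR's actual dataset or methodology.

```python
# Toy illustration of a "50% time horizon": fit a logistic curve to
# (task length, success) outcomes and find the 50% crossover point.
import numpy as np

# (human-expert minutes to complete the task, did the agent succeed? 1/0)
tasks = [(2, 1), (5, 1), (15, 1), (30, 1), (60, 1), (120, 1),
         (240, 1), (240, 0), (480, 0), (960, 0), (1920, 0)]

x = np.log2([t for t, _ in tasks])          # work on a log scale
y = np.array([s for _, s in tasks], dtype=float)

# One-variable logistic regression via plain gradient descent.
a, b = 0.0, 0.0
for _ in range(5000):
    p = 1 / (1 + np.exp(-(a * x + b)))      # predicted success probability
    a -= 0.01 * np.mean((p - y) * x)        # gradient of the log-loss
    b -= 0.01 * np.mean(p - y)

horizon_minutes = 2 ** (-b / a)             # where predicted success = 50%
print(f"Estimated 50% time horizon: ~{horizon_minutes / 60:.1f} hours")
```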
[00:06:42] Now, a separate analysis at the same time from Epoch AI indicated that the overall rate of AI progress has nearly doubled in the last two years. The group's capabilities index identifies a sharp inflection point in early 2024, where improvement rates jumped significantly, driven largely [00:07:00] by the emergence of reasoning models and reinforcement learning.
[00:07:03] We've talked before about how METR's data specifically shows task horizons are doubling roughly every seven months, which basically means that if these trends hold, researchers are predicting that AI agents could reliably handle tasks as long as a week within the next two to four years.
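The arithmetic behind that two-to-four-year prediction is easy to check yourself. Starting from a roughly five-hour horizon and doubling every seven months (both figures from the METR discussion above):

```python
# How long until the time horizon reaches a week, under steady doubling?
import math

current_horizon_hours = 5      # ~Opus 4.5, per METR
doubling_months = 7            # METR's observed doubling period

def months_until(target_hours):
    doublings = math.log2(target_hours / current_horizon_hours)
    return doublings * doubling_months

print(months_until(40))    # one 40-hour work week -> 21.0 months (~2 years)
print(months_until(168))   # a full 168-hour week  -> ~35.5 months (~3 years)
```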
[00:07:22] Now Paul, this kind of kicked off because there was all this buzz around Claude Opus 4.5, and Claude Code specifically, in these AI circles on X over the holidays. Some of it was spurred by these findings. Other posts were from several notable people basically saying that they felt these tools indicate we're nearing, or already at, some type of AGI.
[00:07:45] I don't know if this was just coordinated conversation or what, but when you looked at it, can you kind of break it down for us? What were people talking about, and what does it mean for where we're headed?
[00:07:56] Paul Roetzer: Yeah, so just to recap, Opus 4.5 came out November [00:08:00] 24th, right around Thanksgiving, and we talked about it on episode 183 on December 2nd.
[00:08:05] So it wasn't news that Opus 4.5 was out. When they released it, they also released an updated version of Claude Code, which is basically their coding assistant. So if you're not a coder, you're probably not experiencing Claude Code and sort of feeling what other people are feeling in this space.
[00:08:22] So, you know, it's interesting. As I said, I tried not to work much during the holidays. My work usually consists of, I still get up at like 6, 6:30 in the morning, and then while I'm drinking my coffee I'm checking Twitter and going through, making notes, flagging things, liking things, putting 'em in our sandbox of stuff to talk about, before the kids are kind of up and moving.
[00:08:43] So I would get my couple hours of just regular monitoring of what's going on. And right around Christmas, you just started to notice this weird, like, bubbling about Claude Code. It was just everywhere all of a sudden in the circles of people that I [00:09:00] follow. And so the way I went back this morning and tried to piece the timeline together is I just went back and looked at my likes, because I'll usually like, and sometimes retweet, things or share, you know, context, but a lot of times I'm just liking things almost as a bookmark for me for the show.
[00:09:15] So, best I can tell, and I'm gonna walk you through the timeline because I think it's really interesting how this did sort of emerge: Igor Babuschkin, and I'm probably saying the name wrong, but the xAI co-founder. He is a co-founder of xAI with Elon Musk, formerly of Google DeepMind and OpenAI. On December 26th, the day after Christmas, he tweets: Opus 4.5 is pretty good.
[00:09:40] Then Andrej Karpathy, who we've talked about many times on the show, a co-founder of OpenAI and a renowned AI researcher, replies and says: it's very good. People who aren't keeping up even over the last 30 days already have a deprecated worldview on this topic. That one [00:10:00] to me was already interesting, just to have him saying this and reinforcing this idea of how quickly these things are moving.
[00:10:06] The thing we often say, Mike, is that today's version of AI is the dumbest form we're ever gonna have in human history, reminding people all the time that the labs have smarter, more powerful versions of the models than you do. And so a lot of the time we spend on the show talking about what these people are saying and doing, part of the reason we do that is they are seeing and experiencing the world differently than you are.
[00:10:29] They have access to things that you and I do not. And so trying to piece together what they're saying, what they're doing, what research papers they're putting out, gives us the ability to look around the corner. Now, back when I started researching AI in 2011, 2012, you could see around the corner like 18 to 24 months.
[00:10:48] There was a pretty good sense, and in some cases you could argue we were seeing around the corner by a few years. That timeline has shortened dramatically. So now when we see around the corner, quote [00:11:00] unquote, it's probably like three to six months, maybe, if we're lucky. And that's for the people who are tuned in.
[00:11:06] Now, for the rest of the business world who aren't paying attention, you may still be seeing a year or two in advance of what they're seeing. But for Andrej Karpathy, who everyone in the AI world pays attention to, to be saying it's really good makes people stand up and listen. So now, for context, let's rewind to March of 2025.
[00:11:26] So 10 months ago, Dario Amodei gave an interview at a Council on Foreign Relations event, and I'm sure we talked about it on the podcast at the time, but there was a quote from Dario that kind of captured the headlines. He said at that conference: if I look at coding, programming, which is one area where AI is making the most progress, what we are finding is we are not far from the world,
[00:11:52] I think we'll be there in three to six months, where AI is writing 90% of the code, and then in 12 months, we [00:12:00] may be in a world where AI is writing essentially all of the code. Okay. So at that time, most people thought that was crazy. Now, we always caution people that Dario doesn't tend to exaggerate.
[00:12:14] So usually if he says something like that, he has already seen it or sees a very clear path to something like that occurring. Okay. So with this context: on December 26th, also the day after Christmas, Jackson Kernion, who is an Anthropic researcher, tweets: I'm trying to figure out what to care about next. I joined Anthropic four-plus years ago, motivated by the dream of building AGI.
[00:12:40] I was convinced from studying philosophy of mind that we're approaching sufficient scale and that anything can be learned in a reinforcement learning environment. So now I feel like Opus 4.5 is as much AGI as I ever hoped for and I'm not sure I know what I want to spend my waking hours [00:13:00] focused on.
[00:13:00] And then he goes into some of the things he's thinking about, specifically around alignment and safety as one of the key things. So the next day, after this tweet blows up, because now you have someone at Anthropic saying, yeah, we basically kind of got to AGI, I think, with Opus 4.5,
[00:13:17] like, what am I gonna do next? He said: some reactions to my AGI framing are a refreshing reminder of people's current experience with chatbots. To use Claude Code is to see Claude write arbitrary software, run into errors, reliably fix them, make helpful suggestions, and perfectly follow any given instructions.
[00:13:37] We don't yet have a great experience like this for non-coders. Try Claude Code in the Claude app on non-code tasks, though, is what he said. But I think that's largely an app problem, not a machine intelligence problem. So Mike, I'm gonna stop there for a second, but I'll keep going, 'cause it got more interesting.
[00:13:55] This is where the AGI thing started to sort of emerge again, from best I [00:14:00] can tell. And I don't know if you were paying attention as this was going along throughout the break too, but this is where my ears really started to perk up. I'm like, okay, this is gonna be an interesting break.
[00:14:09] Mike Kaput: Yeah, exactly. I started hearing these signals too, and honestly, over the last couple months, hearing people talk about Claude Opus 4.5 led me to really dive back into it. And I don't know what it is. Again, like we've said, a lot of times it's really just kind of going on vibes, but there's something different about this model, and maybe it's simply the personality or the way it works with how my brain works.
[00:14:32] But I had a few moments where I took a step back and was like, whoa, okay, that did a lot of things that I didn't exactly expect it to do, at a really, really high level of competence. So I can assume on the coding side it's supercharged, and that's what's driving this discussion.
[00:14:50] Paul Roetzer: Yeah, and as we've said, the precursor to the disruption of all knowledge work is coding and AI research, because that is the most valuable thing for these labs to build.
[00:14:59] [00:15:00] So they're fine-tuning models to be great at doing this. If 90% of all code is being written by the AI, that enables them to generate more code and take more shots on goal from an AI research perspective. So this is the first domino. So then we go to January 2nd. There are, you know, other things happening,
[00:15:17] other tweets to note. But the one that then jumps out and just kind of seems to blow up is from Jaana Dogan, a principal engineer at Google. This, as of this morning, has 7 million plus views on X. So she tweets: I'm not joking, and this isn't funny. We have been trying to build distributed agent orchestrators at Google since last year.
[00:15:41] There are various options. Not everyone is aligned. I gave Claude Code a description of the problem. It generated what we built last year in an hour. So now this is a principal engineer at Google, one, admitting that they're using Claude Code, which they should be, they should be testing the other models,
[00:15:59] but [00:16:00] then, two, just straight up saying it did what we just spent a year doing. So someone asked her, well, what did you give it, basically? And she replied: it wasn't a very detailed prompt, and it contained no real details, given I cannot share anything proprietary.
[00:16:16] I was building a toy version on top of some of the existing ideas to evaluate Claude Code. It was a three-paragraph description. Someone then replied and said, when will Gemini get to this point? And she replied: we are working hard right now, the models and the harness. Now, I gotta be honest, I didn't actually know what the harness meant.
[00:16:33] So here's a little tip: if you are an X/Twitter user, Grok is phenomenal as an integrated element of X. Anything you don't understand, you can literally just click the Grok icon and it'll pull up, and then you can just talk to it. So I said, explain harness for me. A harness refers to the agent harness or orchestration harness, which is the software infrastructure and frameworks that surround a [00:17:00] foundational model like Gemini or Claude to turn it into an effective AI agent system.
[00:17:04] So key aspects include tool calling, memory management, multi-step reasoning loops, context persistence, and execution control and verification. So basically what this Google engineer is saying is, Anthropic crushed this. They managed to take the base model and then build this infrastructure around it, which everyone is trying to do.
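If you want a feel for what a harness actually does, here is a deliberately tiny sketch of that loop in Python. Every name in it (the tools, the model's decide method, the action format) is invented for illustration; real harnesses like Claude Code are far more elaborate than this.

```python
# A minimal agent-harness loop covering the pieces Grok listed:
# tool calling, memory, a multi-step reasoning loop, and execution control.

TOOLS = {
    "read_file": lambda path: open(path).read(),
    "run_tests": lambda _: "3 passed, 1 failed",   # stubbed for the sketch
}

def run_agent(model, task, max_steps=10):
    memory = [{"role": "user", "content": task}]        # context persistence
    for _ in range(max_steps):                          # multi-step loop
        action = model.decide(memory)                   # model picks next move
        if action["type"] == "final":
            return action["answer"]                     # done: return result
        result = TOOLS[action["tool"]](action["arg"])   # tool calling
        memory.append({"role": "tool", "content": result})  # memory management
    return "step budget exhausted"                      # execution control
```

The point of Jaana Dogan's tweets is that the model and this surrounding scaffolding have to be good at the same time, and Anthropic currently ships both together.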
[00:17:25] So that was January 2nd. Then, also on January 2nd, Rohan Anil, and I don't know if that's his actual name or just the Twitter name, I didn't have time to check into that, but this is also January 2nd. He says: my current theory is that everyone was secretly using Claude Code to do real work and seeing improvements from Sonnet to Opus.
[00:17:48] Then, once Karpathy said he used it over the holidays, going back to our first tweet on the 26th, everyone else followed. My entire timeline is Claude Code heaven. So this then [00:18:00] blows up, because someone says, even Google engineers, and then retweets the Jaana Dogan post that we just talked about. So then Rohan replies: I used to be a Google engineer too, leveled up all the way, and feel if I had agentic coding, and particularly Opus, I would have saved myself, my first six years of work compressed into a few months.
[00:18:24] That then blows up. So then he comes out and he's like, hey, full disclosure, I work at Anthropic now, but I was at DeepMind in January. He led work on the Gemini models and, prior to that, worked on the Google Brain team on foundational research around training algorithms. So this is not some random person on X.
[00:18:43] So now we have a current Google engineer, and we have a former Google engineer who was a lead on building Gemini, who is now saying this is different, like we have entered a different realm. Then on January 3rd, so just two days ago, we have David Holz, who is the founder of Midjourney. [00:19:00] So he tweets: I've done more personal coding projects over Christmas break than I have in the last 10 years.
[00:19:06] It's crazy. I can sense the limitations, but I know nothing is going to be the same anymore. To which Igor, who we talked about earlier, the co-founder of xAI, said: there are decades where nothing happens, and there are weeks where decades happen. To which Elon Musk replies: we have entered the singularity.
[00:19:26] So we've talked about, you know, Ray Kurzweil's books on the singularity, The Singularity Is Near and The Singularity Is Nearer. So, the singularity. Again, I pulled Grok into this, like, hey Grok, explain the singularity: it's the hypothetical future point where technological growth, driven primarily by AI, becomes uncontrollable and irreversible, leading to profound and unpredictable changes in human civilization.
[00:19:51] So we've talked about the singularity on this podcast many times. The basic premise was first we would get to AGI, where it's basically, you know, average [00:20:00] human level at most cognitive tasks. Then we would get to superintelligence, where it's beyond human level at all tasks. And then we end up in this self-improving realm of the singularity.
[00:20:11] So going back to Amodei in March, when asked about the impact, well, if you're doing 90% of your code with AI, what happens? He said: I think that eventually all the little islands, meaning all forms of work, will get picked off by AI systems, and then we will eventually reach the point where AI can do everything that humans can do.
[00:20:32] And I think that will happen in every industry. I think it's actually better that it happens to all of us than to kind of pick people randomly. I actually think the most societally divisive outcome is if randomly 50% of jobs are suddenly done by AI, because what that means, the societal message, is: we're picking half.
[00:20:53] We're randomly picking these people and saying, you are useless. You are devalued. You are unnecessary. So then the interviewer says: and [00:21:00] instead we're going to say, you're all useless. Laughing, Amodei says: well, we're all going to have to have that conversation. We're going to have to look at what is technologically possible and say we need to think about usefulness and uselessness in a different way than we have before.
[00:21:15] Our current way of thinking has not been tenable. I don't know what the solution is, but it's going to be different. We're all useless, right? We're all useless is a nihilistic answer. We're not going to get anywhere with that answer. We're going to have to come up with something else. So Mike, I'm gonna share,
[00:21:32] to kind of wrap this session, a personal experience to take this outside of the coding realm. But did you have any thoughts on any of those notes or tweets before I move on?
[00:21:42] Mike Kaput: No, I mean, I think you're about to get to this, but it just feels like a weird tipping point in using, at least, Opus 4.5 specifically for some of the knowledge work and complex documents and strategies I've been working on, especially over break.
[00:21:57] Yeah, and I think the models are all within, what, three to six months of each other. So this isn't even an Opus 4.5 promotion right here. It's this idea that something feels like it changed a little bit, and it sounds like you might have experienced that too.
[00:22:11] Paul Roetzer: Yeah, so I will. And again, I know our whole staff listens to this podcast, and sometimes our staff learns about things through the podcast.
[00:22:20] So I'm gonna preface this with: I haven't talked to anybody about this. Like, literally, I haven't really interacted with the team much over the last 10 days. I've been pretty much, you know, just kind of hanging with my family and playing games and going to the gym and reading stuff online.
[00:22:36] But in my quiet moments in the mornings, I was in business planning mode. And I've shared this story before, but I have a Co-CEO GPT I built. Actually, the template to do this is online, I shared it publicly, I did a whole webinar on how to do this, and there's a class in our Academy on how to do this.
[00:22:53] So I'm just gonna preface this by saying this Co-CEO is a GPT I built that has knowledge about our [00:23:00] company, our revenue plans, our growth plans, our structure, what matters to us, our value system, things like that. So the Co-CEO already has a knowledge base of some context. And I would say our business, SmarterX, is at a bit of an inflection point.
[00:23:14] We consider ourselves sort of an AI transformation company. Our job is to drive transformation for individuals and for businesses. The foundation is, we are a media and research company. We create content and create value through things like this podcast and through doing research and sharing that with people. That builds an audience, and then our revenue model is largely education and events.
[00:23:36] That's how we grow the business. And the business is going very well; it's growing very quickly. This business is already two to three times the size my agency was when I sold the agency, so this is a much bigger business already than what we built back then. So right before break, I had meetings with my accountants and my attorney to explore [00:24:00] a few paths forward, directions we may go with the organization.
[00:24:04] When I look at the future of our company, again, I'm putting my CEO hat on here, but translate this to what you do: whatever problems you're trying to solve, the growth challenges you have in front of you, the strategies you have to build. So as I'm explaining my scenario, put yourself in your own situation as you go into this year.
[00:24:23] So there are unknowns and complexities to both paths forward, and I would say parts of these two paths fall outside of my expertise. For me as the CEO, I have to think about finance, I have to think about HR, operations, legal, customer success, sales, et cetera, and I don't have leaders in place in all those areas.
[00:24:44] And so I have to be the chief X in many of those cases for our company. Plus, my experience and the experience of our team is a bit of a limiting factor, because scaling a company to potentially hundreds of thousands of customers [00:25:00] is not something I or anyone on our team has ever done. So every day I wake up in a place I've never been in before and have to figure out how to build the company.
[00:25:11] So without getting into a bunch of proprietary detail, we'll call Path A Scale and Path B Hyperscale. Both are really good scenarios to be in, but one, you know, scales pretty quick and one scales much faster. So honestly, I'm at a crossroads, and I'm trying to make a lot of really important decisions simultaneously, and I'm sometimes not sure which domino has to fall first.
[00:25:37] And so who do you talk to about this stuff? I mean, I have a great attorney. I have amazing accountants. I have advisors in other areas. I have a lot of friends who've done a lot of amazing things in business I can reach out to for specific things. But I do not have a co-CEO, like someone who's been through this.
[00:25:56] But I do have ChatGPT 5.2 Thinking in my [00:26:00] Co-CEO. So I started a thread, and I won't get into all the proprietary context, but I basically said: here's Path A, Scale, and Path B, Hyperscale. Let's analyze this together. As a starting point, create a list of all the questions I should be asking when comparing the two paths.
[00:26:16] Then we will go through each of those questions one at a time. This was Christmas Eve morning, I wanna say, when I started this thread. It then, in about a minute or so, comes back with 56 questions. It breaks them into 10 categories: North Star and personal objectives, strategy clarity, market and competitive dynamics, growth model and unit economics, financial realities, partner alignment, stakeholder politics, funding, governance, business structure and tax considerations, and execution readiness.
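If you wanted to reproduce that question-first pattern programmatically, a sketch might look like the following, using the OpenAI Python SDK. To be clear, this is an illustration only: Paul's Co-CEO is a custom GPT inside ChatGPT, not API code, and the system prompt, model name, and wording here are all hypothetical.

```python
# Sketch of the "enumerate the questions first, then work through them
# one at a time" prompting pattern, in API form.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM = ("You are a co-CEO advisor with context on our revenue plans, "
          "growth plans, structure, and values.")

history = [
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content":
        "Here's Path A (scale) and Path B (hyperscale). As a starting "
        "point, create a list of all the questions I should be asking "
        "when comparing the two paths. Then we will go through each of "
        "those questions one at a time."},
]

resp = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant",
                "content": resp.choices[0].message.content})

# Each follow-up re-sends the growing history, so the thread keeps its
# context, just like continuing one long conversation in ChatGPT.
history.append({"role": "user", "content": "Let's start with question 1."})
resp = client.chat.completions.create(model="gpt-4o", messages=history)
```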
[00:26:50] I would estimate I spent maybe 10 to 15 hours over break talking to Co-CEO in this thread, each morning and then periodically throughout the day when I would [00:27:00] have another thought. If I had not had Co-CEO, like if I hadn't had this GPT, we're talking about 200-plus hours of manual research, note-taking, summarization, and writing.
[00:27:15] I mean, I haven't counted, but I would guess there's somewhere north of 20,000 words in this thread, which, for comparison purposes, the book Mike and I wrote is 50,000 words. So over break, I had a conversation that's probably the equivalent of half of a book. And in the end, and I'm not done yet, but as of my stopping point yesterday, I produced six documents that each would've taken me five to 10 hours to create.
[00:27:46] Once I edit and finalize those documents, Co-CEO will have written 95% plus of the final words. So I guess my point here, Mike, is the state of AI is not just a story of automating AI [00:28:00] research and coding. We're talking about coding because that is the thing all the labs are focused on, right? But the business world and the future of work has fundamentally changed already.
[00:28:09] Most practitioners and leaders just aren't aware of how much. But I know you and I both pursue this in a very similar way, Mike. And so what I'm saying is, when I produce these final documents for my team, I am the expert. I'm the CEO, I'm the leader. It has to be my voice, my tone. It has to be me, and it will be, because I'm the one that pushed the model.
[00:28:33] I'm the one that thought of the questions to ask, and then took it down different paths and said, you know what, let's not go to the next 10 questions, let's push on this one, because now I'm thinking differently about something. And when I started five days ago, I thought Path A was this, and now I'm actually thinking Path B is possible, but now I need to solve these five things there.
[00:28:53] ChatGPT would've never gotten to the end game without me, without a collaboration, I guess [00:29:00] is what I'm saying. But in the end, the final output is 95% plus what ChatGPT wrote, not words I personally wrote. And I don't see anything wrong with that from a leadership perspective. The whole point is to get to the end game, to do this the right way, to solve for the customer, to build the right team, and to put the right funding in place. To make all these decisions, not spend the first quarter thinking about it and researching it.
[00:29:26] I wanna spend the first quarter doing the thing. And so the faster I can get to the point where I can do something, the better it is for everyone involved, all stakeholders. So I guess what I'm saying, Mike, is I'm making the argument that the same thing we're seeing in coding probably happens in business, if you know how to use these tools.
[00:29:47] Mike Kaput: Yeah, couldn't agree more. I've had a very similar experience with a few things I've been working on. Nothing nearly as complex as what you're talking about, but I found that my end result was basically 95% written [00:30:00] by an AI model. And not only did it save me 90% of the time it would've taken, but the output is a million times better, because I spent a huge amount of time on what I should actually be doing, which is thinking, reasoning, and verifying AI outputs, rather than figuring out how to do the research or which questions to even ask.
[00:30:18] I've always been impressed over the last few years by the technology's ability to act as a thought partner, but something changed in the last three to six months, maybe. It's breathtaking.
[00:30:30] Paul Roetzer: Yeah, and again, the point we're making here is not to just let AI do everything.
[00:30:35] What this did is, I couldn't call my attorney during break, I couldn't call my banker, I couldn't call these people at, like, random times when I was thinking of stuff. But if I didn't have this tool, if I didn't have this Co-CEO, I'd be here on January 5th like, all right, reach back out to my attorney.
[00:30:51] Like, okay, can you re-explain that concept? I don't need that anymore. I did my own work on it. I understand the concept now, and now I can give my attorney direction and say, okay, here's what I've decided. [00:31:00] This is where we're gonna go, and this is what I now need you to do, because now I need your expertise that me and Co-CEO don't have.
[00:31:06] You're shortcutting to get to the final thing. And what is the final thing? Sometimes, when it's publicly created, like for this podcast, the final thing is very human, and it is not being done by the AI. And so, like, I don't wanna spend three months figuring out who to hire and job descriptions and all that.
[00:31:26] I wanna spend three months interviewing people face to face and meeting these people and finding the right people to put into place to enable Path A or Path B. So I wanna get to the human part of this. I don't wanna spend three months on the stuff that the AI is just really good at.
[00:31:40] Mike Kaput: Yeah, it's a really exciting time.
[00:31:43] I'm super excited to see how this evolves over even the next quarter. Yeah.
[00:31:48] AI Change Management
[00:31:48] Mike Kaput: All right. So next up, we are starting to see some signals that companies are failing to generate significant ROI from AI, likely not because they're facing a lack of technology, but because of a people problem. Paul, you actually wrote about this really well in a recent post
[00:32:05] about how pressing this issue is. You said, quote: if your company isn't generating significant ROI from AI adoption, then you have a people problem. You also argued that employees often view AI as a threat to their jobs or a replacement for fulfilling work, which creates resistance that can only be overcome through education and empathetic change management, not just through handing employees better AI technology.
[00:32:29] Now, this was backed up by a really interesting post from Jack Saslow, CEO of an AI transformation firm, who wrote, quote: this is what Silicon Valley doesn't understand about AI transformation. Technology is the easy part. Finding the problem is harder. But the hardest part, the part almost nobody wants to do, is the human work of driving change: sitting with people, earning trust, refining the product until it fits their hands, pushing until adoption actually happens. [00:33:00]
[00:33:00] So Paul, I love the post you put out. This obviously hits really close to home, because we've seen this pattern play out and experienced it for nearly 20 years in the marketing world, right? You give people marketing automation software or CRM systems, and we saw this on the agency and consulting side, nothing happens unless you actually shepherd them through the change, which is why transformation is so hard and often so expensive.
[00:33:24] So I'm curious about your thoughts here: why is this so important now, and what is Silicon Valley missing?
[00:33:30] Paul Roetzer: Yeah, and I think in the context of what we just shared, this makes a lot of sense as to why I put this on LinkedIn and why it sort of hit home for me at the moment. So I was in the midst of going through this Co-CEO conversation over break,
[00:33:42] in the midst of watching these Claude Code posts happen. And Jack's post was on January 1st, New Year's Day. I'm reading this post like, man, this nails exactly what we've been talking about. He breaks down three things working together to get meaningful ROI: business [00:34:00] acumen to find the real problem, technical skill to build a solution that works,
[00:34:04] and then people skills to drive behavior change. And he said these capabilities rarely exist in one person; they barely exist in most teams. So again, I am the example here. I don't have all these expert skills. I have general capabilities in a lot of the areas required of me to run this company and grow this company, but I'm not an expert in those things.
And even across our team, we don't necessarily have the full expertise needed. But because we understand what AI's capable of, because we know to ask the questions, and because we understand what problems could potentially be solved more intelligently, we can drive that change, and we have a company of people willing to use the tools to drive that change.
[00:34:41] But Mike, you and I interact with organizations every day that are filled with people who do not want to do this. They do not want, for different reasons, to embrace AI. Maybe they're just not convinced it's important enough yet. Maybe they do fear for their job. Maybe the leadership just hasn't committed yet, and so there's [00:35:00] no motivation for them to do it.
[00:35:01] Whatever the reasons are, it basically comes back to a people problem. We have demonstrated already, again, with just the Co-CEO example and some of the things you're sharing, that the technology is already good enough. If it stops today, it is good enough to completely transform your work, regardless of what your role is or what industry you are in.
[00:35:22] If you do knowledge work, it is already at the transformational point. If you are not seeing that transformation and the ROI, that is a people problem. Either the leaders are the wrong leaders who don't get it and aren't pushing hard enough, or the people within the organization are not understanding the significance and what's gonna happen to their careers, and they don't have a sense of urgency to do it.
[00:35:47] So it is not a tech problem, unless your company refuses to provide the technology to the people. So again, over break I'm just wanting to explode, because you're [00:36:00] seeing this. I'm living it in the moment, seeing the transformation in our own business, what's possible to do.
[00:36:06] I couldn't have done this a year or two ago. Until we had reasoning models, I couldn't have done what I was doing over break, right? And so we see it, and we look at all these other companies and all these other people's jobs, and they don't see it yet. They're just missing it. And it goes back to that example of, you know, Andrej's quote about how sometimes 30 days is all it takes.
[00:36:26] And it's like the world changed and you just didn't know. So if over break you literally just went offline and weren't paying attention to anything: good. I'm all for that. I hope our team did that and unplugged over break. I don't have that ability. I can't mentally do it, because I know sometimes it just takes a day, two days, three days.
[00:36:47] I mean, half of what I just shared happened on New Year's Eve, Christmas Eve, Christmas Day, and New Year's Day. I still spent 20 hours with my family those days, but I learned the things I needed to learn in the quiet [00:37:00] moments.
[00:37:00] Mike Kaput: Yeah, for sure. And I wonder, too, if this is why you see
[00:37:04] individuals racing ahead with AI, right? Because there are no real barriers in your own life to experimenting with this except time. You don't have any change management to do except changing your mind and your habits, which are big, but you don't have a lot of barriers. So you see these people saying, look, there are these wonders possible that you're not seeing in your company, because it's just a very different level of velocity you're able to achieve if you're a solo programmer, an entrepreneur, or a leader, whatever.
[00:37:30] Paul Roetzer: Yeah. And I'm just more convinced than ever that the gap between the AI-forward professionals and leaders and everybody else is going to get so significant, so fast. And the AI-native and AI-emergent companies, like the AI-forward companies, have just a massive, massive advantage right now.
[00:37:50] And I think it's gonna be a very, very disruptive year for the people and for the companies that continue to sit on the sidelines and not go [00:38:00] full steam ahead. Again, just from what we're living ourselves each day, I don't know how you compete with people who have the knowledge of what these things are capable of and apply them to build their businesses.
[00:38:12] Like, I just don't get it, how they don't become obsolete in, like, one to two years.
[00:38:18] OpenAI Is Hiring a “Head of Preparedness”
[00:38:18] Mike Kaput: All right. Next up, OpenAI has opened a search for a head of preparedness to manage the emerging risks associated with its most advanced AI models. CEO Sam Altman described the position as a, quote, stressful job that requires jumping into the deep end immediately.
[00:38:36] And he noted that while models are improving quickly, they're beginning to present real challenges. This role actually offers a base salary of over half a million dollars plus equity, and involves leading the company's technical strategy for tracking frontier capabilities that create risks of what they would call severe harm.
[00:38:56] According to the job listing, this new executive will oversee [00:39:00] the preparedness framework at the company, designing evaluations and safeguards for all these high-stakes areas where AI can present troubling capabilities, including things like cybersecurity, biosecurity, and deceptive behavior. Altman explained the company needs a more nuanced understanding of how these expanding capabilities could be abused, and he cited specific concerns regarding systems that can self-improve or find critical computer security vulnerabilities, as well as some of the early signs that AI can have an impact on mental health, which the company was dealing with in 2025.
[00:39:36] So Paul, I guess what struck me here is that you don't create a role like this, at this salary, unless you genuinely believe the next models you're releasing, or the ones you've already released, could cause physical or systemic harm. So whether you agree with OpenAI's assessment or not, it seems like they are putting their money where their mouth is.
[00:39:54] How are you looking at this?
[00:39:57] Paul Roetzer: I definitely think it's a sign that they are [00:40:00] very confident that they're very close, and that they need to take their preparedness framework much more seriously. You can go read the framework; the current version was released in April of 2025.
[00:40:13] And the one thing I'll call out that I boldfaced in revisiting it: of the tracked categories, you mentioned two of the critical ones, biological and chemical capabilities and cybersecurity capabilities. But one of the other ones they highlight is AI self-improvement capabilities, which, in addition to unlocking helpful capabilities faster, could also create new challenges for human control of AI systems.
[00:40:35] So this is something we talked about, especially in Q4 of last year, and you'll hear a lot more about this year: self-improvement, meaning the models can improve themselves. And the other component is continual learning. There was actually quite a bit of buzz over break in AI circles that Google in particular has made some advancements, which have not been publicly acknowledged yet, [00:41:00] around continual learning.
[00:41:01] Meaning, and again, this is a topic we've talked about on the podcast numerous times, when these models sort of come out of the oven, when they come out trained through their pre-training phase, they don't then have the ability to learn. They're kind of frozen in time.
[00:41:16] And the way they then provide answers to you or help you is by using tools like, you know, search and things like that. If the models can come out and then continually learn and make updates to their own knowledge base, that is a major unlock, and it leads to other things like memory and this ability to do self-improvement.
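To make the frozen-versus-continual distinction concrete, here is a toy contrast in Python with PyTorch. This is purely illustrative; it is not how any frontier lab implements continual learning, and the tiny linear model is just a stand-in for a real network.

```python
# Frozen weights plus outside context vs. folding new experience back
# into the weights. A toy contrast, not a real training recipe.
import torch
import torch.nn as nn

model = nn.Linear(8, 1)                          # stand-in for a trained model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def frozen_answer(x):
    # Today's deployed pattern: weights never change after training.
    # Fresh knowledge has to arrive through tools (search, retrieval).
    with torch.no_grad():
        return model(x)

def continual_update(x, target):
    # The hypothetical unlock: the model updates its own parameters
    # from new experience, so the knowledge sticks.
    optimizer.zero_grad()
    loss = loss_fn(model(x), target)
    loss.backward()
    optimizer.step()
    return loss.item()
```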
[00:41:38] So there's a lot of chatter around self-improvement and continual learning that tells me the labs are much farther along in both of those areas than we might be aware of. And so I think all the labs are starting to take this area very seriously.
[00:41:59] Khan Academy Creator Calls for Job Displacement Fund
[00:41:59] Mike Kaput: [00:42:00] Next up, in a new guest essay for The New York Times, Khan Academy
[00:42:04] CEO Sal Khan argues that AI is poised to displace workers at a scale the public does not yet fully realize. He opens the essay with a report from a venture capitalist he knows regarding a major call center in the Philippines that recently deployed AI agents capable of replacing 80% of its workforce.
[00:42:25] And Khan argues this rapid automation threatens a sector that currently generates up to 10% of the Philippines' GDP. Now, beyond call centers, he warns that autonomous vehicles and robotics will soon reduce human labor demands in diverse fields, ranging from long-haul trucking to software engineering.
[00:42:46] Now, these are all examples, but really the crux of this is that he proposes what he calls a, quote, 1% solution to address this. He's calling on companies that benefit from AI and automation to [00:43:00] dedicate 1% of their profits, not revenues, their profits, to worker retraining. He estimates that if the world's largest corporations participated, this initiative could create a $10 billion annual fund.
[00:43:13] This capital would support a centralized nonprofit platform designed to finance apprenticeships and verify skills for high-demand roles in industries like healthcare, construction, and education. So Paul, what struck me here is his framing. He said, you know, if companies don't do this, they're going to face backlash.
[00:43:31] Once we start seeing job loss, they might face backlash in the form of regulation, taxes, and bans on AI. And he also notes that if AI causes mass unemployment, there won't be anyone left to buy their products anyway. So what did you take away from this? I did find it notable, because Khan Academy is a huge education partner.
[00:43:48] They use AI plenty. They're not anti-AI at all. It's pretty striking.
[00:43:53] Paul Roetzer: Yeah, they're one of the early OpenAI partners. They infused ChatGPT into Khanmigo, which, again, is [00:44:00] like a learning assistant built right into the platform. Yeah, I mean, obviously I'm a big proponent of AI literacy and reskilling and upskilling.
[00:44:09] I like the idea of a very tangible, like, 1% tax. He's referencing specifically that the world's largest corporations have combined profits over a trillion dollars, so, you know, 1% of that is the roughly $10 billion a year, which would be nothing to them. I do agree that there's gonna be backlash as profits skyrocket and staffing levels are reduced.
[00:44:27] So take your big tech companies. We've already seen, you know, Amazon has alluded to many more layoffs coming. Meta, others. They're gonna keep reducing staff, regardless of what some of the other topics we'll talk about might suggest. Staffs will be reduced and profits will increase, and Wall Street will be very happy when that happens, which means more profits, you know, for these companies.
[00:44:52] And that eventually will cause, you know, political upheaval and societal revolt against these tools. I don't, [00:45:00] again, I don't want that to be the outcome, but I don't see how it's not the outcome in the near term.
[00:45:07] Paul Roetzer: I had mentioned, I think in the fall, something about an automation tax.
[00:45:11] So as these layoffs happen, and as robotics and AI agent automation replace people, these companies need to be paying probably some sort of tax. Again, I don't know where this ends up falling politically, but they're not gonna be able to just benefit from reducing the human workforce in favor of profits forever without backlash.
[00:45:33] So I will say, and let me see how I can frame this: I have made proposals to some organizations about the need for very high-level funding of reskilling and upskilling for workforces and economies, and I would say the response has been pretty lukewarm at [00:46:00] best in the past. I think it was maybe just too abstract, or people didn't want to admit the amount of disruption the technology was gonna have, and so they just weren't ready to have these conversations.
[00:46:10] But I do think at some point this is inevitable. Some version of government-funded or, you know, privately funded reskilling, you're gonna have to get into this. We see it in Ohio: there's a $40 million fund called TechCred, which provides, you know, reimbursements to companies that invest in technology education.
[00:46:28] I know of like eight states that have something similar, and I think Canada actually has a program somewhat like that. So whether it's done by states or federally, I think you're gonna see a lot of movement around this. And I think these tech companies are gonna wake up this year to the fact that they need to be putting hundreds of millions of dollars behind this.
[00:46:45] The question becomes: training for what? What jobs are they training for, if you know that their technology is basically designed to replace, I mean, we'll use the term augment for now, but [00:47:00] augment-slash-replace, all cognitive labor? What are they training for? What are they providing $10 billion to prepare them for, when no one seems to know what the jobs three to five years from now look like?
[00:47:13] That's the big challenge. I would say this is a space I think deeply about, and I'll probably have more to say down the road. But I'm glad Sal Khan is saying these things and proposing ideas. I think we have to have more of this kind of discussion.
[00:47:30] Jevons Paradox in AI
[00:47:30] Mike Kaput: All right. Next up, we're gonna talk about something called Jevons Paradox, which sounds like kind of a fancy term, but real quick, it's a 19th-century economic theory that basically
[00:47:41] argues that, historically, efficiency improvements in resources, in Jevons' original example, coal, have consistently led to massive increases in demand rather than reductions. The reason we're talking about this is that Box CEO Aaron Levie, who we've talked about a bunch, basically published an [00:48:00] essay saying that Jevons Paradox explains the future impact of AI agents on the labor market.
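The mechanism is easy to see with a constant-elasticity toy model: if demand for a resource is elastic enough, making it cheaper increases total spending on it. The numbers below are illustrative assumptions, not an empirical claim about AI.

```python
# Jevons Paradox in two lines of arithmetic, under an assumed
# constant price elasticity of demand.

elasticity = -1.5    # assumed: demand is elastic (|elasticity| > 1)
cost_drop = 10       # AI makes a unit of knowledge work 10x cheaper

quantity_multiplier = cost_drop ** (-elasticity)    # 10^1.5 ~= 31.6x tasks done
spend_multiplier = quantity_multiplier / cost_drop  # ~3.16x total spend

print(f"{quantity_multiplier:.1f}x the work, {spend_multiplier:.2f}x the spend")
```

Whether AI-era knowledge work actually has that kind of elasticity is exactly what Paul pushes back on below.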
[00:48:05] So he argues that AI agents are bringing this dynamic to what he calls non-deterministic knowledge work. They're automating complex tasks like coding, market research, and contract review, and by dramatically lowering the cost of investment for these activities, Levie predicts that small teams will soon possess the capabilities of a Fortune 500 company from a decade ago.
[00:48:29] Now, here's the crux of this: he says that, based on the paradox, displacement concerns about jobs are overblown. He concludes that the vast majority of future AI computing power will actually expand demand dramatically for all sorts of jobs and work, and it will end up being spent on work that doesn't even currently happen.
[00:48:49] So, for instance, software projects or medical research that are currently too expensive to justify. So Paul, one side of the jobs debate is this idea that [00:49:00] AI is going to create so much demand and so many new roles that we don't have to worry at all about losing jobs. Do you agree with that?
[00:49:07] Paul Roetzer: I don't. That's not gonna come as a surprise to anyone who listens to the podcast. I'm a big fan of Aaron's. I love what he's doing. He's one of the most intellectually engaged technology leaders, I would say, on this, and I would love for him to be right. This is a talking point that's echoed by others; Satya Nadella is a big fan of referencing
[00:49:32] the paradox. It comes up all the time with the tech leaders. I just don't think it's gonna play out the way they hope it does this time. And you and I talked offline, Mike, about this before we got on: I don't understand Aaron's angle here, like where his optimism comes from.
[00:49:51] And again, I love reading these other perspectives that, you know, maybe lean [00:50:00] toward the possibility that I'm not seeing something, that I'm just missing how this plays out. So I love this, and I encourage people to read it and, again, form your own perspective, and consider historical instances where we've learned, and whether there's something from that past that we can pull forward.
[00:50:20] I just don't. I don't know why he, he would be so optimistic. We've talked about like David Sachs, I get why Sachs has to be so optimistic 'cause he is part of the administration and he has to toe the political line. Like I get that. Yeah. I don't understand. The people who seem to be neutral have being so overly optimistic and maybe what it is, Mike, is that they just gloss over the near term pain.
[00:50:46] So let's say they're realistic that three to five years is probably gonna suck, that there's gonna be lots of job loss, but maybe they're just thinking in five-to-ten-year terms, where in the end all this new stuff we can't imagine gets created. And I don't disagree with [00:51:00] that. I do think the innovation is gonna be incredible, and it's gonna create all these jobs and all these businesses

[00:51:05] we can't fathom. That is absolutely true. But both things can be true at the same time: 10, 15 years from now, maybe there is this amazing future of abundance and life is amazing for all of us, but the next three to five years are gonna suck for millions of people. I think that's the part

[00:51:24] they just, for whatever reason, refuse to admit or drill into. They gloss over the shitty part to get to the future of abundance, because it's just a better frame of mind to live in, I guess. So again, I would encourage people to read it. I think there are just flaws in the argument.
[00:51:45] Anytime someone brings this up, I'd say: demand for knowledge outputs is not infinitely elastic. Just because we can create more doesn't mean there's more demand, for attention, for content online, for distribution of your products or services. Just because you can create more doesn't mean [00:52:00] people are gonna be there to buy more, especially if they're one of the millions who don't have a job in 12 months because of AI.

[00:52:06] I think there are just a lot of flaws in how people think about the speed with which this is gonna happen when they try to connect it to past transformations and general-purpose technologies. It's just gonna happen so much faster than society and the economy can prepare for.

[00:52:26] So again, I try to be as optimistic as possible, and I wouldn't be doing what I'm doing if I didn't believe there was a possible, really optimistic outcome. But I think we're all fooling ourselves if we think there isn't going to be a very, very messy transition to get to the future of abundance.
[00:52:44] Mike Kaput: Yeah, I couldn't agree more. I have to do a lot more research on the differing perspectives here, but let's take marketing as a quick example. If there are agents this year or next year that do a lot of the work I would need humans for, then sure, I may pay an elite-tier [00:53:00] marketer a top wage to help me with the final 10% of whatever we need done. But there are a lot of marketers in between who would have gotten me to that point that I didn't hire.

[00:53:09] So maybe the demand for the intelligence is still there, but I'm paying an AI company, not the marketers, at this stage.
[00:53:16] Paul Roetzer: Yeah. And I'll just be real about some of the things I've been thinking about at SmarterX. Part of what I've been working on is the organizational structure at different levels.

[00:53:27] You know, at $20 million, at $50 million, at $100 million; at 10,000 customers, 50,000 customers, 500,000 customers. What does it look like? And the thing I will say is, I am trying to find entry-level roles in there, and right now I cannot figure out what they are, because what I know we need is human expertise and experience.

[00:53:53] Expertise that can work with and build and manage agents, that knows what questions to ask of the [00:54:00] AI, and what to do with the answers. And so my challenge becomes: when the senior-level person, executive level, director level and above, is able to talk to these things and then say, okay, go do it.

[00:54:14] To your point, Mike: go build the campaign, write the emails, create the landing page, allocate the media spend, create the ads, buy the ads, go do the things we used to have entry-level people doing. I'm not convinced that AI agents can't do all of those things in the next one to three years.
[00:54:32] Paul Roetzer: And so, as someone who plans to employ probably a hundred-plus people, I'm not clear yet what the roles are for professionals with less than five years of experience. And I hate saying that. I'm truly trying to figure out what those people will do, because those are the future leaders of the company, but I don't know what the role is for them.

[00:54:56] And middle management, it's tough. I don't [00:55:00] know what they're gonna do. Again, I just don't know that the tech leaders sit down and think deeply about marketing, sales, customer success, operations, finance. I think they think about the technology and assume it just always works out, and I don't believe that's what's gonna happen in the near term.
[00:55:20] The Rise of Vibe Revenue
[00:55:20] Mike Kaput: All right, next up: we have entrepreneur Greg Isenberg warning that the tech industry is going through a phenomenon he calls, quote, vibe revenue: income driven by curiosity and novelty rather than genuine product utility. Isenberg argues that many AI companies are currently masking serious churn problems, because this initial wave of vibe revenue mimics the growth curves of true product-market fit.

[00:55:45] According to his analysis, customers often sign up for the cool new thing, then cancel their subscriptions within three to six months once that wow moment fades. He notes that founders frequently blame current model limitations for [00:56:00] this churn, missing the reality that better models do not automatically create daily user habits or higher switching costs.

[00:56:07] Isenberg concludes that sustainable businesses are the ones that get used on what he calls boring days and stressful days, whereas companies relying on novelty will eventually hit a retention wall that could render their valuations and employee equity worthless. So Paul, it's a bit of a commentary on AI startups, but how do you think about separating genuine AI product-market fit from vibe revenue when you're looking at the market?
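To see how novelty-driven churn can hide inside a growth curve, here is a toy cohort model (our illustrative numbers, not Isenberg's data): as long as new sign-ups grow fast, total active users keep climbing even though every cohort is bleeding out, and the retention wall only shows up once acquisition slows.

```python
# Toy cohort model of "vibe revenue" (all numbers are illustrative).
# Fast sign-up growth masks heavy churn; once novelty-driven
# acquisition fades, retention becomes the whole story.

MONTHLY_RETENTION = 0.70          # each cohort keeps 70% of its users per month

def active_users(months: int) -> list[float]:
    cohorts: list[float] = []     # users remaining from each monthly sign-up cohort
    signups = 100.0
    history = []
    for month in range(1, months + 1):
        cohorts = [c * MONTHLY_RETENTION for c in cohorts]  # everyone churns a bit
        cohorts.append(signups)                             # this month's new cohort
        history.append(sum(cohorts))
        # Novelty phase: sign-ups grow 30%/month; after month 6 the
        # wow factor fades and new sign-ups shrink 20%/month.
        signups *= 1.30 if month < 6 else 0.80
    return history

for month, active in enumerate(active_users(12), start=1):
    print(f"month {month:2d}: ~{active:5.0f} active users")
# Active users keep rising through month 7 and then roll over,
# even though per-cohort churn was identical the whole time.
```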
[00:56:33] Paul Roetzer: Yeah, I thought it was an interesting one. Over the break I was listening to a lot of podcasts and an audiobook; I was reading Blitzscaling, and I was listening to Lenny's Podcast, which has great go-to-market stuff. And I think, contextually for me, it's a spot I've been thinking a lot about, because even with our own growth, growth in sales isn't a challenge for our business.

[00:56:59] The way it's [00:57:00] been growing, it's all been inbound to date; it's a pretty fast scale. But I think about how you can't build a business and just keep replacing what you lose with new stuff. You have to build a world-class customer experience that drives expansion and retention and true value.

[00:57:20] So this idea of vibe revenue almost masking those problems is a fascinating one. From a go-to-market perspective, it's what business leaders really need to be thinking about: where we're at, what is now capable of being built and tested,

[00:57:40] and then staying true to the reality of building a sustainable, meaningful, impactful business that actually makes a difference and gets people wanting to be a part of it, wanting to come back each day, each week. So, yeah. Good read.
[00:57:57] Salesforce Says Trust in LLMs Is Declining
[00:57:57] Mike Kaput: All right. Next up, a report in The Information is ruffling a few feathers at Salesforce, because it says Salesforce executives are, in certain cases, advocating for reduced reliance on large language models, especially in their product Agentforce, due to reliability concerns.

[00:58:14] According to the report, company leadership is now promoting what they call deterministic automation, decisions based on predefined instructions, to counter what they describe as the inherent randomness of generative AI. A senior vice president at Salesforce noted in the report that the industry, quote, had more trust in the LLM a year ago, but that practical deployments of the technology have exposed limitations.

[00:58:42] For instance, they're seeing that customers often require these deterministic triggers to ensure the LLM performs the same way consistently, a hundred percent of the time, which, as we know, LLMs are not always great at doing. Salesforce's CTO for Agentforce [00:59:00] explained that LLMs often begin dropping instructions when given more than eight commands, and consequently the company is reintroducing, in some cases, basic if-this-then-that logic to lower costs and guarantee these agentic workflows follow exact steps.
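For readers, here is a minimal sketch of the general pattern being described: fixed if-this-then-that rules handle known triggers identically every time, and only unmatched requests fall through to a model call. This is our generic illustration, not Salesforce's actual Agentforce code; the rule table and the llm_fallback function are hypothetical.

```python
# Minimal sketch of "deterministic automation" wrapped around an LLM.
# Generic pattern for illustration only, NOT Salesforce's Agentforce code.

DETERMINISTIC_RULES = {  # hypothetical predefined triggers -> fixed actions
    "reset password": "Send the password-reset link from the auth service.",
    "refund status": "Look up the order in billing and report its status.",
}

def llm_fallback(request: str) -> str:
    """Stand-in for a real model call, which would be non-deterministic."""
    return f"[LLM] Free-form answer for: {request!r}"

def route(request: str) -> str:
    normalized = request.strip().lower()
    for trigger, action in DETERMINISTIC_RULES.items():
        if trigger in normalized:   # same trigger -> same action, every time
            return action
    return llm_fallback(request)    # everything else goes to the model

print(route("What's the refund status for order 1234?"))  # deterministic path
print(route("Summarize this angry customer email."))       # LLM path
```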
[00:59:16] So Paul, the PR response to this article is pretty interesting. On one hand, Salesforce seems to be saying that in certain cases they're falling back on deterministic logic, which is not really AI, to make sure the AI works consistently. But they also claim they're resolving 90% more customer issues with agents, and that they're really just optimizing where the agent works and where it doesn't.

[00:59:38] That is also a very different message than the one they've been touting in the past, which is that AI agents will transform your business and do all this work for you.
[00:59:47] Paul Roetzer: I think a few things are going on here. One, I think Salesforce is going to learn the bitter lesson, which we've talked about on the show before: basically, the concept that if there's a flaw in the AI now, [01:00:00] it'll get resolved by smarter AI.

[01:00:04] Thinking you can hand-craft these task-specific things, these specific ways for it to behave, gets obsoleted over time by just smarter models. I think there's a catch-22 here for Salesforce, though, which is that they've kind of painted themselves into a corner: they've bet everything on AI that isn't fully ready yet to do what they're envisioning, and selling, it to do.

[01:00:28] And while the AI might get there in six months, 12 months, 24 months, whatever it is, there are just a few more things that need to be solved with these models to get to the level of reliability that's expected if you're using it in a CRM integration. They can't wait around for that, so they're gonna go try to hand-code a bunch of stuff.
[01:00:49] I don't know. I'm not the technological expert, and I'm not in these systems building it, but when you zoom [01:01:00] out, this is just one of those things that doesn't pass the smell test. I feel like this is probably not gonna work out the way they're hoping, trying to take a technology that isn't meant to be deterministic and make it deterministic.

[01:01:15] And I don't know what the answer is. I don't know that there's a better path for them at the moment, because of the bet they've made. But Microsoft's kind of been in a similar boat. There are just these tech companies that made massive bets, went in very early on LLMs and infused them into everything, and the tech probably wasn't there yet when they made those bets.
[01:01:41] Mike Kaput: Yeah. And the backlash of taking this position is that not only might it not work, but you're also admitting that this thing you've hung your hat on may not be as ready for prime time, in certain cases, as you've promoted. So it's kind of a double whammy.
[01:01:55] Paul Roetzer: Yeah. And again, our only frame of reference, as I've [01:02:00] mentioned before, is that we're HubSpot customers.

[01:02:01] My agency was HubSpot's first agency back in 2007, so I've been a HubSpot customer for 18 years. I'm far more familiar with their platform than with Salesforce. But I feel like HubSpot was almost the Apple of the CRM world.

[01:02:18] They didn't go all in quite the way Salesforce did and make the massive, deep integration bet right away. They've been slower in terms of their integration, maybe more calculated in how they did it, and it's very additive, I would say, to the HubSpot experience right now.

[01:02:38] It's not like I feel they screwed it up or went too far with the LLMs running things in our platform. And again, you and our team use HubSpot more directly than I do every day, but my sense is it's a very complementary layer of features, where it's okay [01:03:00] if it's not perfect. It's just summarizing this thing or helping us do that; it's not the core thing. We're also not using it to write all the sales emails and do all the outward-facing things. A lot of what we use HubSpot's AI for is more internal purposes, where it's okay if there are some hallucinations and things like that.

[01:03:17] So I don't know, that might just be our way of using it. I'm not comparing the platforms perfectly, but that's just my sense of it.
[01:03:25] NVIDIA Does Landmark Deal with Groq
[01:03:25] Mike Kaput: So next up, Nvidia has executed a landmark deal, entering a $20 billion agreement with AI chip startup Groq. That's G-R-O-Q, and we've talked about them before; it is not Grok, Elon Musk's AI chatbot.

[01:03:42] This deal is kind of one of these acquihire-style arrangements, but it's structured as a non-exclusive licensing agreement rather than a traditional acquisition. Under the terms, Nvidia gains access to Groq's intellectual property and hires its founder, Jonathan Ross, along with other [01:04:00] senior leadership.

[01:04:01] Now, Groq, as we've talked about in the past, specializes in something called language processing units, which are designed specifically for inference, that is, the process of running AI models, which differs from the training-heavy focus of the standard GPUs Nvidia sells. Nvidia CEO Jensen Huang stated the company intends to integrate these low-latency processors into its products to serve real-time workloads.

[01:04:27] Groq the company will remain an independent entity focused on its cloud business, with its former CFO taking over as CEO. The Information reports that this deal structure allows Nvidia to onboard key talent and technology while avoiding the regulatory scrutiny often associated with these kinds of acquisitions.

[01:04:47] So Paul, I'm curious, as a longtime Nvidia watcher, and we've talked about Groq before: what does this mean for Nvidia's business moving forward?
[01:04:54] Paul Roetzer: This was the one that happened on Christmas Eve. Yep. Oh man, I was reading this at like 11 o'clock [01:05:00] at night. So yeah, Groq definitely captured the attention of Wall Street.

[01:05:07] I would say there were a couple of instances in 2025, especially early in the year, where there were some questions about Nvidia's dominance and whether they could maintain it as more of the usage of AI moved to inference versus training. Nvidia's stock has skyrocketed to historic levels largely because they enable the training of these models, [01:05:27] whereas Groq specializes more in the use of the models, like when you and I go in and use ChatGPT. So there were some questions, and I think Jensen had to address it numerous times in different interviews last year and differentiate the two. But overall, this seems like a brilliant move.

[01:05:45] It's this acquihire kind of approach that everyone is taking, and I think it's a prelude to many more acquisitions to come for Nvidia. I feel like they're gonna be very aggressive. Based on a [01:06:00] quick search, they have about $60 billion in cash and equivalents as of late 2025.

[01:06:05] So that's cash on hand, cash reserves. I don't think they're gonna be shy about making investments; they're obviously printing money right now at Nvidia. So I would think they're gonna be pretty active in the mergers and acquisitions space, or at least the acquihire space.
[01:06:21] Meta Acquires Manus
[01:06:21] Mike Kaput: Our next topic is also about an acquisition.

[01:06:24] Meta has reached an agreement to acquire the AI startup Manus for more than $2 billion. Manus is a Singapore-based company, founded by Chinese entrepreneurs, that specializes in autonomous AI agents capable of performing complex tasks, like producing detailed research reports and building custom websites, with minimal human input.

[01:06:44] This acquisition represents a bit of a pivot for Meta as it tries to cement its position against Microsoft and Google in the market for AI productivity tools. It's also one of the first major instances of a US tech giant purchasing a startup with deep roots in the Asian [01:07:00] AI ecosystem. Manus gained quite a bit of attention and traction earlier this year, crossing a hundred million dollars in annual recurring revenue in December, just eight months after its launch.

[01:07:11] Under the terms of the deal, Manus co-founder Xiao Hong will join Meta and report directly to its COO. This obviously follows a broader recruiting blitz by Zuckerberg, which has included the recent purchase of a 49% stake in Scale AI. And Meta says it intends to scale Manus agents and services to more business customers while integrating the technology across its social media products.

[01:07:38] So Paul, what does acquiring Manus tell you about where Meta is headed with AI?
[01:07:43] Paul Roetzer: They're gonna keep spending billions of dollars to try to get back in the game on AI. Going into 2025, Meta was hot. They were dominating on the open-source model side and had a lot of momentum. And, we'll talk about it in a minute, but [01:08:00] Llama 4 was sort of a dud, and Zuckerberg had to go into scramble mode and try to acquihire, and directly hire, talent to get them back on track.

[01:08:13] And I feel like they're still scrambling. I'm not sure of the path forward, other than integration into their existing platforms. I don't know, it's a tough one. If you had to force-rank AI labs right now on who's gonna make the biggest impact in 2026, Meta would be at the bottom of my list, honestly.
Yann LeCun Speaks Out
[01:08:34] Mike Kaput: Well, you might not be the only one thinking that, because in our next topic, Yann LeCun, who we've talked about all the time and who is considered one of the godfathers of modern AI, is speaking out. He just did an interview with the Financial Times. He's currently, as we've covered in the past, stepping down from Meta to launch a new research startup.

[01:08:53] In this exclusive interview with the FT, LeCun announced he will serve as the executive chair of Advanced Machine [01:09:00] Intelligence Labs, what he calls a quote, neo lab, focused on fundamental research beyond large language models. LeCun has become a vocal critic of the industry's obsession with LLMs.

[01:09:12] He says they're a dead end for superintelligence because they lack an understanding of the physical world. Instead, his new venture will develop world models that learn from video and spatial data, an approach he believes is necessary to replicate the way biological intelligence learns.

[01:09:28] There are also a lot of juicy details in this article about the tension at Meta, where LeCun revealed that Zuckerberg lost confidence in the generative AI team after the Llama 4 model was considered a flop. He said performance benchmarks were reportedly, quote unquote, fudged, and he stated that his integrity as a scientist compelled him to leave an environment he described as LLM-pilled.

[01:09:54] So we've tracked LeCun and his overall take on AI for a while, Paul. What did [01:10:00] you take away from this interview and what he revealed here?
[01:10:05] Paul Roetzer: Again, I don't have any internal sources at Meta, but if you go back and listen to the 2025 episodes, everything we assumed was happening, and said would happen, happened.

[01:10:14] This is exactly how I anticipated this would end when they made the Alexandr Wang acquihire. And I did not expect him to just put it all out in this interview. It's a fascinating article to read; you get a sense of his personality as you're reading through it. But, real quick background:

[01:10:36] He joined in 2013 as a bit of a consolation prize, because Zuckerberg had tried to acquire DeepMind, and Demis did not wanna work for Zuckerberg, so they ended up selling to Google instead. LeCun went there on three conditions: that he didn't have to leave NYU, that he didn't have to move to California, and that everything his research lab did would be made public.

[01:10:59] [01:11:00] Zuckerberg held to that up until sort of the ChatGPT moment. Then they tried to play the game of, let's own the open models, and that's where they made their initial traction. With Llama 2, they put out the open weights for all users, meaning people could download and tweak it for free.

[01:11:17] And LeCun said in the interview that this changed the industry and became the gold standard in open LLMs. I would say the DeepSeek moment probably impacted Meta more than anybody else, because in essence this Chinese lab showed up and just took ownership of the open-source side of things. That led to the switching of gears.
[01:11:36] Zuckerberg placed more pressure on the GenAI unit to accelerate development and deployment, which led to a communication breakdown. LeCun says in the article: we had a lot of new ideas and really cool stuff that they should implement, but they were just going for things that were essentially safe and proven, and when you do this, you fall behind. And then he mentioned, as you said, Mike, that the Llama 4 model was a dud. The fact that [01:12:00] he said they fudged the results is wild. I can't believe he said that. That was the one thing that jumped out at me. He laid a couple of other doozies in there, but the fact that he admitted they cheated was crazy to me.
[01:12:13] Again, something we all basically knew, but for him to say it was crazy. So that's when he said Zuckerberg lost confidence in that team, kind of sidelined the GenAI organization, and then they started poaching everybody. They asked him, do you think it's gonna work? And he said, deadpan, the future will say whether that was a good idea or not.

[01:12:31] And again, Alexandr Wang is, what, 28 years old, I think? He came from Scale AI and the $15 billion acquihire. LeCun called him young and inexperienced. He said he learns fast and knows what he doesn't know, but he has no experience with research, how you practice it, how you do it, or what would be attractive or repulsive to a researcher.

[01:12:55] So he is basically saying Alex is not a researcher and has no idea what he is doing when it comes to [01:13:00] research. And then he said this, and again, I'm just kind of shocked he said these things: Alex isn't telling me what to do either. You don't tell a researcher what to do. You certainly don't tell a researcher like me what to do.
[01:13:12] And he didn't mince words about why he ultimately decided to leave Meta after more than a decade. Staying became a politically difficult thing, he tells the author. And while Zuckerberg likes LeCun's world model research, the crowd who were hired for the company's new superintelligence push are completely LLM-pilled, as you said, Mike.

[01:13:33] And this clearly alienated Yann LeCun. He said: I'm sure there's a lot of people at Meta, including perhaps Alex, who would like me to not tell the world that LLMs basically are a dead end when it comes to superintelligence. But I'm not gonna change my mind because some dude thinks I'm wrong.

[01:13:49] I'm not wrong. My integrity as a scientist cannot allow me to do this. Wild. Kudos to the author of that article for getting that interview. It's just not the [01:14:00] stuff you usually hear on the way out.
[01:14:02] Mike Kaput: feels like LeCunn has been holding those words in for a while. Oh my gosh. Yeah.
[01:14:05] Paul Roetzer: Again, all the stuff I assumed, like, I was like, there's no way he's reporting to this guy.
[01:14:09] And yeah, I just didn't expect to actually see it in print.
[01:14:14] OpenAI Preps for Largely Audio-Based AI Device
[01:14:14] Mike Kaput: All right, next up: we have a report from The Information that OpenAI is ramping up its audio AI development in preparation for the launch of a new hardware device, which we've talked about a little bit, that is expected to debut in roughly a year.

[01:14:26] According to The Information, the device will be largely audio-based, and perhaps screenless, reflecting a design philosophy championed by former Apple designer Jony Ive, who aims to reduce screen addiction. Internal sources told The Information that OpenAI researchers believe current audio models, which are distinct from the primary text models used in ChatGPT, currently lag in accuracy and speed.

[01:14:53] To address this, the company has unified engineering and research teams to build a new audio architecture, [01:15:00] slated for release in Q1 of 2026. This new model is expected to handle interruptions, speak simultaneously with the user, and offer more emotive responses. It would then power this hardware, which is envisioned as a companion that proactively offers suggestions based on its surroundings.

[01:15:21] Some form factors under discussion include smart speakers and glasses. Now, Paul, there are still a lot of details to figure out about what this device will actually look like. But I can tell you from personal experience that even simple products like Wispr Flow, or just voice mode, have totally transformed how I interact with AI.

[01:15:42] So it feels like if OpenAI nails this in some way, it's going to be a huge deal.
[01:15:47] Paul Roetzer: Yeah. Here's another idea for you on the product side of what it could be, and I don't remember where I first saw this over break: a pen is the theory that started gaining traction. It's [01:16:00] kind of a fascinating concept, because there's an interview with Sam from earlier in 2025, we'll put a link to the tweet that shared it, where he talked about the fact that he's actually a pen-and-notepad guy.

[01:16:14] Everything he does is writing, and then he rips the pages out of all these notepads. He goes on for like five minutes about how writing is thinking and how he has to go through this process. So it's actually a really fascinating form factor that I hadn't previously considered.

[01:16:29] The pen would recognize everything you're writing, so you'd be creating a digital version as you write. He even gets into the specific kinds of pens he likes, the style of pen, the ink, things like that. You could tell he's thought deeply about that form factor before.

[01:16:49] So imagine everything you write is also captured in digital form. You can talk to the pen if you'd prefer. You can hang it on a necklace or a clip, and it can have a camera and audio-record everything around you. It goes with your iPhone; it doesn't replace your iPhone. When I first saw it, I was like, [01:17:00] damn, that actually makes a lot of sense.
[01:17:14] Oh, and it was a supply chain thing: somebody actually had a source within the supply chain who said it was a pen. Obviously everybody's under very, very extreme NDAs on this stuff, but that doesn't mean the supply chain doesn't leak. So, something to watch for.

[01:17:32] Again, I'm not saying that's what it is, but it's actually the most logical form-factor theory I've heard when you think about it.
[01:17:39] AI Predictions for 2026
[01:17:39] Mike Kaput: All right. Next up, Richard Socher, the CEO of You.com and former chief scientist at Salesforce, has released a forecast for 2026 outlining how he thinks AI will reshape labor and science.

[01:17:55] He predicts the emergence of what he calls reward engineering [01:18:00] jobs, where specialists must precisely define success metrics for AI agents to prevent what he calls suboptimal reward hacking as models tackle longer-term goals. He argues that while the AI wave is as foundational as electricity, many startups will fail due to poor leadership or structure.
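For readers, here is a toy illustration of the failure mode that reward engineering is meant to prevent (our construction, not an example from Socher's post): an agent graded only on a proxy metric, tickets closed, beats an honest agent by gaming the metric, until the reward is re-engineered to score the real objective.

```python
# Toy illustration of reward hacking (invented example, not from Socher).
# An agent rewarded only for closing tickets can game the metric by
# closing them unresolved; penalizing unresolved closures realigns it.

from dataclasses import dataclass

@dataclass
class Outcome:
    closed: int     # tickets the agent marked as closed
    resolved: int   # tickets the customer agrees were actually fixed

def naive_reward(o: Outcome) -> float:
    return float(o.closed)                 # proxy metric only: easy to hack

def engineered_reward(o: Outcome) -> float:
    unresolved = o.closed - o.resolved
    return o.resolved - 2.0 * unresolved   # reward the goal, punish the hack

honest = Outcome(closed=8, resolved=8)
hacker = Outcome(closed=20, resolved=2)    # closes everything, fixes little

print(naive_reward(honest), naive_reward(hacker))            # 8.0 20.0 -> hacking wins
print(engineered_reward(honest), engineered_reward(hacker))  # 8.0 -34.0 -> honesty wins
```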
[01:18:19] Conversely, he anticipates the rise of 10-person unicorns, companies that achieve billion-dollar valuations with minimal staff thanks to high revenue per employee. He also forecasts that 2026 will see at least a dozen companies raising billion-dollar seed rounds. Beyond economics, Socher posits that AI will become to biology what calculus was to physics: a language capable of handling complex, messy systems, like the microbiome, that simple equations just cannot describe.

[01:18:54] And he concludes that while we may see some waves of Luddites rejecting the [01:19:00] technology, automation will create new industries rather than shrinking the total work available. So Paul, he's definitely a notable person in the AI space. What did you make of these predictions? Some of them sound pretty familiar to stuff we've talked about.
[01:19:15] Paul Roetzer: Yeah, I thought overall they were really good. A few jumped out to me. One is a new marketing motion: marketing directly to LLMs. As agents make purchasing decisions, companies will need to optimize for AI consumption, not just human consumption. That's one we think a lot about; it was obviously a topic of interest at our MAICON event.

[01:19:31] We have our Marketing AI Industry Council with Google Cloud, and this is something that's come up in council meetings, so that is definitely one. Everyone's trying to figure out the go-to-market when agents are playing a much greater role in information consumption and buying behavior. The continued pushback against agents on the web slowing adoption, I thought that was a good one too.

[01:19:50] He said murky human incentive structures create friction: advertising economics do not like web agents doing work for users, [01:20:00] and hourly paid jobs, like lawyers, don't have incentives for massive productivity increases until major disruption happens. He also said every knowledge worker becomes a manager of AI agents.

[01:20:10] And then: by the end of the year, we're going to see the first glimpse of where superintelligence might go. As we touched on earlier, he said self-improvement is a term being thrown around a lot, but nobody has actually figured it out yet in its fullest form. It'll be the most exciting research area moving forward.
[01:20:29] Mike Kaput: Those will be predictions to watch. Yeah. I can't say I disagree with a lot of this. Yeah, for sure.
[01:20:35] OpenAI Releases Prompt Packs for ChatGPT
[01:20:35] Mike Kaput: All right, our last topic this week: OpenAI has unveiled a broad suite of what they call prompt packs, designed to transform ChatGPT from a general-purpose chatbot into a specialized tool for specific industries.

[01:20:47] These are basically curated libraries of standardized, ready-to-use prompts for different industries and functions. This release has tailored collections for roles like [01:21:00] sales, HR, engineering, and government. For example, the ChatGPT for Product pack includes templates for competitive research and UX design, while the government section features a prompt pack to assist leaders with fiscal analysis and public messaging.

[01:21:16] The system, which is hosted on the OpenAI Academy, is basically a plug-and-play resource. You browse by whatever jumps out at you, by your sector or your industry, and then you copy the pre-written prompts directly into ChatGPT. They're basically trying to reduce the trial-and-error time often required to get useful outputs from ChatGPT.
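Mechanically, a prompt pack is just a library of parameterized templates. Here is a minimal sketch of how a team might wrap that idea for reuse; the pack names and prompt text below are invented placeholders, not OpenAI's published prompts.

```python
# Minimal sketch of a reusable "prompt pack" (placeholder contents; these
# are NOT OpenAI's published prompts). Fill a template, then paste the
# result into ChatGPT or send it through an API call.

from string import Template

PROMPT_PACK = {
    "sales.competitive_brief": Template(
        "You are a sales analyst. Compare $our_product with $competitor "
        "for a buyer in $industry. List three differentiators and one risk."
    ),
    "hr.job_description": Template(
        "Draft a job description for a $role at a $company_size company, "
        "including responsibilities, requirements, and success measures."
    ),
}

prompt = PROMPT_PACK["sales.competitive_brief"].substitute(
    our_product="Acme CRM",
    competitor="Example Inc.",
    industry="logistics",
)
print(prompt)
```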
[01:21:38] Now, Paul, what struck me here is, A, this is amazing, but B, it underscores a problem we still have: three years after ChatGPT came out, the vast majority of knowledge workers simply still don't know everything that's possible with AI, or how to get the results they need through prompting.

[01:21:58] So because that [01:22:00] gap hasn't closed, this seems like a really good resource.
[01:22:03] Paul Roetzer: Yeah, these are great. Some of these came out in the summer, and I don't know if they repackaged them and put them onto a single page. Some of you may have already checked these out; if not, they're great reminders.

[01:22:14] If you're already an active user, there are some really cool prompts in there that might give you some inspiration. But these are the kinds of things that, if you're trying to bring your coworkers along, are a super tangible way to do it. Give them three to five sample prompts and say, hey, just put this in; I customized it with context for you.

[01:22:32] We talk all the time about that need for change management: solve the people problem, make it easier for them to get to value, so that the first time they try it, they get that wow factor and want to keep going. So yeah, we love sharing simple things like this that create immediate value for people.

[01:22:53] We'll put the link in the show notes so you can go right to it. And as Mike said, there's sales, marketing, executives, HR, IT; there's a bunch of [01:23:00] sample prompts, and they're a really nice, quick way to get started.
[01:23:05] Mike Kaput: Alright, Paul, it's a packed week back for 2026. I appreciate you breaking everything down.
[01:23:11] As a quick reminder to folks, again, take the weekly AI Pulse survey at SmarterX dot ai slash pulse. And if you haven't left us a review on your podcast platform of choice, or followed us there, please do so. So Paul, thanks again.
[01:23:25] Paul Roetzer: Yeah, good to be back. Thanks, everyone. Hope you had a great break, and hope your new year gets off to a great start.

[01:23:31] We'll be back with you next week with our regular episodes. Thanks for listening to The Artificial Intelligence Show. Visit SmarterX dot AI to continue your AI learning journey, and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community.

[01:23:59] [01:24:00] Until next time, stay curious and explore AI.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.
