64 Min Read

[The AI Show Episode 184]: OpenAI “Code Red,” Gemini 3 Deep Think, Recursive Self-Improvement, ChatGPT Ads, Apple Talent Woes & New Data on AI Job Cuts


Get access to all AI courses with an AI Mastery Membership.

OpenAI has declared a "Code Red" to combat rising threats from competitors, reportedly delaying several initiatives to focus on improving their own models.

In this week's episode, Paul and Mike dissect the shifting power dynamics in the AI race and explore the industry's quiet preparation for "recursive self-improvement," the backlash over OpenAI's app-suggestion tests, major leadership shakeups at Apple, and Anthropic's race to the public markets.

Listen or watch below, and find the show notes and full transcript further down.

This Week's AI Pulse

Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI. 

If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.

Click here to take this week's AI Pulse.

Listen Now

Watch the Video

 

Timestamps

00:00:00 — Intro

00:03:03 — AI Pulse

00:07:54 — OpenAI Code Red

00:16:28 — Google Releases

00:28:59 — AI Industry Preps for “Recursive Self-Improvement”

00:42:32 — OpenAI Slammed for Ads

00:47:20 — Apple Talent Shakeups

00:51:22 — Anthropic IPO and AI Interviewer

00:59:42 — Jensen Huang Rogan Interview

01:06:04 — Perplexity Lawsuits

01:09:33 — Meta Acquires Limitless

01:12:39 — Pope Weighs In on AI

01:16:53 — Data on AI Job Cuts

01:20:50 — Data on AI and Parenting


This episode is brought to you by AI Academy by SmarterX.

AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. You can get $100 off an individual purchase or a membership by using code POD100 at academy.smarterx.ai.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: If you can define a workflow, if you can envision something you think could be more efficient, you are being given the tools to make it more efficient, and you don't need IT involved. Like, that is the beauty of the moment we find ourselves in. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.

[00:00:20] My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX chief content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:41] Join us as we accelerate AI literacy for all.

[00:00:48] Welcome to episode 184 of the Artificial Intelligence Show. I am your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording this on December 8th. It is a Monday, 11:00 [00:01:00] AM. There is a decent chance we're gonna get a new OpenAI model. Maybe today, maybe by the time you've listened to this, we may have a new OpenAI model.

[00:01:08] So as we've said a couple of times, December is probably gonna be a pretty busy month. So I would hold on and get ready for quite a number of updates coming soon. We had some activity last week, but I think it's gonna kinda speed up a little bit this week. So this episode is brought to us by AI Academy by SmarterX. AI Academy helps individuals and businesses accelerate their AI literacy and transformation through personalized learning journeys.

[00:01:33] And a new AI-powered learning platform that we just launched last month, probably like three weeks ago or so. There are nine professional certificate course series already available on demand, and more are being added each month. Mike has been very busy building AI course series. Usually if I'm in the office and I can't find Mike, he is recording a new course series somewhere.

[00:01:53] We also, in addition to those, have our new Gen AI app series. So this is one of the features we were really [00:02:00] excited about with the new Academy. Every week, every Friday, we drop a new Gen AI app review. They give you short, focused videos that show exactly what today's most powerful AI apps can do and how to put them to work in your business.

[00:02:13] They're like 15, 20 minutes. They're meant to be kind of bite-sized. They're a really good way to learn tools fast. Mike and Claire on our team have been producing most of these. So new Gen AI app series courses drop every week. Some of the recent ones include Nano Banana Pro, Descript, HeyGen, NotebookLM, and custom GPTs.

[00:02:32] So if you are not a member yet and you wanna become an AI Academy member, not only do you get access to all the courses, you get the weekly Gen AI apps, as well as our AI Academy Live. If you are an AI Academy member, don't forget to check in every Friday when those new Gen AI app classes launch.

[00:02:48] So you can go to academy.smarterx.ai and learn all about AI Academy and our AI Mastery membership program for both individuals and businesses. Alright, Mark. [00:03:00] Mike, Mark. It sounded like Mark. Yeah.

[00:03:03] AI Pulse

[00:03:03] Paul Roetzer: So, AI pulse, so this is week five. Yeah, I believe 

[00:03:09] Mike Kaput: that's week five. It might be week six actually.

[00:03:10] Okay. So, 

[00:03:11] Paul Roetzer: AI Pulse we introduced in November, late October, early November. If you're new to the show and aren't familiar with this, we basically do an informal survey each week. So we take a poll of our listeners. Usually it's between 100 and 150 people who respond. So again, these are informal polls, but it gives us a really good sense of just, like, kind of the overall sentiment.

[00:03:29] What are audiences thinking? And Mike and I have been surprised, honestly, by some of the results. So each week we just ask two simple questions. You are welcome to participate. So each week we do go through like a two-minute recap of the previous week's survey and findings. And then we introduce the two new questions, and so you can participate. And you can actually go back

[00:03:49] on the website and you can look at past surveys. So it's smarterx.ai/pulse. That's where you take the survey, but it's also where you can go back and [00:04:00] see what the previous survey was. Okay. So, last week we asked: with recent reports of AI-related layoffs, how secure do you feel about your specific role over the next 12 months?

[00:04:12] So, Mike, it looks like we've got about 49%, rounding up, saying extremely secure. Yeah, 22% say somewhat secure, 13% neutral, 10% somewhat insecure. Yeah. Okay. And then 6% extremely insecure. So, I mean, our audience feels pretty good. Maybe that's because they're the AI-forward professionals in their group.

[00:04:34] They're the only ones that are doing this, and so they're feeling pretty good about their prospects. But I think, again, the whole intention here is this is our audience. These are people who are choosing to listen to an artificial intelligence podcast every week. So I would hope they're kind of at the leading edge of feeling pretty good about where this is going.

[00:04:51] Right? I think if you took this to a broader audience, a much wider spectrum, you're not gonna get those high levels of security, Mike, is my sense [00:05:00] here. And then: do you believe we are currently in an AI investment bubble? This has been a topic of conversation on the last couple episodes. I touched on this in my Exec AI newsletter this past weekend.

[00:05:10] So, do you believe we are in an AI investment bubble? "Yes, it's a massive investment bubble that will burst" is only 2%. Yeah. Is that, am I reading that right, Mike? Yeah. 34% say yes, valuations are too high, but the tech is real. So like, yeah, fine, but it's gonna be okay. This is real stuff.

[00:05:29] 40%, which is the highest percentage, is it's mixed, some stocks are bubbles, others aren't. 14% say we are not, the current investment matches the potential value. And then 9% say we are currently underestimating the opportunity. Okay, so then if you want to go again, smarterx.ai/pulse, the two questions this week. OpenAI, and again, we provide more context as we're sort of going through today's episode.

[00:05:55] OpenAI faced backlash this week for testing app suggestions, which look like [00:06:00] ads, in ChatGPT. If ads or sponsored suggestions become a permanent fixture in the tool, would you switch models? So that is the first question. And the second: the latest Challenger report that we'll talk about today cites a rise in AI-driven job cuts.

[00:06:15] How is AI impacting workforce planning at your organization? So we want to hear kind of what's going on in terms of headcount at your organization. Alright, so that's the AI Pulse. Again, you can go to smarterx.ai/pulse and check that out. We're gonna jump into our main topics. So again, if you're a new listener, oh, by the way, quick thank you to everyone who's posting, what is it called, on Spotify Wrapped?

[00:06:37] Is that 

[00:06:37] Mike Kaput: Oh yeah, Spotify wrapped. 

[00:06:38] Paul Roetzer: It's crazy. Like, we've had so many listeners who have posted their Spotify Wrapped on LinkedIn and tagged us, where we are, like, the number one, or certainly in the top three, of their podcasts. I saw multiple people tag us that they had more than 3,000 minutes listened to this podcast throughout the year.

[00:06:53] Yeah. So yeah, for all of our loyal listeners who, you know, are with us every week, we really appreciate it, and it's fun to, every once in a while, get to, like, [00:07:00] hear from those people, whether it's at a live event or people posting stuff on LinkedIn. So, thank you. If you're new to the podcast, maybe you saw one of those posts and figured, I'll go check out what these guys have to say.

[00:07:10] The way we do this weekly podcast, which drops every Tuesday, is we do three main topics where we summarize the week. Mike and I kind of curate about 50-plus resources throughout the week. We then go through on Sunday night and pick what are the three main things, and then like seven to 10 rapid-fire items. So it runs about an hour and 15 minutes.

[00:07:27] We shoot for an hour. It always ends up being an hour and 15 minutes. We do three main topics and then we do those seven to 10 rapid fires. So that's how it is. And then everything is published each Tuesday on YouTube and on podcast networks, and we publish all the show notes with the full transcript. And every link we mention, it all goes on the SmarterX podcast site.

[00:07:45] So, you know, have at it. Enjoy. If there's a topic that interests you, go, you know, dig into it. Alright, with that, Mike, we had a code red at OpenAI last week.

[00:07:54] OpenAI Code Red

[00:07:54] Mike Kaput: Yes. OpenAI, Paul, has declared a code red to combat rising threats from [00:08:00] Google and other AI competitors. So according to an internal memo viewed by the Wall Street Journal, CEO Sam Altman told employees the company must marshal resources to improve the quality of ChatGPT as its lead in the AI race narrows.

[00:08:14] This urgency follows the release of Google's Gemini 3, which recently surpassed OpenAI's models on industry benchmarks and helped drive Google to 650 million monthly active users. As a result of this code red, OpenAI plans to delay several other initiatives, including reportedly its advertising efforts, AI agents for things like shopping and health, and a personalized assistant feature called Pulse.

[00:08:38] Instead, the company will focus on improving ChatGPT's speed, reliability, and personalization for its 800 million weekly users. Now, to counter rivals like Google, OpenAI is also reportedly developing a new model code-named Garlic, interesting name there. And internal evaluations show Garlic performing well against Gemini 3 and [00:09:00] Anthropic's Opus 4.5 in coding and reasoning, and they may have a separate new reasoning model scheduled for release as early as this week.

[00:09:08] So Paul, the code red term is kind of significant here. Actually, the history is, three years ago Google declared its own code red in response to the release of ChatGPT. And now here we are, the tables have turned entirely. What's going on here? And how do you expect OpenAI to fare moving forward?

[00:09:25] Paul Roetzer: You know, I think the biggest question here is, why is this happening now?

[00:09:29] And you kind of touched on it, Mike. I, you know, I think more than anything it's Google is just flexing its muscles. you know, I talked about, I don't remember, was it like Gemini 2.5 Pro? Remember there was an episode where I said like, I felt like it was the first time where Google was just like, all right, this is our thing.

[00:09:44] We created all this. Like, we're back. And so when we think about Google's strength, as we've talked about numerous times on the podcast, they have the infrastructure, they have their own chips, the TPUs, which we've been talking about a lot recently. They have data centers. They have, you know, the [00:10:00] capacity to do things at a much larger scale.

[00:10:02] They have great models. Now they're winning at reasoning with Gemini 3. Their image generation and editing model is better. Their video generation model is better. They have universal AI agents that are able to see and understand the world around them. So the visual capabilities of these universal AI agents, like, they're just dominating, honestly, right now.

[00:10:23] Yeah. And so it's not like ChatGPT isn't still good and the models aren't still working, but Google seems to have figured a couple things out from a model perspective. Now, in part this is because Google has had AI research for like two decades. They led to many of the innovations. What happened, you know, after ChatGPT, is everybody kind of stopped publishing the research papers.

[00:10:45] So we don't really often get to see what's coming from Google now. They seem to have stepped it up a little bit lately, which actually tells me they've made more breakthroughs than we know. Because they're all of a sudden releasing research papers that they haven't been releasing for [00:11:00] the last two years.

[00:11:01] Stuff that's indicating they've made breakthroughs. And so it tells me, like, if they're willing to do that, they have probably already figured out how to productize those breakthroughs, versus, you know, putting it out there and then two years later, like the Attention Is All You Need paper in 2017, where they invented the transformer and hadn't figured out what to do with it themselves.

[00:11:19] I think they now are on a process where they have a research breakthrough, figure out what to do with it, then they release the paper. So I think the research capabilities of what was the Google Brain team, now the Google DeepMind team combined with the Brain team, the data they have from all their different platforms, the distribution they have through all their different technologies, through phones, through Gmail, through Workspace, YouTube, like everything they've got going on.

[00:11:47] And maybe the most important thing, Mike, is the financial strength. Right? Google is funding all of this through cash flow. They have a war chest of tens of billions of dollars and they make a ton of money. [00:12:00] And the thing that everyone thought was gonna be threatened with search and ads doesn't seem to be happening.

[00:12:04] Like, they're not losing their core business in all of this. So they have a lot of money to pour into this, versus OpenAI, which is doing these complex financial deals. They've announced in the last month like $1.4 trillion in commitments to infrastructure and partnerships, like Oracle and Nvidia and stuff like that.

[00:12:22] So they have these really complex financial deals and industry partnerships, and they have to keep raising massive amounts of money. They have to IPO in the next 18 months or they're just screwed. And so it's not, again, like OpenAI has lost its ability to build innovative models and to keep innovating at the frontier here.

[00:12:42] Google just has so much more going for them, and the power of all of those pieces is starting to become very apparent. The other thing that did come out, and we'll touch a little bit more on this in an upcoming topic: Anthropic is racing towards an IPO and all of a sudden seems to have gotten their groove back, like, in the last two weeks. You can just feel the [00:13:00]

[00:12:57] Like in the last two weeks. You can just feel the [00:13:00] sentiment around Anthropic evolving a little bit to where maybe it's just a better financial model. Maybe their bets on like coding and building automated research assistance and safety and alignment, which is like the founding of Anthropic. It's why Diode and others left openAI's to go build Anthropic.

[00:13:17] It seems like they're a more viable financial model. Now, it might mean their ceiling is lower than OpenAI's. Like, OpenAI is trying to do all this stuff. They're building devices, they're stealing employees away from Apple and Meta and all this stuff, and they've got their hands in everything. And Anthropic just seems like it's laser-focused on building AI researchers and coding capabilities, and it's threatening.

[00:13:39] Like, it's making it really hard. And so, I don't know, it'll be really interesting to see. Again, I always say I'm not giving stock advice to anybody, I'm not giving investing advice to anybody, but I'll go through these thought experiments. Like, if OpenAI was a publicly traded company right now, and if Anthropic was a publicly traded [00:14:00] company, and SpaceX was the other one that we found is probably gonna IPO next year,

[00:14:04] and then I could invest in any of those three, and then I could invest in Nvidia and Apple and Google and the others that are already in the game. I'm probably real hesitant on OpenAI at the moment, honestly. Mm. Like, I think the risk profile for OpenAI is very, very high. I think Anthropic is starting to look more like a really good bet.

[00:14:27] Might not have the overall upside that an OpenAI would, but I think either they become a really strong business or they just get acquired. But I feel pretty good about that. OpenAI is just, like, scaring me. They're trying to do so many things, compete in so many different fields, that I worry they're taking on too much in what they're trying to set out to do,

[00:14:49] just to raise enough money to keep going. Their financial commitments are so massive. So, I don't know. It is totally fascinating. Like, I have no idea where this goes, but it is really [00:15:00] interesting to watch.

[00:15:01] Mike Kaput: Yeah. That lack of focus is brutal for OpenAI, because as part of the code red, they have to cut back on all the other things that could make them money, which means, as a result, they're going to have to go raise more.

[00:15:11] They're going to be in financial straits, and then if they have to switch to revenue again, then we're gonna take our eye off the ball on the frontier model and be back in this situation. It sucked. 

[00:15:20] Paul Roetzer: Yeah. And yet, so we hear one thing, we're gonna focus, and yet we know some version of Shipmas is coming.

[00:15:26] We talked about this last week. So last year they did 12 Days of Shipmas, so leading up to Christmas, they did a launch each day. Well, that's a pretty diversified approach. If you're saying, okay, we're gonna do this with GPTs and we're gonna do this with this code stuff, and you're gonna start announcing all these things, doesn't that actually divert from your message of focus, where you're stealing resources from teams and saying everybody's gonna make the chatbot better and we're gonna improve personalization?

[00:15:50] So I'll be really intrigued to watch what happens over the next, you know, when this comes out it will be the ninth, so, yeah, like 18 days or whatever that is, like 15 days. Like, [00:16:00] what do they do, what do they announce that maintains this focus? I have seen increasing murmurs online that we will probably get GPT-5.2, seems like that's what most people are thinking it's gonna be called, an improved version of what we already have, probably this week.

[00:16:17] And that may be, or maybe they hold it till, like, the launch of Shipmas or whatever, but they're not gonna be quiet. Like, we're definitely gonna get more indications before Christmas of what they're gonna do.

[00:16:28] Google Releases

[00:16:28] Mike Kaput: All right. Our next big topic this week is about OpenAI competitor Google, because they have rolled out two pretty significant updates.

[00:16:35] So first, they launched Gemini 3 Deep Think mode, and this is right now for AI Ultra subscribers, their highest tier, which is like $250 a month, I think. And it is, according to Google, a model, or a mode rather, designed to tackle complex math, science, and logic problems that challenge even the most advanced state-of-the-art models.

[00:16:56] Google reports that Gemini 3 Deep Think mode allows the [00:17:00] model to achieve industry-leading scores on certain huge benchmarks, including 41% on the Humanity's Last Exam benchmark, and that's without using tools. It also achieved an unprecedented 45.1% on the notable ARC-AGI-2 benchmark, which basically measures how close AI systems are to general human intelligence.

[00:17:21] So those are both pretty noteworthy, even though they're a little wonky in terms of the benchmarks themselves and a little in the weeds, but definitely worth paying attention to. At the same time, Google also introduced Workspace Studio, which is a platform that allows users to create and manage AI agents without coding.

[00:17:39] So someone can describe a workflow in plain language. So for instance, request a daily summary of all your unread emails, and Gemini generates an agent to automate the task. And these agents integrate directly with Google apps like Gmail and Drive, as well as third party services like Asana and Salesforce.

[00:17:57] So Paul, two big releases here. [00:18:00] First, maybe just talk to us about the significance of Gemini 3 Deep Think, and then I wanna hear your thoughts on Workspace Studio, because I know you had some early tests and experiences there.

[00:18:10] Paul Roetzer: So, the Deep Think, again, most people listening to this won't have access to or use Deep Think.

[00:18:16] So we'll just talk kind of at a high level about why this matters. First, as I say often on this podcast, this is a reminder that you and I do not have access to the most powerful models these labs have. Yeah. They very often have far more powerful, unrestricted versions of what you and I end up getting after it goes through safety and alignment, and even then we might not get access to them.

[00:18:39] So one, keep that in mind, that this is an example of the kind of more powerful model. They're only gonna release it to Ultra because it requires a lot of compute to deliver it, and also because they just wanna see how it's used, probably. So it's also a good reminder, again, if you're kind of new to this space, we are currently living through three scaling laws. [00:19:00]

[00:19:00] So the first scaling law in AI was this idea of pre-training: that we give it more compute, we give it better data, we train these things on more Nvidia chips, and they just get smarter and more powerful. So that's, like, the core models, like GPT-5.1 or Gemini 3. They go through a pre-training process where they're given data, they go into these data centers, and they build these more powerful general models.

[00:19:23] Out of that comes kind of a raw model that then goes through post-training. So this is the principle that an already-trained model's performance can be predictably improved through optimization techniques applied after its training. This is things like fine-tuning and reinforcement learning that make the model smaller, faster, more accurate, more specialized, more personalized, things like that.

[00:19:44] So that's post-training. So these are both scaling laws, meaning they're kind of following this scientific law so far. Like, it may stop, but as of right now, this is true. And then the third is what's called test-time compute. And that's the emerging principle that a model's [00:20:00]

[00:20:03] By allocating more compute power at the moment of use or what we call inference. So this means you get a better answer from the same model by letting it think longer or double check its work before giving a final response. So deep think. So we have Gemini three thinking. Now you have deep think and what that means is it's giving it more time to think at the time of inference when you and I would use it.

[00:20:26] So those are the three main scaling laws that appear to all be holding true at the same time. And that's what's allowing a lot of these breakthroughs. So, in terms of Google Workspace, I was super excited about this. We've known this was coming. We saw versions of these capabilities, where you go in and build your agents, back, Mike, when you and I, yeah,

[00:20:44] were at the Google Cloud conference in April, and so I've been waiting anxiously for this. And honestly, I wasn't even sure if I would have access, because sometimes when Google announces things, it's really hard to figure out, where is this and how do I actually [00:21:00] get to this?

[00:21:00] And then when you finally figure out how to get to it, you realize you don't actually have access to it. So I woke up Sunday morning and I was like, all right, I got some time while I'm drinking my coffee to figure this out. I go in, I eventually find the link to get into it, and it works. Like, I'm there.

[00:21:14] I'm like, oh my God, this is amazing. So you go in and it's a really clean interface, and you can create an agent, you can discover agents, or you can click on and view my agents, and then it has a bunch of templates for you of suggested things. I was like, okay, this is really cool. And it's all tied to Google Workspace.

[00:21:29] So if you're not a Google Workspace customer, you don't have Sheets and Docs and things like that, this isn't gonna be for you. But if you are, and according to our research last week, Mike, was it like 50-some percent of our people use Gemini every day? Yeah, yeah. It was a big portion. So, yeah, our listeners generally do, you know, use Google Workspace.

[00:21:45] So, when you go in, the interface is like, you click new agent and then you choose how to start your agent. And so it gives you a number of possibilities. You can do this on a schedule, like every Monday at 8:00 AM, do this thing. You can trigger it [00:22:00] when an email is received, when something happens in Spaces or your Google Chat messages, when there's a change in a Google Sheet, a doc, or a folder, when file edits are made, when a meeting is on your calendar, or when a form response is submitted.

[00:22:13] So again, they're integrating into existing things you do, and then they're allowing you to set up these sort of rules-based agents that then infuse Gemini's capabilities into those rules. So there's a lot of human in the loop here. This thing doesn't just like build the whole agent and just go off, but it can help you do those things.

[00:22:30] Then once you, you pick your things like, okay, every time I get an email from this person, do this activity or anytime, you know, I get, a chat message or a row changes in my sheet, do this thing. 

[00:22:42] Mike Kaput: Mm. 

[00:22:42] Paul Roetzer: And then from there you choose a step. So these are the options: AI agent, where you can ask Gemini or ask a Google Gem.

[00:22:50] So like, let's say I have a Google Gem. I don't know, let's have a Google Gem for the podcast, like, prep for the podcast every week. And every time Mike emails me with an [00:23:00] idea for the podcast, go in and ask my podcast Gem if this is a good idea, and send me a quick summary of your thoughts on it.

[00:23:06] Something like that. I'm just kind of making this up as we go. It then allows you to choose a step that's AI skills, so it can recap unread emails, make a decision, extract information, or summarize information. And then the other ones are tools: Gmail, Chat, Drive, Sheets, Docs, Tasks. And then out of the box it has Asana, Confluence, Jira, Mailchimp, QuickBooks, and Salesforce integrations.

[00:23:29] So basically it enables actions within those platforms if they're connected to your Workspace account. And then from there, once you kind of create this thing, and you can do simple ones that are like two steps long, it doesn't have to be overly complex, and zero coding. Like, this is literally just click a button and go.
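For readers who think in code, the trigger-plus-steps pattern Paul is walking through maps to something like the hypothetical Python sketch below. Workspace Studio itself is no-code and does not expose an API like this; every name here is invented purely for illustration.

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Agent:
        # Hypothetical rules-based agent: one trigger plus an ordered list
        # of steps, mirroring the interface described above.
        name: str
        trigger: str  # e.g. "schedule:mon_8am" or "email_received"
        steps: list[Callable[[dict], dict]] = field(default_factory=list)

        def run(self, event: dict) -> dict:
            # Each step receives the output of the previous one.
            for step in self.steps:
                event = step(event)
            return event

    def summarize_unread(event: dict) -> dict:
        # Stand-in for an "AI skill" step (e.g. recap unread emails).
        event["summary"] = f"{len(event['unread'])} unread messages"
        return event

    def post_to_chat(event: dict) -> dict:
        # Stand-in for a "tool" step (e.g. send to Google Chat).
        print("Posting to chat:", event["summary"])
        return event

    daily_brief = Agent(
        name="Daily inbox brief",
        trigger="schedule:mon_8am",
        steps=[summarize_unread, post_to_chat],
    )
    daily_brief.run({"unread": ["email 1", "email 2", "email 3"]})

The point of the sketch is how little logic a two-step agent actually needs, which is why a no-code builder can generate one from a plain-language description.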

[00:23:44] You can test it and then you can turn it on. So I was like, this is amazing. So I picked one, I think I did, like, get a daily summary of unread emails, was the first one I tried. And I was like, all right, sweet, let's try this. And so I build it, takes me like two minutes, I click test run, and it's like, [00:24:00] boom.

[00:24:00] Like, we are at capacity, we'll be back soon. I was like, shit. All right, that didn't work. Yeah. And then I was like, all right, there was one about getting a daily news brief or something. So I was like, let me do that. So I click on that one. It comes with the pre-written prompts. So this is a real key aspect: they've built these templates where the system instruction prompt is already written.

[00:24:19] So it already says, like, as emails come in, look for these things, and then they want you to go through these steps, and it's, I dunno, probably a 300-word prompt. Like, it's not insufficient or insignificant. So I was really geeked about this one. So I go in, I edit that prompt to talk specifically about AI news.

[00:24:37] Here's the kinds of things we focus on on our show, here's the kinds of people we cover on our show. Basically, every morning, deliver me this AI news brief, and then I can use it to vet against all the curation I do personally. And so I was really excited to try that one. Create it, test run, boom, we are at capacity, we'll be back soon.

[00:24:55] I was like, damn it. And then I tried one more, auto-create tasks when I send [00:25:00] action items, and same thing. So, super excited. I could see a ton of application for this immediately in terms of efficiency, productivity. Mike, if this was working, I would be running a hackathon with our team this week to get people figuring out ways to use this.

[00:25:16] But lo and behold, it doesn't work. So I put this on LinkedIn, and I got a lot of people like, yeah, it didn't work for me either. Yeah. So hopefully Google solves this real fast and we get to a better place, because I am very excited about this, and just to see how simple Google has made this, to where you can now imagine the ability to build agents for all kinds of other things.

[00:25:37] But they just gotta figure out whatever the bugs are that are causing this. It can't be a compute thing. There's no possible way this is a capacity thing. Mm. These are not heavy, compute-intensive things. These are basically text-based automations, which tells me this is far more of just a flawed rollout than it is a they're-not-providing-enough-compute-to-it kind of thing.

[00:25:59] Do you [00:26:00] have any thoughts on that, Mike?

[00:26:01] Mike Kaput: Yeah. One interesting thought here, without kind of tooting our own horn too much, is that this is one of the big reasons we teach so much about kind of evergreen fundamentals when it comes to AI. Because if you look here, I can now integrate all the Gems that I have created, if I have created them.

[00:26:18] I have workflows that are mapped out for everything I do. Each step, if possible, has Gems running it. This is based on that infrastructure. This is ready for me to turn on and run 200 miles an hour, yeah, rather than having to build out all that stuff out of the gate. So really interesting to see the benefits of kind of doing that basic blocking and tackling stuff when it comes to AI literacy day in and day out.

[00:26:44] Paul Roetzer: Yep, for sure. That's a great point. 'Cause if you're a listener going, what the hell's a Gem? Why do they keep saying this Gem thing? What is that? Yeah, it's Google's version of custom GPTs in ChatGPT. And if you're asking what the hell is a custom GPT, like, you're at that beginner stage, and that's fine.

Like, [00:27:00] that's good. You're here and you're thinking about these things. But to Mike's point, this is why AI literacy matters so much. You have to understand these very basic things that are possible with no coding ability. Any knowledge worker can get in, build a Gem, build a custom GPT, set a rule, because you know what your workflows are, right?

[00:27:17] You build tasks all day long of, like, how am I gonna get something done. That is basically where we're at with AI: if you can define a workflow, if you can envision something you think could be more efficient, you are being given the tools to make it more efficient, and you don't need IT involved.

[00:27:32] Like, that is the beauty of the moment we find ourselves in. So I'll just note a quick word of caution. I had put this on LinkedIn on Sunday, like, my experience with Workspace Studio, and Daniella, someone I'm connected to in my network, had flagged for me, like, oh, are you sure this is safe? And I was like, yeah, it has all my data already.

[00:27:50] And then she actually replied with a TechRadar article that was unrelated, like, it's for a different Google tool, but the point is well taken. And so I'll just flag [00:28:00] this. So we'll put the link in. But Google also recently released something called Antigravity, and there was an instance where it wiped the hard drive, basically the Google Drive of a developer, because it was given access to these things, and the developer asked it to do one thing and it actually did a different thing, and there was no way to recover it.

[00:28:19] And so I think it's just something to keep note of as we start to rely on these. Again, I think these Workspace agents are largely benign when it comes to these risks. But if you are starting to connect to these APIs and you're allowing access and, like, administrative capabilities, we are nowhere near ready for that kind of thing from a business perspective.

[00:28:43] And these tools are pretty raw, as we see already. Like, I can't even get the thing to work right, and it's a simple rules-based agent. So just keep in mind that there are definitely more risks that come as we start to do these things.

[00:28:55] Mike Kaput: All right. Our third big topic this week. 

[00:28:57] Paul Roetzer: Speaking of risks. 

[00:28:58] Mike Kaput: Yes.

[00:28:59] AI Industry Preps for “Recursive Self-Improvement”

[00:28:59] Mike Kaput: Speaking [00:29:00] of risks, there was an event at Harvard last week where former Google CEO Eric Schmidt warned that Silicon Valley is basically prepping for the arrival of something called recursive self-improvement. This is AI capable of learning without human instruction. It's not possible today, but he said Silicon Valley insiders believe it is very, very close, and that we could soon see the ability for computers to do things like write their own programs, generate their own math conjectures, discover new facts, and more in the next two years.

[00:29:32] Now, Schmidt said that was kind of the bullish Silicon Valley timeline. He said his own timeline for this happening is just a little longer, four years, not two, but he did mention, quote, it's happening, and, quote, happening very quickly. Now, perhaps coincidentally or not, OpenAI has launched something called their Alignment Research Blog, which is a new platform specifically dedicated to the safety challenges of self-improving systems.

[00:29:59] So the company [00:30:00] describes this as a public lab notebook designed to share safety work earlier in the research lifecycle, including sketches, notes, and technical deep dives. According to OpenAI researcher Jasmine Wang, the content is written by researchers for researchers to encourage the pressure testing of ideas.

[00:30:18] And OpenAI openly states that while the upsides of something like superintelligence are enormous, the risks of this recursive self-improvement are potentially catastrophic if models cannot be robustly controlled, audited, and aligned with human values. So Paul, this seems like one of those things that's not coincidental, and maybe it's the labs telling you, whether you agree with it or not, what they believe is coming next.

[00:30:41] Why is recursive self-improvement such a big deal? 

[00:30:47] Paul Roetzer: This is not new. So first off, if this is the first time you're hearing about recursive self-improvement: this is a known research path. So if you've ever been to my Intro to AI class that I teach every month, or heard me give keynotes on the state of AI and business, or, [00:31:00] you know, taken any of my courses online, I often talk about the dimensions of AI progress and the different pursuits that labs are making to make these models smarter, more generally capable.

[00:31:09] So, things like computer use, expanded context windows, memory, multimodality, reasoning capabilities. Recursive self-improvement is one of the dimensions I always feature, because this has been a research challenge for years. It is also one of the things that leads to a lot of the sci-fi fears of fast takeoffs of these models.

[00:31:30] So this, again, is not a new concept at all. What is it, if it is new to you? If an AI system gets good enough that it can meaningfully help design the next better version of itself, and that loop keeps going, that is basically what we're talking about. So it is a loop, this recursive self-improving system.

[00:31:51] So the AI system can propose changes to its own architecture, training data, and training processes. Those changes produce a more capable new version. [00:32:00] The new version is even better at proposing further improvements. And then you just keep repeating. The danger comes when we start to rely less on the human in the loop that's monitoring this self-improvement.
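That compounding loop is easier to see written out. Here is a toy Python sketch of the dynamic being described, with completely made-up numbers and functions; it is not anything any lab has published.

    import random

    def propose_improvement(capability: float) -> float:
        # Toy assumption: the expected gain scales with current capability,
        # which is exactly what makes the loop compound.
        return capability * random.uniform(0.0, 0.1)

    def human_review(gain: float) -> bool:
        # Stand-in for the human in the loop; the risk scenario described
        # here is this gate being weakened or removed as the loop speeds up.
        return gain < 0.5  # only approve small, understandable changes

    capability = 1.0
    for generation in range(10):
        gain = propose_improvement(capability)
        if human_review(gain):
            capability += gain
        print(f"generation {generation}: capability {capability:.2f}")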

[00:32:09] So you can imagine this applied to your own work. Like any strategies, campaigns, workflows, right now they only get better when you make them better. You look at data, you analyze that data, you look at the campaign performance, you look at A/B tests.

[00:32:27] Then you go in and you make these improvements. Now, you might talk to an AI system about it. You may say to ChatGPT, hey, here's what I'm seeing, like, how could I improve this? But you are putting time and energy into improving the outcome of something, the behavior of something. In this scenario, you basically start being removed from that loop.

[00:32:46] So imagine you're running a marketing campaign. Off the top of my head, say we're running a big MAICON promotion, I like to think about event ticket sales. Imagine an AI agent that has access to all the data the human does, and maybe more data. And it's watching everything. It's looking at the email [00:33:00] performance, the ad buy performance, the messaging, what's resonating with people.

[00:33:05] And unbeknownst to us, it's just constantly changing them. It's evolving the emails, it's rewriting different emails. It's changing the send time, it's changing the personalization, maybe changing the language. So it's like bilingual, like it's just doing things and the human is maybe completely uninvolved.

[00:33:21] Like, we're just turning it loose, like, just go do your thing. That's the premise here, except applied to AI models. So when this happens, you have a far greater risk of misalignment. You run into potential consolidation of power, so, like, a smaller number of tech companies who learn how to do this benefit from it and maybe don't share that information out. The disruption of jobs becomes far more likely, like a faster disruption to industries that are knowledge-work related,

[00:33:49] do complex decision making, and require constant optimization and experimentation. You start getting to autonomous campaign managers, kinda like I said: they're doing the strategy, they're doing the experiments, they're allocating the [00:34:00] budget, they're iterating on the creative, all without the human involvement.
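At its simplest, the constant experimentation an autonomous campaign manager would run looks like a multi-armed bandit. Here is a toy epsilon-greedy sketch in Python; the variants and click rates are invented, and this illustrates the concept only, not anything Google or OpenAI has built.

    import random

    # True (unknown to the agent) click rates for three email subject lines.
    TRUE_CLICK_RATES = {"subject_a": 0.04, "subject_b": 0.06, "subject_c": 0.02}
    stats = {v: {"sends": 0, "clicks": 0} for v in TRUE_CLICK_RATES}

    def observed_rate(variant: str) -> float:
        s = stats[variant]
        return s["clicks"] / s["sends"] if s["sends"] else 0.0

    for _ in range(10_000):
        # Explore a random variant 10% of the time; otherwise exploit the leader.
        if random.random() < 0.1:
            variant = random.choice(list(TRUE_CLICK_RATES))
        else:
            variant = max(TRUE_CLICK_RATES, key=observed_rate)
        stats[variant]["sends"] += 1
        if random.random() < TRUE_CLICK_RATES[variant]:
            stats[variant]["clicks"] += 1

    for variant in TRUE_CLICK_RATES:
        print(variant, stats[variant]["sends"], f"{observed_rate(variant):.3f}")

No human reviews any of those 10,000 allocation decisions, which is the property being flagged here.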

[00:34:03] So we're talking about recursive self-improvement for AI models. Mm. But you crack the code on how to do it for AI models, and everything else just falls like dominoes. Like, the AI model stuff is harder than us running marketing campaigns. So then you start to think about, and again,

[00:34:19] this is stuff that's like 2026 stuff: brand risk. So if you have self-optimizing systems, they might learn highly aggressive tactics that work in the short term but damage trust and violate policy. You might run into regulatory risks, where you are still responsible, you and your company are still responsible, for the decisions the tools make, but they're able to make decisions without you.

[00:34:39] And then data privacy, like, if you don't know what it's doing, what it's accessing, how it's learning these things. So I was kind of laughing. The reason I surfaced this as a main topic for this week, Mike, was, by Tuesday of last week I was like, oh, we changed from superintelligence to recursive self-improvement.

[00:34:56] And, funny, I went into Nano Banana and I [00:35:00] was like, I couldn't remember what that meme was called, where the guy's holding his girlfriend's hand and the other girl's walking by and he's checking out the other girl. Yeah, yeah. And I was like, do that meme where the guy's holding his girlfriend's hand but looking at the other girl; make the girlfriend superintelligence, make the other thing recursive self-improvement.

[00:35:14] And it did it. Like, you can go on my Twitter profile and see this, I tweeted it. It created, like, the perfect meme, and that was how I felt. It's like all of a sudden everybody's just like, all right, let's talk about recursive self-improvement. So you mentioned, Mike, the new alignment research blog from OpenAI.

[00:35:30] The first post is Hello World. That's the title. The very first line of the first post of the research blog from OpenAI says: At OpenAI, we research how we can safely develop and deploy increasingly capable AI, and in particular, AI capable of recursive self-improvement. So I was like, well, that's intentional.

[00:35:48] Like, there you are very blatantly indicating that you have made progress in this direction and you now need a blog that talks about this. That is the lead of your top post. [00:36:00] Then the one you mentioned about Eric Schmidt, so he was talking, again, former CEO and chairman of Google. So I pulled the transcript on this.

[00:36:08] We'll put the YouTube link in there. So the question is, what happens over time? You have language, agents, and reasoning. Isn't that what we do, meaning humans? We do stuff, we communicate, we do actions. The San Francisco, meaning Silicon Valley, consensus is that at some point that stuff comes together and you get what is

[00:36:23] technically called recursive self-improvement. Recursive self-improvement is when it is learning on its own. This is not true today. Today, when you set up one of these data centers, you have to tell it what to learn. There's lots of evidence this is coming, though, for computers to generate conjectures, discover new facts.

[00:36:40] Looks like it is very close. Many people believe there will be new math designed in the next year, meaning 2026. So we, collectively as an industry, believe this is going to happen soon. If you ask a simple swath of people from, like, San Francisco, they will say two years, which is really soon. If you ask me, four years.

[00:36:59] [00:37:00] It happens very quickly, but I think he said more like probably four to five years. Four. Yeah. So, Eric Schmidt, if you followed him, is very involved in overall AI policy, but specifically related to defense, in the US. And so he said, I want, and Henry, meaning Henry Kissinger, his co-author before he passed, certainly wanted, it to be built with American values and human values.

[00:37:24] Then the other thing, Mike, so again, this all happened in like a, you know, 24-to-48-hour period, we see a tweet from Anna Goldie, who's a former Google DeepMind researcher who worked on chip design at Google. She tweets: Excited to announce that Azalia Mirhoseini and I are launching Recursive Intelligence, a frontier AI lab creating a recursive self-improving loop between AI and the hardware that fuels it.

[00:37:49] Today, chip design takes two to three years and requires thousands of human experts. We will reduce that to weeks. This will be incredibly hard. We co-founded the [00:38:00] machine learning for systems team at Google Brain. There we built AlphaChip, a reinforcement learning agent for chip placement. And then she went on to say: Our immediate goal is to dramatically accelerate chip design.

[00:38:10] Next, we plan to design chips end-to-end given a machine learning workload, unlocking a Cambrian explosion of custom silicon. Finally, we will close the recursive loop. We will build our own chips, train our own models, and co-evolve them on a path to superintelligence. AI designs better chips; chips train better AI.

[00:38:28] They announced they're raising $35 million, recently valuing that company at $750 million. So what does this all mean, Mike? Accelerated AI progress toward AGI and superintelligence, which is what we're talking about all the time. Accelerated risk of the fast takeoff that people worry a lot about, that these things just self-improve beyond our ability to understand what they're doing. Accelerated likelihood of the labs working more closely together as it becomes more tangible.

[00:38:53] So again, keep in mind: OpenAI, Anthropic, Google, these people sometimes are roommates. The [00:39:00] researchers, like as we recently found out with showOne Dresh, they are certainly often friends hanging out at the same parties. They talk. If a safety researcher at Anthropic knows that they've unlocked it, and they're a few months away from doing this, you better believe they're telling their friends at Google DeepMind and that they're talking to each other.

[00:39:19] And at some point, if it becomes obvious to them that the milestone is near or already achieved, it is far more likely these labs actually start communicating more closely, because this is the thing they're all worried about. If this happens, and it seems like it's going to, this also accelerates the likelihood of political divides, AI regulations becoming reality very quickly, and negative responses from society.

[00:39:43] This morning I saw a tweet from Bernie Sanders. Again, political, both sides are saying everything, they've no idea which side to pick. Like, is this good? Is it bad? Is it gonna destroy jobs? Is it gonna ruin human life? Everybody's trying to figure it out. So, Bernie Sanders this morning: the greatest challenge now facing [00:40:00] humanity is whether AI and robotics are designed to improve human life,

[00:40:03] or whether these technologies will undermine democracy and privacy and make the wealthiest people on earth even richer and more powerful. And this leads to, like we saw this past week, Google DeepMind has an open job for a research scientist, quote, to explore the profound impact of what comes after AGI.

[00:40:22] Key responsibilities include defining critical research questions within these domains, collaborating with cross-functional teams to develop innovative solutions, and conducting experiments to advance our mission. Spearheading research on the influence of AGI on political institutions, economics, law, and human relationships.

[00:40:39] Developing and conducting in-depth studies to analyze AGI societal impacts across key domains. Looking at, let's see, creating a map of potential outcomes of what happens when we achieve AGI. And then building and refining measurement infrastructure and evaluation frameworks for a systematic evaluation of AGI societal effects.

[00:40:57] So, again, [00:41:00] we're spending the time on this as the main topic 'cause I think everyone has to understand this is a major change. If they're right, if Eric Schmidt is right, if Google's need for post-AGI-world researchers is right, if all the other labs talking about recursive self-improvement are right, then the path to AGI and superintelligence accelerates, and all of these other things come with it.

[00:41:23] So this is actually a very pivotal piece of all the other topics we talk about. 

[00:41:28] Mike Kaput: Yeah, and I suspect if any of it is even close to right, whether or not they use the term recursive self-improvement in the headlines in the next few years, we're going to look back on this and say, okay, this was the piece that made possible whatever advancements, whatever innovations, whatever disruption we're gonna be talking about a few years from now.

[00:41:46] Paul Roetzer: Yeah. We've talked in recent months a lot about the idea of all these labs trying to automate AI research. Yeah. This is how you do it. You have to have recursively self-improving models if you want to automate AI research. And, you know, [00:42:00] I'm just gonna make up a number for context, but let's say a given AI lab like OpenAI runs 500 experiments in a year of, like, potential ways to evolve the model, potential ways to handle pre-training, post-training, things like that.

[00:42:16] What if they can run 50,000 experiments next year, you know, through these automated systems? That's what they're trying to do. And to do that, you need some element of this recursive self-improvement, because there's no way you could hire as many humans as you'd want; they're never gonna be able to oversee all of this.

[00:42:32] OpenAI Slammed for Ads

[00:42:32] Mike Kaput: Alright, let's dive into some rapid-fire topics for this week. First up, OpenAI is facing some backlash after testing a new feature that integrates third-party app suggestions directly into ChatGPT conversations. TechCrunch reported the controversy began when Uchen Chin, a subscriber to the $200-per-month Pro

[00:42:52] plan for ChatGPT, posted a screenshot on X showing ChatGPT suggesting the Peloton app during a [00:43:00] completely unrelated chat about, in this case, Elon Musk and xAI. So users started to criticize this placement, fearing that OpenAI was introducing advertisements into its paid subscription tiers. In response, OpenAI's data lead for ChatGPT,

[00:43:15] Daniel McCalley, clarified that the Peloton app placement was only a suggestion, not a paid ad, stating there was no financial component involved. He did admit that the app recommendation's lack of relevancy to the actual conversation resulted in a bad and confusing experience. A company spokesperson confirmed the incident was part of an ongoing test to surface apps naturally within chats.

[00:43:43] Currently, these integrations are in pilot testing for logged-in users outside the UK, Switzerland, and the European Union. They cannot, at least for the moment, be turned off by the user. Now Paul, this was pretty shocking in terms of how strongly some people do not seem to like the possibility of ads at all, even something [00:44:00] that just seems like an ad in Chat

[00:44:02] GPT. I'm curious: is OpenAI overestimating how tolerant users will be of ads in ChatGPT? Because that seems to be a pivotal point in their strategy eventually.

[00:44:13] Paul Roetzer: All I know is it's a really bad look. And again, I've listened to a lot of podcasts with OpenAI leaders, I mean, all the time, but certainly in the last few months.

[00:44:23] And the thing you start to get a real sense for is, I mean, they're obviously moving really fast. They're under tremendous pressure to dramatically accelerate revenue, open new markets, get the next round of funding. They are hiring executives from everywhere, like very well known executives who need to come in and make their imprint, make their mark.

[00:44:46] Obviously personalization, which leads to personalized ads, is a key component of what they're gonna need to do to make this work. And from everything I hear from listening to these interviews, they do not have a rigid product roadmap. It [00:45:00] is open experimentation. There's lots of kind of shots on goal, and then it sounds like things just sort of bubble up, and then they get resources quickly, and decisions are made, and all of a sudden there might be people at OpenAI, I have no idea, who are gonna be testing apps, and all of a sudden it's in ChatGPT and people are getting blowback everywhere.

[00:45:18] People are pissed. Like, I'd be pissed. I'm paying 200 bucks a month for Pro; I don't wanna see some completely irrelevant recommendation for an app in there that looks like an ad. That's annoying. I'm paying the 200 bucks a month, I don't wanna see it. So I think they're just making bad decisions because they're moving so fast, and they have this culture where they don't have a lot of oversight of how these decisions are being made.

[00:45:42] And they're putting a lot of autonomy into these people they're paying a lot of money to come there and disrupt things and take these shots. And again, it just highlights to me the advantage Google has here. Like, they're already making money. They've learned lessons for 25 years on how to integrate ads in a way that [00:46:00]

[00:46:01] And, and you know, so like, I don't know. I just feel like we're gonna keep seeing missteps. And again, I'm not meaning to be negative towards openAI's here, it's just the reality. They are very much a scale up company. Like maybe like we've never seen. And they gotta figure a lot of things out real fast.

[00:46:17] And they're getting hit from all sides. And I think they're just gonna keep making missteps, and this is just a bad one. Again, these are, like, own goals. They're doing stupid stuff that, yeah, any average leader would've known not to do. And these are really smart, talented people that are greenlighting things where it's so obvious. You would be like, talk to your communications team once, like, there's just no way you do this.

[00:46:45] So that's the part, like, they gotta get past that. And again, I know they're doing a million things at once, but I just hate to see good companies making really bad, obvious mistakes. Like, you take a big risk or do an innovation and it doesn't work? Fine. But when you do something, [00:47:00]

[00:47:01] like this, again, we're not experts at this, but you look at it and, like, that doesn't make any sense that you would've done that. That's the kind of stuff that bothers me, and it would piss me off if I was the leader in the company. It's like, how are we doing this?

[00:47:12] Mike Kaput: Well, it seems like the internet agrees with you, at least in this corner of X.

[00:47:15] Yeah. Because people are overwhelmingly unhappy about this. Yeah. Yeah. All right. 

[00:47:20] Apple Talent Shakeups

[00:47:20] Mike Kaput: Next up, there are some talent shakeups at Apple. Apple is first overhauling its leadership across AI as it navigates delays in its product roadmap. The company announced that John Giannandrea is stepping down as the senior vice president for machine learning and AI strategy.

[00:47:36] He's being replaced by Amar Subramanya, a researcher who previously led engineering for Google's Gemini assistant and served as a vice president of AI at Microsoft. According to The Verge, the transition follows internal struggles to modernize Siri, which saw its latest AI-powered features delayed earlier this year.

[00:47:55] Subramanya will report to software chief Craig Federighi and oversee critical [00:48:00] areas including foundation models and machine learning research. Now, at the same time, Bloomberg is reporting that Meta has hired Alan Dye, Apple's head of user interface design. They've hired him to lead a new studio focused on AI-equipped hardware.

[00:48:15] He'll be reporting to Chief Technology Officer Andrew Bosworth, who oversees Reality Labs. That group is tasked with developing wearable devices like smart glasses and virtual reality headsets. Dye's departure is being seen as a significant loss for Apple because he oversaw the interface design for the Vision Pro and iPhone X.

[00:48:34] Giannandrea will remain as an advisor at Apple until his retirement in spring 2026. So, Paul, big shakeups, it seems, happening at Apple. You're a longtime watcher and fan. What does this mean for them in the short term?

[00:48:46] Paul Roetzer: I don't know. I mean, every day it's like, man, another executive's leaving. The last five days, including over the weekend, every day I go on X there's a new story about another person leaving or [00:49:00] another person threatening to leave.

[00:49:01] Lots of rumors that Tim Cook is on his way out, sometime either early next year or later next year.

[00:49:08] Mike Kaput: Yeah. 

[00:49:09] Paul Roetzer: And who's gonna replace him? Yeah, there's just a lot going on there. Vision Pro, they made a big bet on it, didn't work. They spent $10 billion on cars for a decade.

[00:49:21] Didn't work. Siri sucks, has sucked for years. Sorry, Siri's turning on on my phone as I'm saying this. Sorry, Siri. It's bad, and it's obvious to everybody, and they just haven't figured it out. So part of this is they need a shakeup, but you don't want great executives leaving during that shakeup who you see being part of the solution moving forward.

[00:49:42] So I'm not sure how this ends. I just glanced at the stock. It's down about 2% over the last week. During that time, the Dow is up 1%, and the NASDAQ's down, I don't know, 0.5% or something. It's not like the stock is [00:50:00] getting punished, and especially Monday morning, you would think this would've had some effect. I don't know.

[00:50:05] I mean, 2% isn't a massive thing for them, and I have no idea if that 2% has anything to do with the executives. So I don't know. I would expect Apple to have been punished more by their inability to figure out AI. So I think there are probably some investors who would be excited by the idea that there's gonna be a shakeup and maybe they're gonna go figure this out.

[00:50:25] I don't know. As an investor in Apple, I am a little bit worried. But at the same time, I feel like they're just such a great, stable company, and they're gonna figure this out, and when they do, they're gonna get rewarded. It's almost like the stock isn't factoring in them figuring out AI yet.

[00:50:42] And so I feel like if they do, and they do it in an elegant way and in a very aggressive way, they could benefit. But I think a lot of this has to do with the future being vision-based. It's gonna be some form of glasses, like what they tried to do with Vision Pro, which didn't really work.

[00:50:58] Technically it worked, but from a [00:51:00] market-response standpoint it didn't. And everybody's trying to hoover up the talent that can make the breakthroughs or lead to the productization of the next user interfaces. So it's just gonna be wildly competitive between OpenAI and Meta and Google and Apple when it comes to that next generation of user interface.

[00:51:22] Anthropic IPO and AI Interviewer

[00:51:22] Mike Kaput: Next up, we've got a couple of stories related to Anthropic. First, Anthropic has reportedly hired legal counsel to prepare for an initial public offering, an IPO. According to the Financial Times, the company has tapped the law firm Wilson Sonsini to begin work on a listing that could occur as early as 2026.

[00:51:38] The report suggests this is part of their race against rival OpenAI to reach the public markets, with both companies obviously grappling with the astronomical costs of training AI models. Anthropic is currently negotiating a private funding round that could value the business at more than $300 billion. Now, sources characterize talks with investment banks about this as preliminary, [00:52:00] but an Anthropic spokesperson stated that operating as if they're publicly traded is, quote, standard practice for a company of their size.

[00:52:08] They added no decision has been made on whether or when to go public. Now, interestingly, at the same time, in another story, Anthropic is also developing its own models to conduct qualitative research at unprecedented scale. The company has introduced something called Anthropic Interviewer, a new tool powered by Claude that's designed to conduct automated, adaptive interviews with users about their experiences with AI.

[00:52:33] So this initiative aims to move beyond analyzing chat logs to actually understanding how people feel about and use AI outputs in their daily lives. They did an initial study using this tool of 1,250 professionals, gathering data from the general workforce, from scientists, from creatives, and they've now launched a public pilot of this tool, inviting eligible Claude.ai users to participate in short [00:53:00] interviews to further inform the company's societal impact research.

[00:53:04] So Paul, first, the IPO seems like it could be a huge deal, and then I'd love to know what you make of this AI interviewer. 

[00:53:12] Paul Roetzer: Yeah, the IPO, I kind of alluded to that earlier. I mean, I think Anthropic is just getting a lot of very positive sentiment right now for their direction, and I think they would do very well

[00:53:25] in an IPO. Again, I've said many times, I feel like Apple should have bought them. Google has 14% ownership in Anthropic, AWS has invested, everybody's got a piece of it. But I don't know, it would seem like someone would try and get them before the IPO, I guess is what I'm saying.

[00:53:46] Yeah, I don't know that that's gonna happen. The interviewer thing, Mike, piqued my interest significantly. So I've mentioned on the show a few times, I think of SmarterX in large part as a research firm. We use our podcast as a real-time distribution channel for the [00:54:00] things we're learning and sharing.

[00:54:01] And so I think a lot about what the future of research looks like, and as soon as I saw this, I was very intrigued. With the link in the show notes, you can go check out how this works, but I'll just give you a quick high level.

[00:54:18] They say the Anthropic Interviewer operates in three stages: planning, interviewing, and analysis. So, planning: in this phase, Anthropic creates an interview rubric that allows it to focus on the same overall research questions across hundreds or thousands of interviews, but which is still flexible enough to accommodate variations and tangents that might occur in individual interviews. They develop a system prompt, a set of overall instructions for how the AI model is to work, to give Anthropic Interviewer its methodology.

[00:54:40] This is where they, being Anthropic, included the hypothesis regarding each sample as well as best practices for creating an interview plan. This was established in collaboration with an actual human research team. And then, after putting the system prompt in place, the interviewer used its knowledge of the research goals, which they included in the blog [00:55:00] post, to generate specific questions and a planned conversation flow.

[00:55:04] There was then a review phase where human researchers collaborated with Anthropic Interviewer to make any necessary edits to finalize the plan. And then the second phase is interviewing. Anthropic Interviewer then conducted real-time adaptive interviews following its interview plan. At this stage, they included a system prompt to instruct the interviewer in best practices for interviews.

[00:55:25] The interviews conducted by the interviewer appeared on Claude.ai and lasted about 10 to 15 minutes. Mike, as I'm saying this out loud, imagine this with an AI avatar: doing customer success research, doing market research, consumer research, adapting in real time, running HR interviews, building an interviewer that does the... man.

[00:55:44] Oh, okay. I hadn't really thought about this. I made these notes, but I'm thinking in real time here. The third stage is analysis: once interviews were complete, a human researcher collaborated with Anthropic Interviewer to analyze the transcripts. The interviewer's analysis step [00:56:00] takes as input the initial interview plan and outputs answers to the research questions alongside illustrative quotations.

[00:56:07] Imagine then Nano Banana-style capabilities turning this into slide decks, summarizing the results in slide decks and things. At this stage, they say, we also used our automated AI analysis tool to identify emergent themes and quantify their prevalence. So I'll hit on a couple real quick findings, but to me, the big story here is what a brilliant use case this is.

[00:56:27] What an amazing innovation that was sitting in front of all of us. This probably isn't hard. Anybody with API access to Gemini or ChatGPT or Claude could have probably built this interviewer model. I could think of a hundred ways to use this right now; my mind is swimming with ways to apply this concept.
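To make that concrete, here is a minimal sketch of what such an adaptive interviewer loop might look like using Anthropic's Python SDK. This is an illustration of the concept only, not Anthropic's actual implementation; the system prompt, model name, and turn count are placeholder assumptions.

```python
# Minimal adaptive-interviewer loop: the model asks one question per turn,
# the human answers, and the growing transcript gives the model the context
# to adapt its next question. Illustrative only, not Anthropic's system.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are a qualitative research interviewer studying how professionals "
    "use AI at work. Ask one open-ended question at a time, probe "
    "interesting tangents, and never ask two questions in a single turn."
)

def run_interview(turns: int = 5) -> list[dict]:
    """Conduct a short adaptive interview in the terminal."""
    messages = [{"role": "user", "content": "I'm ready to begin the interview."}]
    for _ in range(turns):
        reply = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder; use any current model
            max_tokens=300,
            system=SYSTEM_PROMPT,
            messages=messages,
        )
        question = reply.content[0].text
        print(f"\nInterviewer: {question}")
        messages.append({"role": "assistant", "content": question})
        messages.append({"role": "user", "content": input("You: ")})
    return messages  # the transcript, ready for a separate analysis step

if __name__ == "__main__":
    run_interview()
```

The analysis stage described above would then feed that transcript, along with the original interview plan, into a second prompt to extract answers and illustrative quotations.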

[00:56:47] So, what they learned from this, which to me isn't even the headline: overall, members of their general sample of professionals described AI as a boost to their productivity. 86% reported that [00:57:00] AI saves them time, and 65% said they were satisfied with the role AI plays in their work. 69% of professionals mentioned, and this is an interesting one, the social stigma that can come with using AI tools at work.

[00:57:13] One fact-checker told Anthropic Interviewer, quote, A colleague recently said they hate AI, and I just said nothing. I don't tell anyone my process because I know how a lot of people feel about AI. I think that is happening way more than we're hearing about. And then the last one: whereas 41% of interviewees said they felt secure in their work and believed human skills are irreplaceable,

[00:57:36] which kind of goes to the questions we were asking in AI Pulse, 55% expressed anxiety about AI's impact on their future. Of the group who expressed anxiety, 25% said they set boundaries around AI use, for example, an educator always creating lesson plans themselves, while 25% adapted their workplace roles, taking on additional responsibilities or pursuing more specialized tasks.

[00:58:00] So again, great research. It's fun to see. And this gets beyond just the coders, which historically has been the issue with Anthropic's research: it's dominantly talking to people who do coding. This seems to get outside of that with a nice sample size of knowledge workers. But again, the interviewer one, if you're like me and you're listening to this, your mind is probably just spinning with ways you could use that kind of technology in interview processes, customer surveys, market research, all those things. Man, it'd be huge.

[00:58:29] Yeah. I feel like 

[00:58:29] Mike Kaput: we could do a whole segment on the possibilities there. I would also say, too, beyond the technological implications, don't sleep on having AI interview you, across your prompts and use cases, to get things done. It's extremely valuable. With very simple prompts, you can have AI ask you

[00:58:47] a battery of questions about anything you're doing and get a lot richer outputs, either for creating prompts or for getting something done.

[00:58:54] Paul Roetzer: Yeah. One of the highest-rated talks we had at MAICON 2025 this year was Geoff Woods, who [00:59:00] is the author of The AI-Driven Leader, and he shared his CRIT framework, which is a prompting framework.

[00:59:05] And I think the big breakthrough for a lot of people who love that framework is the idea of having the AI interview you and ask you one question at a time. So, yes, I could see the same model actually being applied to strategy, where it's like, hey, I need you to help me build a marketing strategy and allocate budget for 2026.

[00:59:23] I want you to interview me like you were a consultant from McKinsey, and let's work through this together. And then the AI interviews you and you talk, use voice, you're not typing all this stuff. Yeah, I feel like we could run a whole hackathon on just what to do with this kind of technology.

[00:59:40] I'm like, for sure. 
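To illustrate the pattern Paul describes, a strategy prompt along these lines captures the idea. The wording below is an illustrative sketch, not the actual CRIT framework from the book:

```
I need to build a marketing strategy and allocate budget for 2026.
Act as a consultant from a top strategy firm and interview me about
my goals, constraints, and current results. Ask one question at a
time and wait for my answer before asking the next. When you have
enough context, draft the strategy with me.
```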

[00:59:42] Jensen Huang Rogan Interview

[00:59:42] Mike Kaput: Next up, Nvidia CEO Jensen Huang joined The Joe Rogan Experience this past week. He sat down with the host of one of the world's most popular podcasts to discuss the future of AI. The conversation centered heavily on the geopolitical implications of AI. Huang compared the current tech race in [01:00:00] AI to the Manhattan Project and the Cold War,

[01:00:02] arguing that AI grants nations military superpowers. He praised President Trump's focus on reindustrialization, noting the administration wants to ensure critical technology manufacturing remains in the United States. Now, despite all this, Huang admitted to Rogan that, quote, nobody really knows what the ultimate national security endgame looks like.

[01:00:22] When he was asked about AI safety, he rejected the idea that we might have a single kill switch for AI and instead likened AI defense to cybersecurity: a model where a global network of defenders constantly shares info to patch vulnerabilities. And interestingly, he closed the interview on a personal note,

[01:00:41] revealing that despite leading a multi-trillion-dollar company, he still operates in a constant state of anxiety, driven by a fear of failure that has persisted for decades. Now, Paul, I know you found a lot to like in this interview. What jumped out at you?

[01:00:56] Paul Roetzer: Yeah, it was more the human side to me, honestly.

[01:00:58] The entrepreneurial stuff I'm generally [01:01:00] familiar with, obviously, the background of Nvidia, but not some of the early origin stories, like the near failure where the CEO of Sega stepped in. They actually had a contract. People don't know, Nvidia started more in game design, building chips for gaming and consoles, to enable next-generation video games.

[01:01:20] And then they eventually realized, with the 2012 breakthrough in computer vision by Ilya Sutskever and Geoff Hinton, that the chips, the GPUs, they'd built for gaming could be used to train AI models. That was sort of the big breakthrough. But he told this story from back in '95, how they were basically gonna go out of business, and they had a contract with Sega, the big gaming system at the time.

[01:01:40] The Sega CEO had come from Honda prior to that. They had a deal with them to build this game console, and then they failed. Then Jensen had to go to Japan. He's like, yeah, we just can't do it. It's not gonna work. The thing we thought was gonna work isn't gonna work.

[01:01:53] I need you to let me out of this contract or we're gonna go out of business. And on top of that, I need you to let me keep the $5 million you [01:02:00] gave us. And the guy's like, well, it's not gonna work. It's almost a hundred percent probability this money is gone and you're gonna fail. And Jensen's like, I acknowledge that.

[01:02:11] He was like, this is probably not gonna work, but if you don't let us keep this money, we are done. And so the CEO of Sega came back and he's like, all right, we'll do it. And so that was the $5 million that then floated Nvidia's existence. Wow. Made it possible for them to make this next bet on the next-generation technology.

[01:02:28] And eventually IPO. And then he told the story about how Sega sold their shares at IPO and still made out great, but, whether the math is right or not, or he was exaggerating, Jensen said had they held on to that investment, it would be worth about a trillion dollars today. Oh, wow. Which is wild to hear.

[01:02:44] And then he told this other one that I thought was fascinating; I'd never heard the backstory to this. So the first one, it was called the DGX-1, the supercomputer they were building back in like 2015. Jensen bet the company on this thing.

[01:02:59] They spent [01:03:00] billions building this computer with no buyers. It was basically, this is what the future of computing is gonna look like, this is accelerated computing. He had a vision for this, bet everything in the company on it. And then he announces it at their conference, their GTC conference.

[01:03:13] He's like, the audience is totally silent. Nobody gets it. No one wants this thing. And he's thinking, the thing costs $300,000, he spent billions on R&D for it, and he's screwed, basically. And so then he's doing an event with Elon Musk, and they're actually talking about the future of autonomous driving.

[01:03:31] They're on stage together and he's explaining this chip, and Elon's like, you know, I actually have a company that might need that. It's a nonprofit. And Jensen's like, oh God, no nonprofit's gonna be able to afford this $300,000 thing. Well, it ends up the nonprofit was OpenAI. And so there's this famous picture of Jensen delivering the first chip to OpenAI, and Elon's in the picture.

[01:03:53] And Sam, and I think Ilya is there, and Greg Brockman. And so this photo is like Silicon Valley lore. Well, this is the [01:04:00] backstory to how it came to be, where Elon committed to buying this thing that no one else wanted. Wow. So, I don't know. He came here at age nine as an immigrant because of issues in Thailand.

[01:04:10] He didn't see his parents for two years. They would record audio messages to each other once a month and then mail them to each other. And that's how they communicated with him and his brother for two years. Wow. So, I don't care how you feel about Joe Rogan, put any of that aside.

[01:04:25] It's just an incredible entrepreneurial story. And if you've ever been an entrepreneur, you understand the amount of risk that goes into doing it, especially when you want to do something big. And I will tell you from experience, nothing at the level he's done, but having built multiple companies, it is an insanely lonely place.

[01:04:44] There are very few people who understand what you're going through: the decisions you have to make every day, the personal risk you have to take to do things, the resilience you need through failure and being told no and being told you're wrong for years on end, [01:05:00] then making these massive financial bets and having high conviction about what the future looks like, oftentimes for years, before anyone else agrees with you or realizes you were right all along. It's insanely lonely.

[01:05:12] And so to hear his story, and then when you hear him say, I'm not even driven by success, I'm driven by fear, right, you understand where that comes from, 'cause you don't ever forget those feelings. In building the AI Institute, we lost money for eight years straight, never made a profit for eight years.

[01:05:27] But I believed with enormously high conviction that we were right and the world was gonna change and AI was gonna be everywhere. Those are terrifying times, and you have to have this insane faith that you're right and you're gonna figure it out. So I think if you only listen to it as an entrepreneurial story, it's incredible, and it helps you.

[01:05:48] Again, if you're not an entrepreneur, you've never been on that side, maybe you work for an entrepreneur, it might help you understand a little bit better the entrepreneurial mindset and why maybe entrepreneurs seem a little crazy [01:06:00] to most people, 'cause we have to be.

[01:06:03] Mike Kaput: I love that. 

[01:06:04] Perplexity Lawsuits

[01:06:04] Mike Kaput: All right. In less inspiring news, Perplexity, the AI-powered search engine, is facing lawsuits from two major newspaper publishers regarding its use of copyrighted journalism.

[01:06:15] The Chicago Tribune filed a complaint in federal court alleging copyright infringement, arguing that Perplexity delivers the newspaper's content verbatim to users. This lawsuit targets Perplexity's use of retrieval-augmented generation, or RAG, a method that retrieves source content in real time to ground a model's answers. The Tribune alleges Perplexity's Comet browser uses this technology to bypass paywalls and generate detailed summaries of articles without permission.

[01:06:40] Perplexity's lawyers previously stated the company did not train models on the Tribune's work, but acknowledged the system may produce non-verbatim factual summaries. The New York Times also filed suit, stating Perplexity copies journalism to power its product without compensation. A Times spokesperson noted that the RAG [01:07:00] process allows the search engine to crawl the internet and retrieve content effectively reserved for paying subscribers.
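For context on the mechanism at issue: retrieval-augmented generation fetches relevant source documents at query time and places them in the model's prompt, so the answer is grounded in retrieved text rather than training data alone. Below is a toy, self-contained sketch of that pattern; the corpus, the keyword-overlap scoring, and the prompt wording are illustrative assumptions with no connection to Perplexity's actual system.

```python
# Toy retrieval-augmented generation (RAG) sketch: rank documents by simple
# keyword overlap with the query, then stuff the top matches into a prompt.
# Real systems use a search index or embeddings instead of this toy scoring.

CORPUS = [
    "The city council voted 7-2 to approve the new transit budget.",
    "Researchers released a study on urban air quality this week.",
    "The transit agency plans to add electric buses next year.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    terms = set(query.lower().split())
    return sorted(
        CORPUS,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str) -> str:
    """Ground the model's answer in retrieved passages, not training data."""
    sources = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer the question using only the sources below.\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {query}"
    )

print(build_prompt("What did the city council decide about the transit budget?"))
```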

[01:07:06] Now, these actions join ongoing litigation against Perplexity from other entities, including Dow Jones and Reddit. So, Paul, lawsuits against Perplexity are nothing new, but they are stacking up. At what point do these become a real existential threat for Perplexity?

[01:07:22] Paul Roetzer: Yeah, so again, I'm not hating on Perplexity.

[01:07:25] I've been pretty clear on my thoughts on Perplexity overall on this podcast. And again, if I put it in the context of, let's say it was a publicly traded company, I'd want nothing to do with it. Here's my problem with them, besides that I think it's an unethical company overall, based on the decisions they've made

[01:07:44] and things they've said publicly: they have no leverage in this situation. If you come at OpenAI or Anthropic or Google with these same allegations, the leverage they have is a massive user base, and they're the ones building the models, and the [01:08:00] media companies need them. Perplexity doesn't build their own models.

[01:08:04] The media companies don't need Perplexity. They have a very small segment of the market; it does not affect these media companies. So I think they're going to have to settle these lawsuits or they're gonna lose their company. Now, who's gonna give them money just to settle lawsuits when they have no leverage to change the behavior of these companies against them moving forward?

[01:08:28] So I just feel like they're in a really tough spot, where they're probably gonna have to settle for hundreds of millions, if not billions, of dollars that they don't have, and they have no leverage to do licensing deals. What are they licensing? They're not the ones building the models. Right. So that's the out for these other companies: Meta can show up and say, hey,

[01:08:49] let's license real-time data, and we'll spend $20 billion with you over the next three years. Perplexity's not doing that. That's the difference. So again, is it good tech? As a listener, [01:09:00] maybe you love their tech, maybe it's the one you use 'cause it gives you access. That's great. I'm not disputing that it's good tech and that it's interesting as a company.

[01:09:07] But when you zoom out from that and you say, what is the reality of the long-term lifespan of this organization? I think they're gonna go the route of the company we're about to talk about: they're either gonna have to sell or they're gonna get sued out of business. That's kind of what I think happens to Perplexity.

[01:09:23] Interesting. Again, not to hate on 'em. I'm just trying to be objectively honest: this is the reality of the period we're in. It's gonna be brutal for some startups that don't have leverage.

[01:09:33] Meta Acquires Limitless

[01:09:33] Mike Kaput: On that next subject, Meta has acquired Limitless, a startup best known for creating an AI-powered pendant that records and transcribes real-world conversations.

[01:09:44] Limitless CEO Dan Siroker confirmed the deal in a blog post this past Friday, stating that his team will join Meta's Reality Labs wearables organization to help accelerate the development of AI-enabled consumer hardware. Limitless, formerly known as Rewind, developed a [01:10:00] $99 wearable device designed to augment memory by capturing audio and generating searchable summaries of daily interactions through a companion app.

[01:10:09] The company had previously raised more than $33 million from investors, including Andreessen Horowitz and Sam Altman. According to the announcement, both companies share a vision of bringing personal superintelligence to users through wearable technology. Interestingly, following the acquisition, Limitless will stop selling devices to new customers and wind down its desktop recording software.

[01:10:32] However, they do plan to maintain support for existing hardware users for a year, waiving subscription fees while asking users to accept revised privacy terms. So, Paul, what are your thoughts on Limitless specifically going to Meta, and the implications of this overall? Especially since Meta just poached Apple talent for wearables as well.

[01:10:53] Paul Roetzer: Yeah, I'm not gonna take a victory lap on this one. I'm on the record on Limitless as a company and [01:11:00] as a product. I was not a fan. I did not see this ending well. So I'll just highlight a couple of key points here. Anytime you are willing to be a guinea pig for new AI technology, specifically hardware devices that are intended to record your life, ponder who gets that data when that company fails.

[01:11:22] In this case, it's Meta. I highly doubt that whatever the exit was, because they didn't disclose it for obvious reasons, is anything close to what they hoped this company was going to exit for. To be positive from the entrepreneurial side, 'cause I just gave some love to the entrepreneurs of the world:

[01:11:46] they took their shot. I don't know Dan personally, I don't know any of the founders there. Maybe the tech was too early, maybe the competition was too great. It didn't work. And maybe it's gonna work out well [01:12:00] for them at Meta. But again, I'm more focused on the big-picture takeaway here: let's all be very conscious of who we're giving our data to, who we're connecting it to,

[01:12:12] because you don't control who acquires these companies, or their data, when the end comes for them.

[01:12:19] Mike Kaput: Yeah. It's especially relevant when you're recording your entire environment or having very intimate chats with, you know, AI models and things like that. 

[01:12:27] Paul Roetzer: Correct. And you're putting at risk the data of other people who, unbeknownst to them, were being recorded.

Yes. Yeah, that's some stuff to work out in society.

[01:12:39] Pope Weighs In on AI

[01:12:39] Mike Kaput: All right, next up, Pope Leo the 14th has issued a warning regarding the impact of AI on human dignity and child development in an address Friday. Among his remarks, the pontiff stated, quote, human beings are called to be coworkers in the work of creation, not merely passive consumers of content generated by artificial technology.

[01:12:57] Our dignity lies in our ability to reflect, choose [01:13:00] freely, love unconditionally, and enter into authentic relationships with others. Recognizing and safeguarding what characterizes the human person and guarantees their balanced growth is essential for establishing an adequate framework to manage the consequences of artificial intelligence.

[01:13:15] He basically warned that the technology's current trajectory raises serious concerns about humanity's capacity for critical thinking and risks concentrating power in the hands of a few. This speech actually is not the first he's given; it escalates a growing tension, interestingly, between the Vatican and Silicon Valley.

[01:13:33] For instance, last month, venture capitalist Marc Andreessen drew a bunch of heat for publicly mocking the Pope on X after the Pope posted that AI carries ethical and spiritual weight. Andreessen, as we've talked about, views AI deceleration as dangerous to human life, but he did delete the post. And it basically shows, Paul, that the Pope seems to have emerged as a leading voice for the moment

[01:13:57] on the kind of pro-human side of the AI [01:14:00] debate. It seems like another signal that AI is becoming a lot more of a hot-button social issue. What do you think?

[01:14:06] Paul Roetzer: So again, if you're new to the show: my political views, my religious views, Mike's political views, they're irrelevant. This is not a show about our opinions or about trying to convince anyone of anything.

[01:14:20] The reason we talk about politics, whether it's Bernie Sanders or Donald Trump or Josh Hawley or Ron DeSantis or whomever, the reason we talk about religion, you know, with the Pope, is because it affects people, and people need to know. In the world, 1.4 billion people follow the Pope.

[01:14:37] Catholicism is the largest sect of Christianity. In the United States, 68 million people identify as Catholic, roughly 20% of the US population. The Pope is infallible; I was raised Catholic for 12 years. So if the Pope says something about AI, then 20% of the US adult population is supposed to believe that.

[01:14:58] So if the Pope is [01:15:00] making AI a key part of his agenda, that matters. It affects the way people think about AI, the way they respond to AI. That's why we track this stuff, it's why we report on this stuff. We are not offering opinions one way or the other on whatever the Pope is saying, or on whether you personally should care.

[01:15:17] What we're saying is there are a whole lot of people in the world who care what the Pope says, and it affects what they think, what they believe, and how they act. And so it's really important that you have that context as you start to think about how society is going to respond to AI moving into 2026.

[01:15:35] Mike Kaput: It's also quite interesting to note, as someone who was also raised Catholic, that the names Popes choose are not random.

[01:15:43] Pope Leo the 13th, the current Pope Leo's predecessor in name, was Pope from 1878 to 1903 and is best known as the social pope because of his work on labor and capital, at a time of great economic disruption and [01:16:00] transition. It is not a surprise at all that this Pope is planting a flag on this issue.

[01:16:03] Paul Roetzer: And if I remember correctly, Mike, we talked about that a few months ago, that AI actually played a role in the name he chose, for that reason.

[01:16:12] Yes, exactly. And they actually called AI out as part of it. 

[01:16:14] Mike Kaput: I wouldn't even say this is one of his pet issues. This is probably the reason he is named what he is named, right? Would be my guess.

[01:16:21] Paul Roetzer: So again, macro level: AI is a political and a religious topic, a very important one moving forward from here.

[01:16:30] And that is undebatable. Again, to go back to these fundamental truths versus beliefs (that's right), we're moving into the fundamental truth area: AI is becoming a very important political and religious issue. This is not my opinion; we're sharing these beliefs from high-level, influential people,

[01:16:49] Because it's going to matter in part because of our next topic, Mike?

[01:16:53] Data on AI Job Cuts

[01:16:53] Mike Kaput: Yes. We've got some interesting data in our next topic, which is a new report from global outplacement firm Challenger, Gray [01:17:00] and Christmas, which quantifies the growing impact of AI on US labor numbers. They paint the broader economic picture as showing over 1.1 million job cuts announced through November of this year.

[01:17:12] That is, by the way, the highest year-to-date level since 2020. And they also talked about how much of this is directly tied to AI. According to their data, US employers cited AI as the explicit reason for 6,280 layoffs in November alone. Now, this contributes to a 2025 total they've been tracking: they say 54,694 job cuts through November have been attributed specifically to AI this year.

[01:17:43] Now, the technology sector continues to lead private-sector reductions, announcing over 12,000 cuts last month; they've totaled 153,000 for the year. The bigger factors here are things like restructuring and market conditions, which are the top overall cited reasons for workforce [01:18:00] reductions. But interestingly, the AI numbers, Paul, are only about, what, 5% of the total layoffs they're tracking.

[01:18:09] But it is interesting to see them really put specific numbers to this now. 

[01:18:14] Paul Roetzer: Yeah, and I think it's underreported, honestly. It is funny, somebody on LinkedIn, Matt, I think he was a listener, had asked about this. He said, I pose a controversial question.

[01:18:29] He tagged me and Ethan Mollick in it, and he tagged you as well, Mike, asking about the midterm and long term: are entry-level roles still at risk, or is it really mid-career workers? So I'll just real quick read what I wrote. I said, right now we are definitely seeing a more direct impact on entry level.

[01:18:45] While they are AI-native in a sense, they lack the business experience, instinct, and context to ask the right questions of the AI assistants and to know what to do with the answers. I believe moving forward companies will put a premium on AI literacy, interpersonal communications, creativity, critical thinking, [01:19:00] curiosity, emotional intelligence or EQ, imagination, and adaptability.

[01:19:04] Leadership will prioritize employees at all levels that know how to talk to, collaborate with, and learn from AI. It's hard to say at this point what that means for middle management over the next three to five years. I tend to lean in the direction of Matt's thesis that over time, middle managers who lack high-level strategic abilities will be left behind.

[01:19:22] One senior leader with strong strategic abilities and high AI literacy will be 10 to 100 times more productive and impactful. I'm personally seeing this already with my own staffing and organizational design at SmarterX. I think as AI agents become more autonomous and reliable, we'll see a strong distribution of senior leaders plus AI agents

[01:19:41] plus entry level, with very little middle management. That's sort of my current theory. And again, I don't want to do the fear side of this, but I think people have to understand 2026 is not gonna be easy when it comes to this stuff. There [01:20:00] are indications, things I have seen and heard, that tell me we are in for way more AI-affected layoffs.

[01:20:07] And I can tell you that with pretty high confidence, and I think people have to be prepared for that. If you have kids in college right now, like seniors in college, it's gonna be rough. Coming out next summer and trying to find entry-level work is gonna be rough. Yeah. And I think we just have to be realistic about that.

[01:20:29] If you're in the education space, you gotta be thinking about these things and talking about these things; we can't ignore it. It's gonna be a very difficult period, I believe, and we just need to keep having the conversations about it, keep trying to find ways through it. And again, keep trying to drive growth.

Growth and innovation are the answer. Yeah. Entrepreneurship is the answer. We have to create more opportunities.

[01:20:50] Data on AI and Parenting

[01:20:50] Mike Kaput: I love that. So, Paul, I wanted to end here just really quickly talking about another piece of research. This actually comes from one of our longtime listeners, Deborah Ross at [01:21:00] KidsOutAndAbout.com.

[01:21:01] It's a big platform for parents and grandparents, supporting them with content and also events and activities for kids, basically in every US state and I think also a bit in Canada. And she put out a report based on a November 2025 survey polling over 300 parents and grandparents in the US and Canada regarding AI literacy.

[01:21:22] And interestingly enough, this data shows that 54% of respondents say they feel somewhat confident in their general understanding of AI, but it doesn't translate to parenting readiness, because 52% of parents and grandparents state straight up they do not feel equipped to help children navigate AI technology.

[01:21:41] Only 5% said they truly feel confident in their ability to guide kids. It's really interesting also to look at: she asked a question about what kind of educational needs they have, what they would like to learn about most. And interestingly, the top answer was how to spot misinformation or bias, followed [01:22:00] closely by helping kids use AI responsibly.

[01:22:02] There's also some interesting qualitative data in there, where she had parents and grandparents write in how they were feeling about AI. So I just wanted to highlight that. I know we talk a lot about AI and parenting, and it's always good to see all the research out there.

[01:22:17] Obviously you have to read it for yourself, take it with a grain of salt. But I thought that was a really fascinating look at where we're at, and probably, I don't know about you, it gels with what we're hearing and seeing in terms of people needing much more AI literacy when it comes to parenting.

Yeah.

[01:22:31] Paul Roetzer: Every conversation I have with parents, I do feel this sense of urgency to do more in this space. I'm glad you guys had that meeting and highlighted some of this research. I don't know what we're gonna do. I do a lot of just personal stuff.

[01:22:48] I'll go talk to my high school, go talk to colleges, talk to parents whenever I get a chance to raise awareness. But I don't know, there's gotta be something more. I've got some ideas, but we'll [01:23:00] talk about that another time. I feel like, yeah, this is important research.

[01:23:04] It's important that a lot of people are thinking about this and we're all trying to figure out the most positive way to handle it. I get asked all the time how I handle this with my own kids, at 12 and 13, and I don't have all the answers. I'm trying a lot of stuff.

[01:23:17] Mike Kaput: Yep. I'm not sure.

[01:23:19] Well, it's worth mentioning SmarterX.ai, under Tools, has KidSafe GPT, which you created to help parents have conversations with kids. So it'll be interesting to see, because I think Deb is also putting out some more research soon here. But just an important topic, good to talk about. A couple final notes here, Paul: if you have not subscribed to our newsletter at marketingaiinstitute.com/newsletter, please do so.

[01:23:42] But also, please leave us a review. If you have not yet reviewed the podcast on your podcast platform of choice, it is the number one thing you can do to help us out, not only to get better, but also to spread the word about what we're doing and reach more listeners. So, Paul, pretty packed week. Appreciate you guiding us through everything going on this week. [01:24:00]

[01:24:00] Paul Roetzer: We'll be back next week, and I'm pretty sure we're gonna have an AI Answers episode maybe this Thursday. Yeah, I think we'll have a second edition this week of our AI Answers series, which is presented in partnership with Google Cloud. So, did we do a Scaling AI class last week?

[01:24:17] Maybe? Yes. Is that what it was? I think so. Maybe it was Intro. Isn't there a Scaling class this week? I don't know, we gotta check the marketing calendar right before we get on this. But yes, there will be a second edition coming up soon, so keep an eye out for that, in addition to our regular weeklies.

[01:24:35] Alright, Mike, great talking with you. Thanks. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.AI to continue your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, and earned professional certificates from our AI Academy, and [01:25:00] engaged in the SmarterX Slack community.

[01:25:02] Until next time, stay curious and explore AI.
