
[The AI Show Episode 186]: GPT-5.2, Disney-OpenAI Deal, New Trump AI Executive Order, OpenAI State of Enterprise AI Report, Teen AI Usage & Data Centers in Space


Get access to all AI courses with an AI Mastery Membership.

A billion-dollar check from Disney. A federal crackdown on state AI laws. And a new model from OpenAI that beats human experts 71% of the time.

In Episode 186 of The Artificial Intelligence Show, Paul and Mike unpack the release of GPT-5.2, Disney’s strategic pivot to license its IP for Sora, and President Trump’s executive order designed to accelerate American AI at all costs.

Listen or watch below, and scroll down for the show notes and the transcript.

This Week's AI Pulse

Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI. 

If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.

Click here to take this week's AI Pulse.

Listen Now

Watch the Video

Timestamps

00:00:00 — Intro

00:03:46 — AI Pulse

00:06:27 — GPT-5.2 and OpenAI Turns 10

00:22:43 — Disney-OpenAI Deal

00:32:41 — Trump Executive Order to Override State AI Laws

00:44:17 — OpenAI State of Enterprise AI Report

00:53:03 — Google Cloud ROI of AI Reports

00:56:14 — Microsoft Lowers AI Sales Expectations

01:02:14 — TIME Person of the Year: The “Architects” of AI

01:06:08 — The Economics of AI and Data Centers in Space

01:14:31 — Shopify SimGym

01:18:37 — Research on Teen AI Usage

01:21:11 — OpenAI Certifications


This episode is brought to you by AI Academy by SmarterX.

AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. You can get $100 off an individual purchase or a membership by using code POD100 at academy.smarterx.ai.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: Do you realize how low adoption is for AI? And of the 10%, like, how many of those people actually know the full capability of what they have? Like, it's so early. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer.

[00:00:21] I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:40] Join us as we accelerate AI literacy for all.

[00:00:47] Welcome to episode 186 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We have a lot to talk about today. I mean, Mike and I were actually just sitting here before we started recording [00:01:00] to see like, is there something we should like move to the next one?

[00:01:02] Because yeah, we basically ended up with the four main topics and I don't know any other way around it. There's a couple others that probably could have been main topics in addition to those four. We'll ride with it. 'cause I'm expecting more stuff to happen this week, so I don't know what we could punt to the next one.

[00:01:16] So we'll just, we'll try and get through everything we've got outlined here. It's a lot of interesting things to talk about, big topics to wrap the year up. We are recording on Monday, December 15th at 11:00 AM Eastern time. I think there might be a new Google model later today. So just a timestamp for everybody.

[00:01:35] We are gonna do one more episode, Mike, right, in 2025. We'll be back next week. Yeah. So for our regular listeners to our weekly, we've got the one drop, you know, this one. And then we'll do one, number 187, next week. And then that'll be a wrap for the year for us. Mike and I are gonna

[00:01:54] recharge for a couple weeks and spend some time with our families. Alright, this episode is brought [00:02:00] to us by AI Academy by SmarterX. AI Academy helps individuals and businesses accelerate their AI literacy and transformation through personalized learning journeys and an AI-powered learning platform.

[00:02:12] There are currently nine professional certificate courses available on demand right now, with more being added each month. So just to give you a sense of what's available, we basically break things into collections and then series within those collections, and that's what enables the personalized learning journeys, as you can go through and sort of pick the things you want to focus on.

[00:02:33] So right now we have three collections I'll highlight. The AI Foundations collection has AI Fundamentals, Piloting AI and Scaling AI. Again, each of those are professional certificate course series. The AI for Industries collection has AI for Professional Services, AI for Healthcare and AI for Software and Technology, which, Mike, I believe just dropped last week.

[00:02:53] Is that right? It did, yeah. Okay. And then the AI for Departments collection has AI for Marketing, [00:03:00] AI for Sales and AI for Customer Success. So again, all nine of those are available right now. If you are an AI Mastery member, you can watch any of them, take, you know, participate in any of them. They're all included in your annual Mastery membership.

[00:03:14] They are all available for individual purchase as well. So if you don't want to go all in on a Mastery membership, you can go and get individual course series. Now, if you're gonna do two or more of 'em, you might as well just get the Mastery membership, because the price is the same. Two courses cost the same as an entire year of membership and access to all of them.

[00:03:34] So you can go learn more about AI Academy and the AI Mastery membership program, again for both individuals and business accounts, at academy.smarterx.ai.

[00:03:46] AI Pulse

[00:03:46] Paul Roetzer: Alright, Mike, we have our AI Pulse. So again, if you're new to the show, we do these informal polls each week. It is just asking our audience how they feel about different topics we're covering on the podcast.

[00:03:57] Depending on the week, like, this week we've got [00:04:00] 76 responses when we're recording this, some weeks it's in the, you know, low hundreds. But that's why we position them as an informal poll. This is not projectable data based on a large enough sample size, but it just gives you a sense of kind of how our audience feels about different topics.

[00:04:13] So last week we asked: OpenAI faced backlash for testing app suggestions, which looked like ads in ChatGPT. And the question said, if ads or sponsored suggestions become a permanent fixture in the tool, would you switch models? 60% said I would consider switching. 22% said yes, I would switch immediately.

[00:04:35] So that's interesting. Like, so yeah. What, 83% combined say I would switch or I would consider switching. Yeah. Now the interesting thing is, like, well, Gemini's probably gonna have ads at some point too, so. Right, right. I think, well, I'll just probably get used to it. And then the other question was, the latest Challenger report cites a rise in AI-driven job cuts.

[00:04:55] How is AI impacting workforce planning at your organization? [00:05:00] The largest percentage was it is having no impact on headcount. So 42% said that, but then we did have 20% say we are hiring different roles and skills. 17% said, I don't know how it's impacting headcount. And another 20% said we have slowed or frozen hiring.

[00:05:18] So, you know, pretty good mix there. But as of right now, 42% said it's not having an impact on headcount yet. Alright, this week we have two more questions. So again, you can go to smarterx.ai/pulse to participate in these polls. This one, again, we're previewing topics we're gonna talk about today.

It says, does the new Disney-OpenAI deal change your perspective on using AI video tools like Sora for creative or business projects? So we'll talk about the licensing deal between Disney and OpenAI today. And then the next, again, a big topic is gonna be an executive order related to AI regulation.

[00:05:55] So the question here is, regarding the new executive order on AI regulation, do you [00:06:00] believe a single federal standard is better than individual state laws? So we will give you some context to answer that question as we go through today. Alright. So again, smarterx.ai/pulse if you wanna participate in those,

[00:06:13] and check out the previous Pulse surveys. Alright, Mike, like we said, just a ton happened last week. Lots of big items to talk about, starting with a new model and OpenAI turning 10 years old.

[00:06:27] GPT-5.2 and OpenAI Turns 10

[00:06:27] Mike Kaput: Yeah, Paul. So OpenAI is marking its 10th anniversary this past week, and they did that with the release of GPT-5.2, which is an advanced AI model designed to counter Google's recent launch of Gemini 3. Bloomberg reports that, quote, the new model, GPT-5.2, is faster and more adept at finding information, writing and translating,

[00:06:49] the company said Thursday. The software, available in three tiers, is also intended to be better at mimicking the human process of reasoning to handle more complicated, lengthier tasks in fields such as math [00:07:00] and programming. Now, you know, interestingly, new data released by OpenAI highlights a specific surge in performance with GPT-5.2 on knowledge work tasks.

They have this benchmark we'll talk about called GDP-Val, which we mentioned on a previous episode, and this basically measures how good AI is at real-world knowledge work. Some of the stats they published are pretty interesting. GPT-5.2 Thinking achieved a score of roughly 71%, up from 39% for GPT-5.1 Thinking, which came out literally in November.

[00:07:35] Now, Wharton Professor Ethan Mollick notes that this metric suggests that in head-to-head competition against human experts on tasks requiring four to eight hours of work, the new model is now winning roughly 71% of the time. So obviously this release follows what we covered in previous episodes about the code red directive from CEO Sam Altman to accelerate development.

[00:07:58] And Altman [00:08:00] actually, as part of a retrospective for the 10-year anniversary, said that OpenAI, among other things, is now almost certain to build superintelligence in the next 10 years. So while you celebrate, if you celebrate OpenAI's anniversary, you can try out GPT-5.2. It started rolling out to paid subscribers.

[00:08:20] So, Paul, maybe talk to me first about GPT-5.2. The GDP-Val thing seems pretty significant. And then maybe if you have any thoughts on the overall anniversary here.

[00:08:30] Paul Roetzer: Yeah, so GDP-Val we talked about on episode 170, so if you wanna go back and sort of listen, that was on September 30th, 2025, when that episode dropped.

[00:08:40] And the basic premise here is the tests of IQ are basically saturated. So when you're trying to evaluate these models against standardized tests that a human might take, the AI is already there. It's at basically top human level, if not beyond top human level, at a lot of these tasks. And so it's [00:09:00] really hard for all of us users to feel the difference when we're just talking about increases in IQ points.

[00:09:07] So what they're trying to do with GDP-Val, and what a lot of other AI labs are doing, and the thing we're hearing more and more about, is the impact on knowledge work. So basically, what is the equivalent of its capabilities versus a human actually doing a task? So when they rolled out GDP-Val, they said, our mission is to ensure AI benefits all of humanity.

[00:09:29] As part of that mission, we want to transparently communicate progress on how AI models can help people in the real world. They went on to say, that's why we're introducing GDP-Val, a new evaluation designed to help us track how our models and others perform on economically valuable, real-world tasks.

[00:09:46] Now, if you remember when they did this, Claude actually outperformed OpenAI's best model, and they did share that data, and, you know, everybody was kind of acknowledging at the time that they appreciated that OpenAI was being transparent about the fact that they didn't have the best [00:10:00] model in some cases.

[00:10:01] So they said, we call this evaluation GDP-Val because we started with the concept of gross domestic product as a key economic indicator and drew tasks from key occupations and industries that contribute most to GDP. So their full data set includes 1,300 specialized tasks. Every task is based on real work products, such as a legal brief, an engineering blueprint, a customer support conversation, or a nursing plan.

[00:10:27] GDP-Val tasks are not simple prompts. They come with reference files and context, and the expected deliverables span documents, slides, diagrams, spreadsheets, and multimedia. So again, trying to more closely replicate how you and I all work in our jobs each day. And how far along are these models in performing those tasks?

[00:10:49] They then have expert graders, a group of experienced professionals from the same occupations represented in the dataset. The graders blindly compare model-generated deliverables [00:11:00] with those produced by task writers, not knowing which is AI versus human generated. And then they provide critiques and rankings on those outputs.

[00:11:08] And when they first released this back in September of this year, they said, we found that today's best frontier models are already approaching the quality of work produced by industry experts, not average people, but industry experts. And then they said, in addition, we found that frontier models can complete GDP-Val tasks roughly 100 times faster and 100 times cheaper than industry experts.
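To make that blind-grading setup concrete, here is a minimal sketch, not OpenAI's actual GDP-Val code, of how a pairwise win rate like the 71% figure could be computed from blinded expert preferences. The data structure, the tie handling, and the example tasks are all our own assumptions for illustration.

```python
# Minimal sketch (not OpenAI's actual GDP-Val code) of computing a pairwise
# win rate from blinded expert gradings. Each grading records which
# deliverable the expert preferred without knowing which one came from the model.

from dataclasses import dataclass

@dataclass
class Grading:
    task_id: str
    preferred: str    # "A", "B", or "tie", chosen blind by the expert grader
    model_label: str  # which label ("A" or "B") hid the model's deliverable

def win_rate(gradings: list[Grading], ties_count_half: bool = True) -> float:
    """Share of comparisons where the model's deliverable won (ties count as half)."""
    score = 0.0
    for g in gradings:
        if g.preferred == g.model_label:
            score += 1.0
        elif g.preferred == "tie" and ties_count_half:
            score += 0.5
    return score / len(gradings)

# Example: three blind comparisons, model preferred in two -> ~0.67 win rate
sample = [
    Grading("legal_brief_01", preferred="A", model_label="A"),
    Grading("nursing_plan_07", preferred="B", model_label="A"),
    Grading("slide_deck_12", preferred="B", model_label="B"),
]
print(f"Win rate: {win_rate(sample):.2f}")
```

In other words, the headline number is just the share of head-to-head comparisons where an expert, working blind, preferred (or tied on) the model's deliverable.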

[00:11:33] So now we're back to 5.2. Specifically, 5.2 Thinking is the one that's, like, using more reasoning. So Sam Altman tweeted it is the smartest generally available model in the world and in particular good at doing real-world knowledge work tasks. And then a lot of other OpenAI executives were tweeting similar things, like try to make slides, spreadsheets, code, and much more.

[00:11:55] So they're basically implying, like, go do the things you do every day. So [00:12:00] then, we'll put a link to this, but they actually have a blog post introducing GPT-5.2, and there's a few things I wanted to highlight from this. So they said the average ChatGPT Enterprise user, and Mike, you referred to this, we'll touch on this report a little bit later on.

[00:12:13] But one of the highlights of this enterprise user study they did is that AI saves 40 to 60 minutes a day for the average user, and heavy users say it saves them more than 10 hours a week. Then OpenAI went on to say, we designed GPT-5.2 to unlock even more economic value for people. It's better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long context, using tools and handling complex multi-step projects.

[00:12:42] Now note here, and I'm gonna come back to this, the positive tone of all of this. So, we wanna unlock even more economic value for people. Notice they don't say we wanna replace more jobs, but, like, we'll get to that. It says GPT-5.2 Thinking is the best model yet for real-world professional [00:13:00] use on their evaluations measuring well-specified knowledge work tasks across 44 occupations.

[00:13:07] GPT-5.2 Thinking, again, this is the reasoning version of the model, sets a new state-of-the-art score and is our first model that performs at or above a human expert level. GPT-5.2 Thinking produced outputs for GDP-Val tasks at greater than 11 times the speed and less than 1% the cost of expert professionals.

[00:13:31] And I love the positive spin on this, suggesting that when paired with human oversight, GPT-5.2 can help with professional work. Can help with professional work. Okay. GPT-5.2 Thinking hallucinates less than 5.1: on a set of de-identified queries from ChatGPT, responses with errors were 30% lower.

[00:13:55] So they're actually at a 6.2% error rate versus 8.8 [00:14:00] on the previous version. For professionals, this means fewer mistakes when using the model for research, writing, analysis and decision support, making it more dependable for everyday work. Okay, so just to recap there, Mike, and then I'll touch on the 10-year thing.

[00:14:12] So in essence, what's happening is they built a model that they're fine-tuning to do more human work. So for the first few years of this rapid escalation of the model improvements from all these different labs, it was all about these benchmarks, these known evals in the industry that were in essence IQ tests.

[00:14:32] And now they're moving past that, which is what we've said for the last year and a half: like, okay, we need to start measuring against real work, 'cause that's how we're gonna know when economic disruption is around the corner. And I would say we're there. Like, again, they're being very positive, I would say, with their findings and not illuminating the potential challenges.

[00:14:54] But again, we'll talk about that in a minute. So Mike, any thoughts? Have you had any experience with 5.2 yet you [00:15:00] wanted to touch on before I, you know, jump into the 10-year thing?

[00:15:02] Mike Kaput: Yeah, we've talked about this before, but it just really emphasizes again and again to me the need, at companies and in your own workflow, for really formal benchmarks, or like standardized benchmarks, you can use to rate how it works in your own work.

[00:15:18] So I have a standard set of prompts and workflows I'm always testing with new models, so that at the very least, subjectively, I can say, okay, this outcome is the same, worse, or better than the previous model. That's really, really helpful because, admittedly, after that, the rest is kind of just vibes, and the vibe is:

[00:15:36] 5.2 seems really smart, seems very useful for the things I'm using it for. Seems like a good model, but beyond that, unless you're evaluating it in the context of your particular work, it's really hard to translate the metrics, the evals, what they publish, into, hey, is this better than some other model?

[00:15:54] The answer is, it depends. 
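For anyone who wants to operationalize the kind of standing benchmark Mike describes, here is a minimal sketch. The prompts, the ask_model callable, and the 1-to-5 rating are placeholders for whatever provider SDK and scoring scheme you actually use; this is an illustration, not an official tool from OpenAI or SmarterX.

```python
# Minimal sketch of a personal model benchmark: run the same standard prompts
# against each new model, store the outputs, and record a quick subjective
# rating so releases can be compared side by side.
# ask_model() is a placeholder; wire it to whatever provider SDK you use.

import json
from datetime import date
from pathlib import Path
from typing import Callable

STANDARD_PROMPTS = {
    "email_draft": "Draft a follow-up email to a prospect who went quiet after a demo.",
    "summary": "Summarize the attached meeting notes into five bullet points.",
    "analysis": "Given this CSV of monthly leads, identify the three biggest trends.",
}

def run_benchmark(model_name: str, ask_model: Callable[[str, str], str],
                  out_dir: str = "model_benchmarks") -> Path:
    """Run every standard prompt through one model and save outputs plus ratings."""
    results = []
    for task, prompt in STANDARD_PROMPTS.items():
        output = ask_model(model_name, prompt)
        rating = int(input(f"[{model_name} / {task}] rate 1-5: "))  # subjective score
        results.append({"task": task, "prompt": prompt, "output": output, "rating": rating})
    path = Path(out_dir) / f"{model_name}_{date.today()}.json"
    path.parent.mkdir(exist_ok=True)
    path.write_text(json.dumps(results, indent=2))
    return path
```

Comparing the saved files for last month's model and this one gives you the same, worse, or better read Mike is describing, grounded in your own tasks rather than published evals.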

[00:15:56] Paul Roetzer: Definitely. Yeah, we've touched on that a few times. I've alluded to the fact we're [00:16:00] working on some stuff to try and help people with that. So yeah, nothing to announce, you know, here in December, but I think early next year that's an area we're gonna be focused more on: trying to help organizations and leaders figure out ways to establish these benchmarks for their own sake, for personalized use cases within the organization.

[00:16:17] I think it's really, really important. Okay, so then on the 10-year thing. So OpenAI was started, you know, Sam Altman tweeted on December 11th, 2015, that they had created this nonprofit. So I thought it was really interesting. I went back and reread the blog post that announced the formation of OpenAI, and then Sam Altman's post on December 11th, 2025, that talked about sort of where they've gone.

[00:16:43] So I'll just read a few of these, 'cause again, if people aren't familiar with that moment back in 2015 when this all sort of came to be, some of this is really good context for you to understand. So again, just for a real brief background, 2011, 2012 was sort of the moment when [00:17:00] people realized that deep learning might work.

[00:17:01] Specifically, there was a breakthrough in image recognition from Geoffrey Hinton and Ilya Sutskever and their team, and that led to this, like, rapid escalation of trying to figure out ways to apply deep learning to language and to other modalities. I started studying AI right around that time.

[00:17:18] So 2011, 2012 is when I started sort of researching AI and trying to tell the story of it and figure out its application to our own business. So by 2014, I had written about AI in my second book, The Marketing Performance Blueprint was the name of that book. And so that was about a year prior to the introduction of OpenAI.

[00:17:38] But even back then, like, I remember vividly, there was already lots of progress. And when OpenAI came out, I was like, this is interesting. So here's the quick background of what it said. Again, this is December 11th, 2015, a blog post introducing OpenAI. Now, as I'm reading some of these excerpts, consider where we are today.

[00:17:57] So it says, OpenAI is a nonprofit [00:18:00] artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.

[00:18:17] We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and structure are right. We hope this is what matters most to the best in the field. An AI technique explored for decades, deep learning started achieving state-of-the-art results in a wide variety of problem domains.

[00:18:43] In deep learning, rather than hand-code a new algorithm for each problem, you design architectures that can twist themselves into a wide range of algorithms based on the data you feed them. This approach has yielded outstanding results on pattern recognition problems such as recognizing objects in images, machine translation and [00:19:00] speech recognition.

[00:19:01] But we've also started to see what it might be like for computers to be creative, to dream and to experience the world. AI systems today have impressive but narrow capabilities. It seems that we'll keep whittling away at their constraints, and in the extreme case, they will reach human performance on virtually every intellectual task.

[00:19:22] It's hard to fathom how much human level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly. Because of AI's surprising history, it's hard to predict when human level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.

[00:19:46] We're hoping to grow OpenAI into such an institution. As a nonprofit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog [00:20:00] posts, or code, and our patents, if any, will be shared with the world. By the way, nothing in that paragraph is true anymore.

[00:20:07] OpenAI's co-chairs are Sam Altman and Elon Musk. That also is no longer a thing. And then, this is interesting, again, people forget who else was involved early on: Sam, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services, Infosys and YC Research are donating to support OpenAI.

[00:20:27] In total, these funders have committed $1 billion, although we expect to only spend a tiny fraction of this in the next few years. So that was back then. And now, fast forward: 10 Years is the name of the blog post that Sam published last week. It says, OpenAI has achieved more than I dared to dream possible.

[00:20:44] We set out to do something crazy, unlikely and unprecedented from a deeply uncertain start, and against all reasonable odds, with continued hard work, it now looks like we have a shot to succeed at our mission. We pressed on and made the technology better, and we launched [00:21:00] ChatGPT three years ago. The world took notice, and then much more when we launched GPT-4.

[00:21:05] All of a sudden, AGI was no longer a crazy thing to consider. These last three years have been extremely intense and full of stress and heavy responsibility as technology has gotten integrated into the world at a scale and speed that no technology ever has before. As we wrestle with the question of how to make AI maximally beneficial to the world, we developed a strategy of iterative deployment, where we successively put early versions of the technology into the world so that people can form intuitions and society and technology can co-evolve.

[00:21:38] In 10 more years, as you alluded to, Mike, I believe we are almost certain to build superintelligence. I expect the future to feel weird. In some sense, daily life and the things we care about most will change very little, and I'm certain we will continue to be much more focused on what other people do than we will be on what machines do.

[00:21:57] In some other sense, the people of [00:22:00] 2035 will be capable of doing things that I just don't think we can easily imagine. Our mission is to ensure that AGI benefits all of humanity. So just like such an interesting dichotomy between what they set out to do and where they are today. And again, if you've been listening to the podcast, you know, for the last couple years, you're familiar with the background.

[00:22:19] But even for me, like, you know, I've been living this stuff for 13, 14 years. It's so fascinating to take these like snapshots in time and go back and see what was being said, what was being thought about, who was involved, and then like zoom back into where we are today and see, you know, kind of how things have changed.

[00:22:35] Mike Kaput: Yeah, it's been a pretty incredible 10 years for OpenAI, especially the last three. It'll be interesting to see what the next three look

[00:22:42] Paul Roetzer: like. Yeah. 

[00:22:43] Disney-OpenAI Deal

[00:22:43] Mike Kaput: Alright, next up. Another big story. This week Disney has become the first major Hollywood studio to license its content to an AI video platform.

[00:22:52] Under a new three-year agreement, Disney will invest $1 billion in OpenAI and bring its intellectual property to Sora, the [00:23:00] startup's video generation tool. Starting in early 2026, Sora users will be able to generate short videos featuring more than 200 characters from Disney, Pixar, Marvel, and Star Wars franchises.

[00:23:13] This deal explicitly excludes the use of human talent, likenesses or voices. However, the move has immediately sparked some pretty big backlash from labor groups. CEO Bob Iger frames the partnership as a way to responsibly extend storytelling. However, a representative from the Animation Guild publicly noted that the artists who created these characters won't see a dime from the new content.

[00:23:38] So this partnership actually arrives even as Disney is aggressively protecting its rights elsewhere. The company recently sent a cease and desist letter to Google accusing the search giant of using Disney content to train its models without permission. So Paul, maybe connect the dots for me here, because Disney is suing AI companies, it's sending cease and desist letters to [00:24:00] Google.

[00:24:00] Yet at the same time, they're happy to pay OpenAI to let them use their IP, which OpenAI was almost certainly doing already. What's going on here?

[00:24:11] Paul Roetzer: Yeah. So, this is, I mean, exactly the outcome we've talked about many, many times on the show of what I assumed would happen, which was licensing deals would be had, because in the end, these companies need each other.

[00:24:22] So I had tweeted: such a fascinating legal and business case. Use IP without permission to train AI models, get rewarded with a $1 billion equity and licensing deal. Right? Right. So my assumption here is basically the same deal is on the table for Google and they haven't arrived at an agreement yet, so now they're just gonna sue them into, like, forcing a deal.

[00:24:41] So there was a really good article in, let's see, so Variety had the thing about the Google thing. Yeah, but first, lemme go back here. So OpenAI published a post. There's lots of articles about this, you can go read a number of 'em, we'll put some links in the show notes, but in the Sora agreement, there were a few additional [00:25:00] details, Mike, that I thought were really interesting.

[00:25:01] So it says the agreement will make a selection of these fan-inspired Sora short-form videos available to stream on Disney Plus. So there's this integration between Sora and Disney Plus, and then you'll also actually be able to create images using ChatGPT images. So it's not just Sora, it's gonna be integrated into ChatGPT overall.

[00:25:22] Then it also said that Disney will become a major customer of OpenAI, using its APIs to build new products, tools, and experiences, including for Disney Plus, and deploying ChatGPT for its employees. Sam Altman is quoted and says, this agreement shows how AI companies and creative leaders can work together responsibly to promote innovation that benefits society, respect the importance of creativity and help works reach vast new audiences.

[00:25:50] It does say the transaction is subject to the negotiation of definitive agreements. And then it says, as part of the agreement, OpenAI is committed to continuing its industry leadership in implementing [00:26:00] responsible measures to further address trust and safety, including age-appropriate policies and other reasonable controls across the service.

[00:26:09] In addition, OpenAI and Disney have affirmed a shared commitment to maintaining robust controls to prevent the generation of illegal or harmful content, in other words, making Disney characters do inappropriate things; to respect the rights of content owners in relation to the outputs of the models; and to respect the rights of individuals to appropriately control the use of their voice and likeness.

[00:26:31] That means nothing. That whole paragraph is just like, oh God. Okay. So then on the Google front, so Variety, I will put the link in the show notes, it says, Disney accuses Google of using AI to engage in copyright infringement on a massive scale. It says, on Wednesday evening, attorneys for Disney sent a cease and desist letter to Google demanding that Google stop the alleged infringement in its AI systems.

[00:26:55] The letter says Google is infringing Disney's copyrights on a massive scale by copying a [00:27:00] large corpus of Disney's copyrighted works without authorization to train and develop generative AI models and services, and by using AI models and services to commercially exploit and distribute copies of its protected work to consumers,

[00:27:13] in violation of Disney's copyrights. Now, everything in that paragraph is also true of OpenAI, so there's, like, no difference here yet, besides a billion dollars. The letter continued: Google operates as a virtual vending machine capable of reproducing, rendering and distributing copies of Disney's valuable library of copyrighted characters and other works on a mass scale.

[00:27:34] And compounding Google's blatant infringement, many of the infringing images generated by Google AI services are branded with Google's Gemini logo, falsely implying that Google's exploitation of Disney's IP is authorized and endorsed by Disney. Hmm. The allegations against Google follow cease and desist letters that Disney sent earlier to Meta and Character.ai, as well as litigation Disney filed together with NBCUniversal and [00:28:00] Warner Bros. Discovery against AI companies Midjourney and MiniMax.

[00:28:04] I don't even know who that is, never heard of that one, alleging copyright infringement. Asked for comment, a Google spokesperson said, we have a longstanding, so this is, again, Google's perspective here, we have a longstanding and mutually beneficial relationship with Disney, and we'll continue to engage with them.

[00:28:19] More generally, we use public data from the open web to build our AI and have built additional innovative copyright controls like Google-Extended and Content ID for YouTube, which give sites and copyright holders control over their content. That's an interesting thing. Bob Iger, Disney CEO, in an interview on CNBC said, we've been aggressive at protecting our IP and we've gone after other companies that have not honored our IP, not respected

[00:28:46] our IP, not valued it. And this is another example of doing that. They said Disney has been in discussions with Google, basically expressing our concerns about its AI systems' alleged infringement. And ultimately, because we didn't really make any progress, aka get a billion dollar [00:29:00] licensing deal, the conversations didn't bear fruit.

[00:29:02] We felt we had no choice but to send them a cease and desist letter. This will all end in a licensing deal. Like, right, there's no way that this becomes anything else. So then, by the next day, there's another article on Variety that says Google removes AI videos of Disney characters after cease and desist letter.

[00:29:18] They removed videos that depicted Disney-owned characters after receiving the letter. The links were still working on Thursday, but now reroute to a message that reads, this video is no longer available due to a copyright claim by Disney. Now, keep in mind, all this is saying is Disney probably included a bunch of links as examples in their cease and desist letter.

[00:29:37] Yeah. And they just basically turned those links off. They did not stop the model from doing it. Now again, keep in mind, if you're newer to how this stuff works, you can't take models trained on Disney characters, which they all are, like, Gemini did it, ChatGPT is trained this way, they're trained on the characters.

[00:29:55] You can't just extract those characters from the model. So there is [00:30:00] no way for Google or OpenAI to go in, because Disney threatened them or did a licensing deal, and remove the ability to do the thing it's trained to do. All they can do is put guardrails in so that if someone prompts it in a certain way or asks for a specific character, they can have it not return it.
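To picture what "putting guardrails in" means in practice, here is a minimal sketch of a prompt-level filter. The blocklist, the generate callable, and the refusal message are all illustrative placeholders; real systems use classifiers and output-side checks rather than a simple string match, and nothing here describes Google's or OpenAI's actual implementation.

```python
# Minimal sketch of a prompt-level guardrail: the underlying model still
# "knows" the characters from training; the only lever is refusing or
# rerouting requests that appear to target protected IP.

BLOCKED_CHARACTERS = {"mickey mouse", "elsa", "darth vader"}  # illustrative list

def guarded_generate(prompt: str, generate) -> str:
    """Refuse requests that name protected characters; otherwise pass through."""
    lowered = prompt.lower()
    if any(name in lowered for name in BLOCKED_CHARACTERS):
        return "Sorry, this request references protected characters."
    return generate(prompt)  # generate() is a placeholder for the actual model call
```

Which is also why, as Paul says next, "there's ways around that": a filter like this sits in front of the model, it doesn't remove anything from the model, and indirect prompts can slip past it.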

[00:30:18] So it won't do it. But there's ways around that. So the core of all this, they absolutely trained on this intellectual property without permission. And now the media companies, Disney has, you know, as many lawyers and money to litigate as anybody. They can come after these companies for doing it, but all they really probably want out of all this is a licensing deal.

[00:30:39] Yeah. And money. So I assume we're just gonna see, I think, a bunch of people that can't afford to fight it go out of business, the ones who have no leverage. This is the example I gave of a Perplexity, or, like, I can see a Midjourney, somebody like that, where it's just like, you can't fight this, like you don't have the money for it.

[00:30:55] And yeah, then it just becomes, do you have enough leverage that they care? Like, do they even want a licensing [00:31:00] deal with you, or do they just wanna put you out of business so they can do a bigger deal with Google and OpenAI? Like, do they care about, like, Midjourney or Character.ai? They don't. It's just, I mean, they're good companies.

[00:31:10] They got a bunch of money, but in the grand scheme of things, they're nobody compared to these companies. So, 

[00:31:17] Mike Kaput: Yeah, I know you've been predicting this for quite some time. It is pretty stark to just see it happen. It does feel like, if you're in media and entertainment, my guess without being an insider in those industries is this is a bit of a crossing-the-Rubicon moment, because there's no way that this doesn't end in licensing deals for anyone, like you said, who has very valuable IP.

[00:31:39] Like this is the direction this is going, whether we like it or not. 

[00:31:42] Paul Roetzer: Yep. Barring a Supreme Court decision that says the models were allowed to be trained on other people's intellectual property, and they're allowed to produce that intellectual property in images and videos, I don't know how it doesn't end in licensing deals or people being put out of business.

[00:31:57] Right. It, it just, [00:32:00] again, like I'm not an attorney, but I don't know what the other alternative is besides those two things, basically. And so I think what ends up happening is like some of these smaller image and video generation companies who can't fight somebody like a Disney can't raise enough money to fight them.

[00:32:16] You just get acquired, probably, before you get put outta business. I don't know. I don't know. So I think that's basically the third option: a lot of, like, acquisitions or talent acquisitions, basically acquihires. That's what we've seen in this space.

[00:32:28] Mike Kaput: So yeah, the Midjourneys and Runways of the world, I would guess, in terms of image and video.

[00:32:33] Paul Roetzer: Good companies, they'll stay in the space, continue to innovate in the space, but if Disney comes knocking, like, how do you fight?

[00:32:41] Trump Executive Order to Override State AI Laws

[00:32:41] Mike Kaput: All right, our third big topic this week: President Trump has signed an executive order designed to dismantle state-level AI regulations in favor of a single federal standard. The directive, which was signed Thursday of last week, empowers the Attorney General to sue states and overturn laws that do not, quote, support, end quote, [00:33:00] American AI dominance.

[00:33:02] It also instructs federal regulators to withhold broadband and other funding from states that maintain conflicting rules. Now, the administration here argues that a patchwork of 50 separate regulatory regimes stifles innovation and threatens the US' competitive edge against China. While the order aims to override strict measures in states like California and Colorado, AI czar David Sacks clarified that it includes exemptions for laws governing child safety and local infrastructure.

[00:33:34] This move has drawn support from tech investors who describe the current landscape as a startup killer. However, it does face immediate pushback from governors and lawmakers in both parties who contend the order oversteps federal authority. Legal experts actually predict this will face swift challenges in court arguing that only Congress has the power to preempt state legislation.

[00:33:56] Now, Paul, we don't know if this order will stand as is given the [00:34:00] possible legal challenges, but it certainly seems like a pretty big win for the acceleration side of the debate that is very heavily influential in this administration. Can you maybe talk me through the implications here? Should this remain in effect?

[00:34:16] Paul Roetzer: Again, so I'll preface all this: I'm not an attorney. So there's a few things going on here that people need to be aware of. So first, the threat of legal action is part of how this administration functions. Things that they know are probably illegal to do, they will do anyway, and they just, like, tie things up in court.

[00:34:39] So the threat to states to withhold funding, even if the withholding of that funding is illegal, that is the playbook they have followed on many core agenda items for the administration. So just because Congress is supposed to be the one involved here does not mean they care, and it doesn't even matter if it's legal [00:35:00] or illegal.

[00:35:01] They're just going to do the thing that's gonna, hopefully, you know, put some pressure on people. So the New York Times had an article, Trump Signs Executive Order to Neuter State AI Laws. In that, they said Trump was quoted in the Oval Office when he was signing this, saying it's gotta be one source.

[00:35:19] You can't go to 50 different sources. David Sacks, the, you know, AI and crypto czar, he tweeted ONE RULEBOOK FOR AI, all caps, and then he went through, like, this long thing explaining why. So the New York Times expands on this and said, states have rushed to fill a void of federal regulation with their own laws, meaning there aren't federal regulations around this, on AI safety, requiring certain safety measures from companies and putting guardrails around the way the technology can be used.

[00:35:46] This year, all 50 states and territories introduced AI legislation and 38 states adopted about 100 laws, according to the National Conference of State Legislatures. As examples, California has [00:36:00] passed a law that requires the biggest AI models, including OpenAI's ChatGPT and Google's Gemini, to test for safety and to disclose the results.

[00:36:08] South Dakota passed a law banning deepfakes, which are realistic AI-generated videos, in political advertisements within months of an election. Utah, Illinois and Nevada passed laws related to AI chatbots and mental health, requiring disclosures that users are engaging with chatbots and adding restrictions on data collection.

[00:36:27] So now, to try and, like, get at what are they actually doing with this executive order, I read the executive order. Anybody can, we'll put the link in the show notes. It's titled Ensuring a National Policy Framework for Artificial Intelligence. I just wanna make it abundantly clear: this executive order is not about establishing that national framework.

[00:36:48] It is about stopping regulation and accelerating AI at all costs. So, and just to reinforce that point, I'm going to read to you excerpts from the executive order. It says, we remain in the [00:37:00] earliest days of this technology revolution and are in a race with adversaries for supremacy within it. That is the opening of the executive order. To win, United States

[00:37:09] AI companies must be free to innovate without cumbersome regulation. But excessive state regulation thwarts this imperative. First, state-by-state regulation by definition creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for startups. Second, state laws are increasingly responsible for requiring entities to embed ideological bias within models.

[00:37:34] Ideological bias, in this administration's viewpoint, means not agreeing with our viewpoint on things. So the bias that they're okay with is things that align with their views. And again, this is just kind of how they position it. So it says, my administration, again, this is Trump, must act with the Congress to ensure that there is a minimally burdensome national [00:38:00] standard.

[00:38:00] So again, they're acknowledging that we need Congress for this, not 50 discordant state ones. The resulting framework must forbid state laws that conflict with the policy set forth in this order. That framework should also ensure that children are protected, censorship is prevented, copyrights are respected, that's funny, and communities are safeguarded.

[00:38:19] A carefully crafted national framework can ensure that the United States wins the AI race. Okay, so now, again, if you're new to this show, Mike and I make every effort possible to present anything political in as objective a way as possible and just tell you, here are the facts around this. So, again, they don't have a plan for a national framework, nor do they actually intend to have a plan.

[00:38:45] From what we can tell from anything that's been disclosed so far, because any plan to restrict acceleration of AI would reduce, in their minds, their ability for AI supremacy over China. Mm. That's just, like, straight up. Okay. So then, what does the [00:39:00] executive order actually do? The entire thing is about penalizing states.

[00:39:05] So again, this is the whole thing about setting a national standard, and yet nothing in here talks about setting a national standard. Section two, Policy: It is the policy of the United States to sustain and enhance the US' global AI dominance through a minimally burdensome national policy framework for AI.

[00:39:21] Minimally burdensome national policy framework for AI is their wording. Section three, AI Litigation Task Force. So what is this gonna do to establish this policy? Nothing. Within 30 days of the date of this order, the Attorney General shall establish an AI Litigation Task Force. So we're not establishing a national regulation task force.

[00:39:41] We're establishing a task force to sue states away from doing anything. So there's an AI Litigation Task Force whose responsibility is to challenge state AI laws. Section four, Evaluation of State Laws: within 90 days of the order, the Secretary of Commerce, along with a bunch of other executives, shall publish an [00:40:00] evaluation of existing state AI laws that identifies onerous laws that conflict with our policies.

[00:40:05] Section six: within 90 days of the publication, the chairman of the Federal Communications Commission shall initiate a proceeding to determine whether to adopt a federal reporting and disclosure standard for AI models. That's the closest we can get to anything about, like, any kind of national standard: within 90 days, they're gonna basically figure out if they should have one.

[00:40:23] And then Section eight says the Special Advisor for AI and Crypto, which I think is David Sacks. Yeah. And the Assistant to the President for Science and Technology shall jointly prepare a legislative recommendation establishing a uniform federal policy framework for AI. So there we go. There's our, you know, uniform federal policy that preempts state laws that conflict with the policy.

[00:40:41] Now, mind you, this is the only thing in this executive order that has no timeline tied to it. So it's like, okay. So yeah, it is what it is. And again, from a politics standpoint or a policy standpoint, I don't know what the right answer is. I actually agree that it probably is gonna be challenging to [00:41:00] have this state-by-state thing.

[00:41:01] I mean, there's 1,200 pieces of legislation at different levels across 50 states right now. That seems impossible to manage to me. My alternative, though, is like, well, then what is the plan? Like, if you're going to preempt all of this and you're going to go legally after states that do something about it, then have a plan for, like, what that is, or at least a framework of what that plan could be.

[00:41:24] But I don't believe that they want one. Like, I don't think that that is the intention at all of what they're gonna do. And if it is, it's gonna be minimally burdensome. So we gotta talk about these things because they're important and it is an executive order. I just don't feel like this is gonna be anything other than litigation.

[00:41:45] The whole point of this is just to slow things down with courts. 

[00:41:50] Mike Kaput: It's really interesting to also read through the full David Sacks tweet, which we'll put in the show notes, because I think it's pretty [00:42:00] interesting and you have to consider all the different perspectives. But also, I thought just from a political strategy perspective, it gave us a very interesting look at what they're trying to preempt in terms of opposition.

[00:42:12] And from what I would guess, this entire tweet is about preventing opposition in their own party, because the things that they are definitely pointing out, the concerns, child safety, a big issue that cuts across party lines, could engender some rebellion within the ranks, because it's a huge issue.

[00:42:30] So they carve out the exception for those kinds of rules. Communities: they talk about how AI preemption would not apply to local infrastructure, i.e., people pissed about data centers, right? Yep. Creators obviously cut across different lines as well. They say, okay, copyright law is already federal, so there's no need for preemption.

[00:42:48] And then that censorship issue is like, you know, rally the troops around the idea that these models are biased. And they even say, look, straight up blue states will not make sure they're not censored. So, [00:43:00] right. Again, regardless of where you come down in terms of the political aisle, really interesting to see that strategic framing.

[00:43:06] Paul Roetzer: Yeah. It reinforces, Mike, what we've said many times on the show recently: nobody on either side of the aisle knows what to do with AI. Like, they don't know what voters care about. Yeah. And so you're gonna have Republicans that hate this executive order. Yep. You're gonna have Democrats that hate it, and then you're gonna have a mix of them that, like, kind of like the direction.

[00:43:24] So there's no agreement politically, and it's becoming a very divisive issue within politics. And like I've said many times, I think by spring of next year, AI is just gonna, like, blow up from a political perspective. And again, it's like people don't know which side they fall on. Like, one of the biggest critics of this, I think, is Bannon, right?

[00:43:45] Like Steve 

[00:43:45] Mike Kaput: Bannon. Yeah. Steve Bannon's big, big within the MAGA movement.

[00:43:48] Paul Roetzer: Yeah. He hates what Trump's doing with AI, and they're, like, trying to divide MAGA over this. So again, from our perspective, Mike and I don't care to talk about politics. We've avoided it at all costs if we can. You as a listener don't [00:44:00] care what our political views are.

[00:44:01] So we are purely just trying to present this as, like, I don't know where it's gonna go with this, but here's the facts. And they put out an executive order that people on both sides of the aisle hate. There are supporters, but there's a lot of people who do not like this.

[00:44:17] OpenAI State of Enterprise AI Report

[00:44:17] Mike Kaput: Alright, Paul, let's dive into our rapid fire this week.

[00:44:20] So first up, OpenAI has released a new report offering a detailed look at how AI is being deployed within large organizations. This report is called the State of Enterprise AI. It combines usage data from more than 1 million business customers with survey responses from 9,000 workers across nearly a hundred companies.

[00:44:39] And according to these findings, enterprise adoption intensified over the last year. Total message volume on ChatGPT Enterprise grew eightfold, while the consumption of reasoning tokens, which is basically a proxy metric here for complex model usage, increased 320x per organization. Workers, [00:45:00] as we mentioned in a previous segment, report these tools save them an average of 40 to 60 minutes per day.

[00:45:05] Three quarters of respondents say the tech improves their speed or output quality, and the data also highlights a shift in job functions: coding-related queries from staff outside of engineering and IT roles rose by 36%, indicating non-technical teams are increasingly performing technical tasks.

[00:45:27] Now, interestingly, despite this growth, there's a pretty serious usage gap. The report identifies a frontier group of power users, the top 5%. They now send six times as many messages as the median worker. And Paul, I just couldn't help but think there's so much in here that provides powerful validation for things we have been talking about on this podcast for a while.

[00:45:49] I mean, basically the idea around frontier workers generating 6x more messages, and this is actually correlated directly with value, because [00:46:00] users who engage with a wider variety of tasks report saving 5x more time than those who use AI for only a few. We've basically been saying, you wanna get better at AI, wanna get more value out of it?

[00:46:09] Use it more. Use it for more things. Additionally, they said leading organizations, what they call frontier firms, are integrating AI more deeply. They generate 2x more messages per seat and they use custom GPTs 7x more than median companies. One last thing here: usage of custom GPTs and projects increased 19x to date.

[00:46:30] So again, it kind of validates what we have said about just how important these simple but powerful features are for enterprises. What jumped out to you in this report?

[00:46:40] Paul Roetzer: Yeah, I've made a note of that same thing, Mike, on the custom GPTs. Yeah, 'cause I just continue to be shocked at how little they have done to support them, and yet their own data.

[00:46:49] I mean, it said approximately 20% of all enterprise messages were processed by a custom GPT or project. Unreal. And yet the only thing they've done is, like, integrate the upgraded models into them. Like, that's basically it [00:47:00] for two years. So yeah, I don't know. I think in some ways it validated what we've seen. Like, my personal experience has been very few workers and leaders have ever built a custom GPT.

[00:47:10] Like, it's weird that they're starting to show this increase, because still, when we go out and talk to people, people don't know what they are or how to use them, in part, yeah, 'cause I don't think OpenAI talks about them, and maybe this is the start of a shift for them. And then, very rarely are people knowingly using the reasoning capabilities.

[00:47:28] Now again, they're doing more and more to just integrate it directly in where the thinking model's baked in. Gemini has been doing that now since 2.5, where the reasoning is just baked right into it. But like, that's like the subconscious thing versus like people going and using like deep research as an example.

[00:47:42] The other thing is, when you download the report, it's like a 24-page report, you can get through it pretty quick. There were a few data points I had not seen before, so I just wanted to note those. They said more than 1 million business customers now use OpenAI's tools, and that means business accounts, because then it said they serve more than 7 [00:48:00] million ChatGPT workplace seats.

[00:48:02] Mm. So again, it's the first time I've seen that data. Sam Altman had a lunch last week that I was reading about on the Big Technology blog. And he said that enterprise AI is, this article said enterprise AI is the fastest growing software category in history, expected to bring in 37.5 billion next year.

[00:48:22] Hmm. And they were citing Gartner. That's up from basically zero in 2022. But at that lunch, Sam made it clear that OpenAI has a major focus on enterprises going into next year. And obviously this report is part of that. Now, one thing to note, we'll put the link in the show notes, is there was a Wired article last week that said, OpenAI staffer quits, alleging company's economic research is drifting into AI advocacy.

[00:48:48] So they were actually talking about a couple of people. One of them, I think the guy's name was Tom Cunningham, who wouldn't comment for the article, but basically said that OpenAI is talking about their technology [00:49:00] in these glowing, very positive terms, despite the fact that their own research is showing the disruptive effect this is gonna have on jobs and the economy.

[00:49:07] And they're basically burying that data. So I thought that was interesting. The article said, and it actually touched on the Trump administration, that as the administration has championed AI's potential, White House advisors have pushed back on claims the technology will eliminate jobs, which has become an increasingly urgent issue for many Americans.

[00:49:23] Roughly 44% of young people in the US fear that AI will reduce job opportunities, according to a November survey from Harvard Kennedy School's Institute of Politics. Then one other note here, Mike, on just, like, the adoption: there was a new Gallup survey that said the percentage of US employees who reported using AI at work at least a few times a year, which, I don't know how relevant that data point is really, went from 40 to 45%.

[00:49:48] That's actually, okay, now that I'm, like, looking at this data out loud: only 45% of people say they use it a few times a year. Yeah, that's actually terrifying. Frequent use, a few times a week, grew [00:50:00] from 19% to 23%, while daily use moved less, ticking up from 8% to 10%. That's wild. So zoom out, like, ignore the OpenAI data for a second, which is talking to people who have ChatGPT licenses.

[00:50:13] Basically, the Gallup poll, which is more broad, a nationally representative survey of 23,000 US adults employed full- and part-time, conducted in August on a Gallup panel, found only 10% are using AI daily at work. Yeah. So people always, every time I talk to people, they're so afraid that they've fallen behind.

[00:50:35] They hear these stats about all this usage, and I'm like, no. Like, you don't understand how early we are. And so this Gallup data might actually be the most interesting data point we're sharing in this whole section. Daily use only at 10%. And it went from eight to 10%. Yeah. Like, that's nothing. Two percentage points.

[00:50:54] It said in Q3 2025, 35% of employees said their organization has implemented AI [00:51:00] technology to improve productivity, efficiency, and quality. 40% said their organization had not, and 23% said they did not know. So 40% of people said the organization hasn't implemented AI, and 23% said they don't know, which means, well, at least they haven't to them.

[00:51:17] So that's 63% of organizations. Yeah. So again, every time you think you're behind, like, just, you're not. And then AI use in the workplace continues to grow, they say, with 45% of employees saying they used AI at least a few times a year. Even so, for daily use you see the 10% number. So again, you can't look at a single research report and think you understand the market.

[00:51:40] There's all these different perspectives, but this Gallup poll is really interesting data. And again, if you're just getting started, don't worry about it. You can catch up and get ahead. Most people still haven't figured this out and still are not using it daily, no matter what AI bubble you are living in.

[00:51:57] Yeah. You probably think you and your organization are way [00:52:00] behind it and you're not. 

[00:52:01] Mike Kaput: And I would also just emphasize, on the flip side, if you're someone that's saying, like, AI impact, or economic impact, or impact on jobs, et cetera, in organizations has plateaued or isn't materializing, I would probably point you to this and say, like, it is the beginning of inning one.

[00:52:17] Paul Roetzer: Yeah. Or like that demand is gonna somehow stop for NVIDIA's chips and computing power and like, oh my 

[00:52:23] Mike Kaput: God, anything can happen. But I would just encourage you to consider the vast majority of companies are barely doing anything yet with AI.

[00:52:30] Paul Roetzer: That's a good point, Mike. That's the thing I always stress to people every time I was like, oh, is Nvidia stock gonna stop going up?

[00:52:35] Is Google gonna be worthwhile? Like, are we at kind of like the end? I'm like, do you realize how low adoption is for AI? And of the 10%, like, how many of those people actually know the full capability of what they have? Like, it's so early. Yeah. Another

[00:52:50] Mike Kaput: huge portion of that 10% are probably using it daily and being like, oh, this is great.

[00:52:54] I write emails with it. 

[00:52:55] Paul Roetzer: Right? Yeah. For like meeting summaries and emails, which, yeah, which is fine. Like those are fine, [00:53:00] but like that is not the full capability. 

[00:53:03] Google Cloud ROI of AI Reports

[00:53:03] Mike Kaput: All right. Our next topic, some more data here. There's a new series of reports out from Google Cloud that details the return on investment for generative AI across six major global industries.

[00:53:14] So they did research reports on each of these industries, where they surveyed thousands of executives in sectors that include things like finance, manufacturing, and healthcare to track the industry shift from simple chatbots to more agentic AI usage. So according to this data, agentic AI adoption is surging: over half of executives in manufacturing, telecoms, and retail report

[00:53:35] they're already using agentic systems in production. The financial impact is also becoming clearer. 78% of manufacturing and retail leaders specifically report they're seeing ROI right now, while majorities in every sector surveyed say their projects are now just taking three to six months to move from idea to production.

[00:53:54] So cutting down kind of the time it takes to get things out into the market or get them [00:54:00] live. Roughly half of leaders in finance and healthcare plan to allocate over 50% of their future AI spend specifically to agents. However, despite rapid uptake, data privacy and security remain the number one concern for executives evaluating providers across every sector studied.

[00:54:17] So Paul, also some good work from Google Cloud on the ROI of AI. We will include links, a LinkedIn post from Google Cloud that has all six of these linked. There's financial services, retail/CPG, healthcare/life sciences, telecoms, manufacturing, media/entertainment. What did you kind of, how do you see this, like, complementing what we just learned about enterprise AI usage from OpenAI?

[00:54:42] Paul Roetzer: I think they're good quick reads. I mean, they've got some solid use case examples. They address some of the different challenges and then share the data by industry based on the actual, you know, surveys that they did. so I think this kind of stuff can be really helpful if you're trying to make the business case for AI and you're in one of these industries, it's like good data points to pull in.

[00:55:00] Just noting AI agents specifically, since they're so, you know, directly related to these reports: they define them as specialized LLMs, or large language models, that have specific roles, context, and objectives to independently plan, reason, and perform tasks with access to data, function-call APIs, and can interact with other AI agents if needed.

[00:55:23] These can be prebuilt or in-house built agents. That's a really expansive explanation of AI agent. It is interesting, though, that with the "independently plan" they've, like, eliminated "autonomous." I don't remember if Google Cloud was using that as part of their definition or not before, but commonly what these companies will do is they'll, you know, say autonomously perform tasks.

[00:55:48] So it seems like they've very specifically adjusted this definition to give a little bit more clarity there and not overpromise what they're capable of doing. But that's what the reports are based on. [00:56:00] Like, how much are you integrating AI agents, and here's what they consider those to be.

[00:56:03] Yeah. So, yeah, like I said, good quick reads. If you're in one of those industries, it's probably worth a quick download and go through it. I like that they put 'em all out at the same time. You can just go access 'em. Yeah. Super

[00:56:12] Mike Kaput: cool. 

[00:56:13] Paul Roetzer: Yep. 
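For readers trying to picture what that Google Cloud agent definition describes in practice, here is a minimal, hypothetical Python sketch of the pattern: an LLM with a role, context, and objective, plus a couple of function-call tools. The llm_call stand-in and both tools are invented placeholders, not Google Cloud's or any vendor's actual API.

```python
# Hypothetical sketch of the agent pattern in Google Cloud's definition:
# a specialized LLM with a role, context, and objective that can plan,
# call functions/APIs, and hand back a result. llm_call() and both tools
# are invented placeholders, not a real vendor API.
from typing import Callable

def lookup_inventory(sku: str) -> str:
    # Placeholder for a real data / API call the agent is allowed to make.
    return f"SKU {sku}: 42 units in stock"

def create_ticket(summary: str) -> str:
    # Placeholder for a second function-call tool.
    return f"Ticket created: {summary}"

TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_inventory": lookup_inventory,
    "create_ticket": create_ticket,
}

def llm_call(prompt: str) -> str:
    # Stand-in for whichever LLM the agent is built on. A real agent would
    # parse a structured plan or tool request out of the model's response.
    return "lookup_inventory: ABC-123"

def run_agent(objective: str, context: str, max_steps: int = 3) -> str:
    """Plan -> act -> observe loop under a fixed role and objective."""
    role = "You are an inventory support agent."
    observation = ""
    for _ in range(max_steps):
        prompt = (f"{role}\nObjective: {objective}\nContext: {context}\n"
                  f"Last observation: {observation}")
        decision = llm_call(prompt)
        if ":" not in decision:        # model returned a final answer
            return decision
        tool_name, arg = decision.split(":", 1)
        tool = TOOLS.get(tool_name.strip())
        if tool is None:
            return decision
        observation = tool(arg.strip())
    return observation

print(run_agent("Check stock for SKU ABC-123", context="Retail demo"))
```

In practice, the prebuilt or in-house agents the reports talk about would swap in real model calls and real APIs; the plan-act-observe loop is the part the definition is describing.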

[00:56:14] Microsoft Lowers AI Sales Expectations

[00:56:14] Mike Kaput: Alright, next up, maybe some data that shows the other side of AI ROI. Microsoft and some other enterprise software firms have basically pitched 2025 as the year of the AI agent.

[00:56:26] But a new report from The Information suggests adoption might be moving a little slower than predicted, at least for certain tools. Now, according to this report, Microsoft specifically has lowered its sales growth targets for certain AI agent products after multiple divisions missed their goals for the fiscal year ending in June.

[00:56:44] Corporate customers have reportedly pushed back, noting difficulties in measuring cost savings and technical challenges in getting the AI to reliably work with other applications. They cite a story from the private equity firm Carlyle, which reduced its spending on [00:57:00] Microsoft's Copilot Studio after struggling to connect the tool to necessary data sources.

[00:57:05] Now, in a statement, a Microsoft spokesperson pushed back on this, saying that the aggregate sales quotas for AI products have not been lowered, but they declined to address the specific reduction in growth targets cited by The Information. Now Paul, this definitely tells a different story than our last two segments.

[00:57:23] OpenAI and Google both claim people are getting tons of ROI from AI. Microsoft, however, at least through its actions, seems to be saying this isn't the case. What is the disconnect going on here?

[00:57:35] Paul Roetzer: I mean, this data does not jibe with what we heard earlier about the Gartner study, of enterprise AI being the fastest growing software category in history, $37.5 billion.

[00:57:43] Like, that doesn't make sense, that they'd be lowering. Now, I'm gonna say just at a very broad level, Mike, you and I do not use Copilot every day. Correct. Like, so I have no firsthand experience to say, is this a Copilot software product problem? [00:58:00] What I will say, from lots of firsthand experience with many organizations who have Copilot, is my interpretation is it is often far more a lack of change management and personalized use case education.

[00:58:16] Like, lack of AI literacy is actually probably the bigger problem. Now, that doesn't mean Copilot doesn't have issues, and maybe it's not as good as Gemini or not as good as straight-up ChatGPT Team or Enterprise. That's very possible. Even if it's not, let's say it's 80% of what ChatGPT Team or Enterprise is.

[00:58:35] If you just provided AI education and training and personalized use cases, I can almost guarantee you the value you get from Copilot would be greater than what is happening in the enterprises we talk to who have it and have not done those things, which is almost everyone. Now, to that point, Mike, I was almost crying laughing at this tweet.

[00:58:52] Like, it's one of the better tweets I've ever seen. So this guy Peter Ness, I don't know who he is. I'd never followed him [00:59:00] before, but this thing had over 21 million impressions when I pulled it this morning. So I'm gonna read the whole tweet 'cause it is classic, and we'll put the link in the show notes if you wanna share it internally.

[00:59:10] All right. So again, this is Peter. Last quarter I rolled out Microsoft Copilot to 4,000 employees. $30 per seat per month, $1.4 million annually. I called it digital transformation. The board loves that phrase. They approved it in 11 minutes. No one asked what it would actually do, including me. I told everyone it would 10x productivity.

[00:59:32] That's not a real number, but it sounds like one. HR asked how we'd measure the 10x. I said we'd leverage analytics dashboards. They stopped asking. Three months later, I checked the usage reports. 47 people had opened it. 12 had used it more than once. One of them was me. I used it to summarize an email I could have read in 30 seconds.

[00:59:51] It took 45 seconds, plus the time it took to fix the hallucinations, but I called the pilot a success. Success means the [01:00:00] pilot didn't visibly fail. The CFO asked about ROI. I showed him a graph. The graph went up and to the right. It measured AI enablement. I made that metric up. He nodded approvingly. We're AI enabled.

[01:00:13] Now, I don't know what that means, but it's in our investor deck. A senior developer asked why we didn't use Claude or ChatGPT. I said, we needed quote unquote enterprise grade security. He asked what that meant. I said, compliance. He asked which compliance? I said, all of them. He looked skeptical. I scheduled him a career development conversation.

[01:00:33] He stopped asking questions. Microsoft sent a case study team. They wanted to feature us as a success story. I told them we saved 40,000 hours. I calculated that number by multiplying employees by a number I made up. They didn't verify it. They never do. Now we're on Microsoft's website, quote unquote, Global Enterprise achieves 40,000 hours of productivity gains with Copilot.

[01:00:57] The CEO shared it on LinkedIn. He got [01:01:00] 3,000 likes. He's never used Copilot. None of the executives have. We have an exemption. Strategic focus requires minimal digital distraction. I wrote that policy. The licenses renew next month. I'm requesting an expansion, 5,000 more seats. We haven't used the first 4,000, but this time we'll drive adoption, quote unquote.

[01:01:19] Adoption means mandatory training. Training means a 45-minute webinar no one watches. But completion will be tracked. Completion is a metric. Metrics go in dashboards. Dashboards go in board presentations. Board presentations get me promoted. I'll be a senior vice president by Q3. I still don't know what Copilot does, but I know what it's for.

[01:01:39] It's for showing we're investing in AI. Investment means spending. Spending means commitment. Commitment means we're serious about the future. The future is whatever I say it is, as long as the graph goes up and to the right. I mean, it is obviously, like, meant to be funny, but it is not at all far off from some of the conversations I have had with [01:02:00] actual leaders at actual enterprises.

[01:02:02] It was amazing. It was so, well, I don't know if he used ChatGPT to write it, but, like, it's amazing either way.

[01:02:08] Mike Kaput: Yeah. More than a kernel of truth in this, regardless of whether or not it's a parody. 

[01:02:14] TIME Person of the Year

[01:02:14] Mike Kaput: All right, next up. Time Magazine has named quote the architects of AI as its 2025 person of the year. So this is a collective recognition, recognizing the executives who have turned AI into the single most impactful technology of our era.

[01:02:29] This group includes Nvidia CEO Jensen Huang, OpenAI's Sam Altman, SoftBank's Masayoshi Son, Mark Zuckerberg, and Google DeepMind's Demis Hassabis. Now, this cover story characterizes 2025 as the year these leaders shifted the industry from debating safety to a quote sprint to deploy new systems as fast as possible.

[01:02:50] It notes Nvidia has become the world's first $5 trillion company, and ChatGPT usage has surpassed 800 million weekly users. And the article also details the geopolitical and social [01:03:00] fallout, from the Trump administration's $500 billion Stargate infrastructure project to lawsuits alleging that chatbots have induced psychosis in users.

[01:03:10] The feature concludes that under the direction of figures like Huang and Altman, humanity is now moving all gas, no brakes towards the AI future. Now, Paul, just a couple quick interesting things to me here. One is kudos to Time. I came into this with very low expectations. Personally, I kind of felt over the last few years the quality of articles being put out by a lot of major publications, in my opinion, has fallen pretty badly.

[01:03:35] So I thought this was gonna be like underwhelming, really surface level summary. But this is a really well written article. It's pretty long, but I've already sent it to several people, basically saying like, Hey, read this, and you'll get a really good solid summary, kinda what's going on in AI right now.

[01:03:50] It's a really, really great recap. So kudos to that. 

[01:03:53] Paul Roetzer: So pass it along if you're trying to educate coworkers or friends and family.

[01:03:58] Mike Kaput: Yeah. And second, and then [01:04:00] I'll turn it over to you here, the interesting thing to me also was the reactions to this news, 'cause some people are just not happy about this. There's always controversy around the person of the year.

[01:04:09] Jimmy Kimmel did this really scathing bit on this whole thing. It seems like there's some backlash around this too.

[01:04:16] Paul Roetzer: Yeah. I have not followed the backlash online, honestly. I did see this, I, you know, glanced at who they chose to put on the cover. And, you know, I think I saw some of that about, like, the choices that were made, some people that were missing and maybe some people that didn't need to be on there, basically.

[01:04:34] So yeah, I don't know. It's a good, well, not a quick read, but a good read, like you said, Mike. Kind of like a nice high-level overview. Everyone, you know, that they feature obviously plays a role. I do think that there were definitely some people that are probably gonna be very important to the way the future of AI goes that, you know, maybe aren't included.

[01:04:52] The one quote I saw, Time tweeted it, that caught my attention was Jensen, who they were featuring in the article, said, there's a [01:05:00] belief that the world's GDP is somehow limited at 100 trillion, which, it was about 117 trillion this year. AI is going to cause that 100 trillion to become 500 trillion.

[01:05:10] Wow. Now, we've talked recently about Jensen and his, you know, visions for the future. And it's hard. It, you know, they seem exaggerated at times, but it's hard to, you know, for a guy who's done what he's done with that company, it's hard to bet against, you know, that view. And there are definitely some people in the AI accelerationist camp who, who think that GDP growth could dramatically outpace what it has historically been.

[01:05:33] And, you know, again, like you and I have said on this podcast, Mike, I'm pretty confident we're in the very, very early innings of the impact this is gonna have, both good and bad. So I would have a hard time, like, disputing that we couldn't see a massive increase in GDP over the next decade.

[01:05:51] that's hard for people to fathom right now. That'd certainly be an optimal outcome. Yeah. As long as we still have jobs and human purpose. [01:06:00] Right, right. Like other than that, like, 

[01:06:03] Mike Kaput: yeah, I think they skipped over that part in the, person of the year. All right. 

[01:06:08] The Economics of AI and Data Centers in Space

[01:06:08] Mike Kaput: Next up, in a new interview on the Invest Like the Best podcast, investor Gavin Baker, who we've discussed and featured a couple other times here, detailed the escalating infrastructure competition between Nvidia and Google.

[01:06:19] He argued Google currently holds a temporary advantage as a low cost producer of AI tokens, which is basically their strategy designed to suck economic oxygen out of the market. However, he predicts this dynamic will fundamentally shift as NVIDIA's next generation Blackwell chips reach mass adoption.

[01:06:37] And he also, in addition to a bunch of other topics he covered, forecasted this interesting trend we're hearing more and more about, which is a long-term move towards space-based data centers to solve power constraints. He basically says that space-based data centers would be superior in every way to the ones we're building on Earth.

[01:06:57] He also finished this interview [01:07:00] by warning that traditional SaaS companies could be in trouble if they don't adapt to lower-margin AI changes, including agents that impact their business model. So Paul, there's a ton to unpack. You had noted in a post this past week that this episode is an absolute masterclass in a bunch of different topics.

[01:07:19] Like what jumped out to you here that we have to be paying attention to? 

[01:07:22] Paul Roetzer: Dude, I would've paid to listen to this episode. It is so densely packed with insane knowledge about AI models, frontier labs, data centers in space, scaling laws, Nvidia, geopolitics, investing. Like, you gotta listen to it.

[01:07:38] Yeah. If you haven't heard this episode before or ever heard Gavin speak, I mean it's just incredibly impressive stuff. the one I'll, I'll zoom in on, 'cause it was rapid fire. I could spend 20 minutes talking about this episode, but the data centers in space, so like, I think it was last week, I was half joking about how we went from like super intelligence is all anybody was talking about to then, like recursive self-improvement sort [01:08:00] of became the thing in the last like two weeks.

[01:08:02] Data centers in space are the "it" thing. Yeah. Like, to end the year, it is everywhere. Like, Bezos is talking about it with Amazon and Blue Origin, his rocket company. Google's talking about data centers in space in the next three to five years. Elon Musk obviously is, like, big on this with SpaceX.

[01:08:20] Like, I honestly, I don't know if it's just, again, I live in a bubble about this stuff, but, like, everyone was talking about data centers in space. And so, I knew, like, I follow a lot of this stuff pretty closely. I follow SpaceX pretty close. I have a decent idea of, like, why, you know, I've said I think they'll be a $10 trillion company.

[01:08:36] But I never really, honestly, analyzed why data centers in space make so much sense or how viable it is. Mm. The three minutes where he explained it, I was like, oh my God, I get it now. It's just one of those, like, light bulb moments. You're like, this makes so much sense. So he said that the most important thing that's going [01:09:00] to happen in the world in the next three to four years is data centers in space.

[01:09:05] And so then he goes into this insane, like, breakdown of how this works. So he says there's power and there's cooling, and then there are chips. Like, those are the three main components, if you think about it from a total cost perspective. And then, like, that enables you to create these tokens.

[01:09:20] And now comes this, like, magic from these AI assistants. And so he talks about, like, satellites, when they're up there, they have access to 24 hours of sun, and it's 30% more intense 'cause they're outside of the Earth's atmosphere. Then he kind of starts going into, like, how do you handle cooling? So if you're on Earth, you need, like, water to cool these systems.

[01:09:37] Well, up there, you just put the components that need to be cooled on the part that doesn't face the sun, and now it's basically at absolute zero. And then he talks about lasers for communication. It was just like, oh my God, it's so good. So if you wanna know why SpaceX is now the most valuable private company in the world and soon to IPO for probably [01:10:00] $1.5 to $2 trillion next year, go listen to those three minutes of him explaining this.
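As a rough, back-of-the-envelope illustration of the power argument Gavin makes (continuous sun, roughly 30% higher intensity above the atmosphere), here is a small Python sketch comparing the solar energy available to a square meter of panel in orbit versus on the ground. The capacity-factor and irradiance numbers are illustrative assumptions, not figures cited in the episode.

```python
# Rough comparison of solar energy per square meter of panel: continuous
# orbital sunlight vs. a typical ground installation. The numbers below
# are illustrative assumptions, not values cited in the episode.
GROUND_IRRADIANCE_W_M2 = 1000.0   # standard test-condition irradiance at the surface
ORBIT_IRRADIANCE_W_M2 = 1361.0    # solar constant above the atmosphere (~36% higher)
GROUND_CAPACITY_FACTOR = 0.25     # assumed: nights, weather, and seasons included
ORBIT_CAPACITY_FACTOR = 1.0       # assumed: an orbit that sees the sun continuously

HOURS_PER_YEAR = 24 * 365

ground_kwh = GROUND_IRRADIANCE_W_M2 * GROUND_CAPACITY_FACTOR * HOURS_PER_YEAR / 1000
orbit_kwh = ORBIT_IRRADIANCE_W_M2 * ORBIT_CAPACITY_FACTOR * HOURS_PER_YEAR / 1000

print(f"Ground panel: ~{ground_kwh:,.0f} kWh per m^2 per year")
print(f"Orbital panel: ~{orbit_kwh:,.0f} kWh per m^2 per year")
print(f"Orbital advantage: ~{orbit_kwh / ground_kwh:.1f}x")
```

Under those assumptions the same panel collects roughly five times more energy per year in orbit, which is the core of the economic case before you even get to radiative cooling on the dark side of the satellite.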

[01:10:05] And then there was an article, this was in Ars Technica, and the reason it caught my attention is 'cause Elon tweeted about Eric, the author, that, as usual, he knows exactly what he's talking about. So this article talks about what the plan is. And so he says, why would Musk take SpaceX public now?

[01:10:23] So he's talking about the idea of going public at a time when the company's revenues are surging thanks to the growth of the Starlink internet constellation. The decision is surprising because Musk has for so long resisted going public with SpaceX. He has not enjoyed the public scrutiny of Tesla and feared that shareholder desires for financial return were not consistent with his ultimate goal of settling Mars.

[01:10:44] A significant shift in recent years has been the rise of AI, which Musk has been involved in since 2015, as we've talked about, and then he later co-founded xAI in 2023. And obviously Tesla's making a big push into AI and robotics. But raising large amounts of money in the next 18 months would allow Musk to [01:11:00] have significant capital to deploy at SpaceX as he influences and partakes in the convergence of all these technologies at his companies.

[01:11:07] And then, how can SpaceX play in this space? In the near term, the company plans to develop a modified version of the Starlink satellite to serve as a foundation for building data centers in space. Using a next-gen Starlink satellite manufactured on Earth is just the beginning. The level beyond that, and this is a quote from Musk:

[01:11:25] The level beyond that is constructing satellite factories on the moon and using a mass driver to accelerate AI satellites to lunar escape velocity without the need for rockets. That scales to more than 100 terawatts per year of AI and enables non-trivial progress toward becoming a Kardashev Type II civilization, which means, like, capturing all the energy output from the sun.
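For reference, the Kardashev level Musk is alluding to is usually estimated with Carl Sagan's interpolation formula, K = (log10(P) - 6) / 10, where P is power used in watts. A quick sketch; the power figures below are rough, commonly cited ballpark estimates, not numbers from the episode:

```python
# Carl Sagan's interpolation formula for the Kardashev scale:
# K = (log10(P) - 6) / 10, with P in watts. Power figures are rough estimates.
import math

def kardashev(power_watts: float) -> float:
    return (math.log10(power_watts) - 6) / 10

print(f"Humanity today (~2e13 W): K = {kardashev(2e13):.2f}")   # roughly 0.7
print(f"Type I  (~1e16 W): K = {kardashev(1e16):.1f}")
print(f"Type II (~1e26 W): K = {kardashev(1e26):.1f}")
```

Which lines up with the point that follows: humanity isn't even at scale one on Earth yet.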

[01:11:49] We're not even at Kardashev scale one on Earth yet. Based on some projected analysis, SpaceX is expected to have $22 to $24 billion in revenue next year, [01:12:00] but it's on pace for way more than that. And then SpaceX just did an internal, like, a secondary sale so they could do a buyback from employees at an $800 billion valuation.

[01:12:12] So again, when you're looking forward to, like, why are they worth so much money, in part it's because they're gonna build all these data centers in space. But this is, like, a crazy thing, and I mentioned this to Mike when I saw him this morning: at an $800 billion valuation, so this is before they even go public next year for probably two to three times this value.

[01:12:28] Elon Musk is rumored to own 42 to 44% of SpaceX. It was 44% in 2022; I've seen that it's down to 42% now. At 44%, his stock in SpaceX alone would be worth $352 billion. Meaning his stock in SpaceX, forget Tesla and Neuralink and xAI and all these other things

[01:12:55] he's a part of, that alone would make him the richest person in the world by a lot. [01:13:00] So his estimated net worth with that SpaceX valuation is $540 billion, which means Elon Musk is gonna be worth probably a trillion by this time next year, which is bigger than the GDP of all but, like, nine countries. Like, just wow.
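For anyone wanting to check the math being talked through here, the arithmetic is simple; the valuation and ownership stake are the rough estimates mentioned above, not confirmed figures:

```python
# The arithmetic behind the stake figures discussed above. The valuation
# and ownership percentage are the rough estimates cited in the episode.
spacex_valuation = 800e9   # secondary-sale valuation, in dollars
musk_stake = 0.44          # reported 2022 stake (possibly closer to 0.42 now)

stake_value = spacex_valuation * musk_stake
print(f"Stake at $800B valuation: ${stake_value / 1e9:.0f}B")  # ~$352B

# If an IPO lands at two to three times that valuation, the stake scales with it.
for multiple in (2, 3):
    value = spacex_valuation * multiple * musk_stake
    print(f"At {multiple}x (${multiple * 800}B valuation): ${value / 1e9:.0f}B")
```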

[01:13:20] Larry Page is number two at, like, $260 billion. So, astonishing money. And if you listen to the Gavin Baker podcast, you will actually understand in a much greater way why Elon Musk is worth what he's worth and why SpaceX is poised to be one of the most valuable companies in the world within the next three to five years.

[01:13:41] Crazy stuff. My gosh. 

[01:13:44] Mike Kaput: Yeah, that requires quite a bit more imagination sometimes than I think I bring to the table. It's hard to, it's hard.

[01:13:51] Paul Roetzer: Yeah. If you haven't contemplated this stuff before, and this is the first time you're hearing it, especially if you never, like, knew about his intentions to get to Mars and how he's gonna do it and how he's gonna build bases on the [01:14:00] Moon and stuff like that.

[01:14:00] Yeah. Like, you need to, like, take a little time to really process all of this stuff. But there's a great, like, we could probably put these links in the show notes, Mike, but Tim Urban did these amazing Wait But Why articles about Elon Musk and SpaceX back, like, seven, eight years ago.

[01:14:18] I remember reading these things about his, like, vision of how to get to Mars and how they're gonna build Starship and all these things. And he's done it. Like, he's doing all the pieces of this vision. It is wild to see.

[01:14:31] Shopify SimGym

[01:14:31] Mike Kaput: All right, well next up we've got Shopify on Earth. Yeah. Back on Earth. Still back on earth, but honestly, a little sci-fi here.

[01:14:38] Yeah. Because Shopify has introduced a new AI-powered tool designed to simulate customer behavior before you even take any store changes live online. This is called SimGym, and the system generates digital customers with human-like personas that browse websites and complete tasks, which basically allows merchants to test a [01:15:00] redesign or a campaign without using real traffic.

[01:15:03] So according to Shopify executive Mikhail Parakhin, the tool lets businesses identify optimization opportunities and run A/B tests using zero live visitors. SimGym uses data from billions of purchases to model these AI agents, which then provide insights on metrics like add-to-cart rates, cart values, navigation patterns, and the like.

[01:15:25] Shopify CEO Tobi Lütke described the release as, quote, one of the craziest things the company has shipped. So this is currently available as an AI research preview. It's free to install, though Shopify indicates additional charges may apply per simulation run. Now Paul, this reminded me actually of a paper we covered on episode 174.

[01:15:46] Basically, researchers were able to simulate consumer personas to rank products, and it worked just as well, if not better than humans. Now we're seeing this out in the wild. This, I don't know, it's early here, but this seems like an important area to watch, [01:16:00] especially given our marketing backgrounds. Like if this ended up working at scale, there's a lot of marketing and market research functions that get pretty disrupted.

[01:16:10] Paul Roetzer: Definitely. And I could see this, you know, getting to the point where in the next, I don't know if, like, one to two years is too soon, but to where it's gonna be weird to do marketing campaigns, sales outreach, customer success stuff where you haven't pre-simulated everything that was gonna happen. And basically the simulations just get smarter every time, and they learn really, really fast.

[01:16:33] But yeah, I mean, again, we use HubSpot as our, you know, our CRM. Like, it almost already feels odd to me that we can't simulate email sends in HubSpot, or landing pages, and, like, predict performance, because they have all the customer data. Like, this goes back to the idea that started, you know, why I started pursuing AI all those years ago, this idea of a marketing intelligence engine where you could make predictions about outcomes. Yeah.

[01:16:57] Based on data sets. And I [01:17:00] just, yeah, I think this will be it. Like I said, it'll just feel archaic to not have simulations run before you launch products, launch campaigns, launch landing pages.
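To make the idea of simulated shoppers A/B testing a page with zero live visitors concrete, here is a toy Python sketch. The persona segments, base conversion rates, and the way each variant nudges behavior are invented for the example; Shopify has not published how SimGym works internally, so this is only an illustration of the general approach.

```python
# Toy illustration of simulated-shopper A/B testing. The personas, base
# conversion rates, and variant effects are invented assumptions; this is
# not how Shopify's SimGym actually works under the hood.
import random

PERSONAS = {
    # segment: (share of simulated traffic, base add-to-cart probability)
    "bargain_hunter": (0.40, 0.08),
    "brand_loyalist": (0.35, 0.15),
    "window_shopper": (0.25, 0.03),
}

# Assumed multiplicative effect of each page variant on add-to-cart probability.
VARIANT_EFFECT = {"control": 1.00, "redesign": 1.20}

def simulate(variant: str, n_visits: int = 50_000, seed: int = 7) -> float:
    """Return the simulated add-to-cart rate for one page variant."""
    rng = random.Random(seed)  # same seed -> paired comparison across variants
    carts = 0
    for _ in range(n_visits):
        draw, cumulative = rng.random(), 0.0
        for share, base_rate in PERSONAS.values():
            cumulative += share
            if draw <= cumulative:
                if rng.random() < base_rate * VARIANT_EFFECT[variant]:
                    carts += 1
                break
    return carts / n_visits

for variant in VARIANT_EFFECT:
    print(f"{variant}: simulated add-to-cart rate = {simulate(variant):.2%}")
```

The interesting part, per the discussion above, is that a real system would keep recalibrating those persona parameters against actual purchase data, so the simulations would get better with every campaign.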

[01:17:12] Mike Kaput: Yeah. And I was just kind of brainstorming before we started recording, just like. What are the downstream implications of this?

[01:17:19] Like, if it eventually works at scale as advertised, so big if, but I just wanted to throw these out there, take 'em or leave 'em even. But obviously this is just a massive accelerant to any type of site optimization. If you are a marketer, if you are in that world, huge immediate savings on running paid traffic to sites for optimization,

[01:17:37] which people often do. If this works, like, traditional market research, you have to pivot your entire business model tomorrow. Like, I don't know how their business works, but whatever they're charging would not be worth anything after something like this. Just kind of interesting to think about. I wonder, does this continue to work if agents end up buying everything for us?

[01:17:58] I'm not sure, but [01:18:00] it's interesting to consider all these implications 

[01:18:02] Paul Roetzer: Plus the friction with creatives, because, like, you know, part of the value of a creative is their intuition and the subjective nature of their experience that goes into devising ads and creating products and developing logos and all those things.

[01:18:16] Yeah. If it's just like, yeah, we're gonna trust the data. All right. It does create a lot of friction down the road. 

[01:18:24] Mike Kaput: Well, it's certainly hard to say, like, if this worked and you're just like, wow, we have a 25% uptick in sales, like nobody's gonna go back to your position because it's more creative. Probably.

[01:18:35] Yeah. Would be my guess. Yeah. 

[01:18:37] Research on Teen AI Usage

[01:18:37] Mike Kaput: All right. Next up, a new survey from the Pew Research Center looks at how AI is being integrated into the digital lives of American teenagers. So this was data released this month, in December. It reveals 64% of US teens between the ages of 13 and 17 now use AI chatbots; roughly 30% use them on a daily basis.

[01:18:57] ChatGPT is the clear market [01:19:00] leader right now. It's used by 59% of the teens surveyed. That's more than double the adoption rate of Gemini and Meta AI, which sit at 23% and 20% respectively. There's some interesting demographic usage patterns. Black and Hispanic teens are more likely to use chatbots daily than white teens.

[01:19:19] ChatGPT specifically sees higher adoption in households earning $75,000 or more annually. So Paul, just some more data here. On episode 184, we had featured some research from a site for parents called KidsOutAndAbout.com. It's run by one of our longtime podcast listeners. In that research, only 5% of parents surveyed said they felt truly confident in their ability to guide kids on AI usage. Like, this

[01:19:44] Pew research just shows how important it is to address that, because 64% of teens say they're using AI chatbots. And honestly, it seems a little low to me.

[01:19:53] Paul Roetzer: I was just noting the difference between about 30% using it on a daily basis, and then [01:20:00] the stat we heard earlier from the Gallup poll, only 10% of workers using it daily.

[01:20:04] So apparently, like, be careful, because these teens coming up in a few years are gonna be in the workforce and they're gonna be, you know, AI native. Better catch up as the current workers. Yeah. I don't know. I mean, I see this with my own kids. I was actually having a conversation with my daughter this weekend.

[01:20:22] She brought up, you know, ways she was using ChatGPT to inspire some of the storytelling she was working on and things like that. Just, like, unprompted. It wasn't like I was guiding it. She just said, hey, here's what I was doing with it this weekend. And it's like, oh, that's a really smart way to use it.

[01:20:37] so yeah, I think kids are gonna start figuring this out, but I also know, like in her case, a lot of her friends don't use it. Yeah. Like she's, she's definitely more of an anomaly within her eighth grade class in terms of kids using it. but I also know there are some kids misusing it based on what she's telling me.

[01:20:53] so there, there are some using it in ways that I would not guide. And that actually leads back to the stat you mentioned about [01:21:00] parents not sure how to guide their children. Yeah. I can tell you from firsthand experience, there are kids in that age bracket using this, who are getting no guidance from the parents.

[01:21:08] Yeah. So, 

[01:21:10] Mike Kaput: yeah. 

[01:21:11] OpenAI Certifications

[01:21:11] Mike Kaput: All right. Our last topic this week. OpenAI is moving directly into workforce training. They're launching their first official certification courses. So this rollout of OpenAI Certifications, as they're calling them, begins with two specific tracks. The first is something called AI Foundations.

[01:21:28] It's designed to teach practical skills directly inside the ChatGPT interface. So in this format, AI is going to act as a tutor providing a practice space and immediate feedback for users. Right now, this is currently being done through pilot programs with major corporate partners, including Walmart, John Deere, and Boston Consulting Group.

[01:21:48] The second course is ChatGPT Foundations for Teachers. This was launched on Coursera. It's tailored for K-12 educators and shows them how to use the tech for things like lesson planning and administrative [01:22:00] support. So OpenAI says these credentials verify job-ready skills in a market where 800 million people now use ChatGPT weekly.

[01:22:08] And where workers with AI expertise reportedly earn about 50% more than those who don't have it. So Paul, I just thought this was generally very interesting. The part about this being a full learning experience available directly in ChatGPT also jumped out to me as notable.

[01:22:25] Paul Roetzer: I definitely, like they've been moving in this direction.

[01:22:28] They hired an executive from Coursera, like, I don't know, like eight, eight months ago or so. Yeah. So it was pretty obvious they were gonna start investing pretty heavily. They rolled out some other education initiatives a few months back. early signs are, it's pretty good. Like I went in and played around a little bit.

[01:22:41] in some of the early courses they put out. So I think this is good to see. Like, the way we think about our AI Academy is that it should be complementary to the stuff that's coming out of the AI labs and the tech companies, right? So, you know, we'll often guide people, like, we have content around, obviously, ChatGPT, Google Gemini, and Anthropic within our Academy.

[01:22:59] But we'll always guide [01:23:00] our learners like go and like, take the courses and get certified wherever you can with these technology companies and even entry level employees. Like I would absolutely encourage people go get as many of these certificates as you can, go get a broad range of perspectives and experience with the platforms and take advantage of this stuff if they're making this education free.

[01:23:18] Great. Like, go, go do it. 

[01:23:21] Mike Kaput: Awesome. Well Paul, just a couple of final announcements here and then we'll wrap up. if you have not left us a review on your podcast platform of choice, please go do so. It helps us make the show better and reach more people. If you could also give us a follow on whatever podcast platform you use, that would be extremely helpful as well.

[01:23:41] And as a reminder, go ahead and take this week's AI pulse survey, SmarterX dot ai slash pulse to go take the few questions, help us out learning more about our audience, and learning more about how people are viewing this week's topics in ai. 

[01:23:58] Paul Roetzer: Good stuff as always, Mike. And [01:24:00] again, the newsletter, I mean, there was, there's probably 10 other topics we wanted to talk about this week and we already went a little longer than normal, but hopefully, you know, it was worth the extra 10 minutes or so this week.

[01:24:08] There's just so many good things as we wind the year down. stuff we wanna make sure we touch on leading into 2026. So again, we will be back, what is that gonna be, December 23rd? Does that sound about right? Yes. The 23rd 

[01:24:20] Mike Kaput: is gonna be our final episode of the 

[01:24:21] Paul Roetzer: year, and then we'll be back January 6th.

[01:24:24] Sixth, yep. Sixth. Okay. Alright. So thanks everyone for listening as always, and have a great week. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX dot AI to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, and earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community.

[01:24:55] Until next time, stay curious and explore [01:25:00] ai.
