OpenAI is drawing fire after its CFO hinted the company might want a government "backstop" for its massive infrastructure costs. And Microsoft has published a new manifesto pledging to build "humanist superintelligence" that keeps humans in control.
This week, Paul and Mike talk about those stories and more, including Google's new paper on the future of AI in learning, new data that shows AI is driving layoffs, and the backlash against Coca-Cola's latest AI-generated holiday ad.
This week's episode also covers a feud between Amazon and Perplexity over AI shopping agents, a shocking deposition from Ilya Sutskever about OpenAI's internal power struggles, and much more.
Listen or watch below, and find the show notes and transcript further down.
This Week's AI Pulse
Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI.
If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
00:09:09 — OpenAI Draws Fire for Comments About Government Backstop
- OpenAI CFO Calls for More Exuberance - Bloomberg
- OpenAI's Sam Altman backtracks on CFO's government 'backstop' talk - NBC News
- X Post from Sam Altman
- X Post from David Sacks
- Debt Has Entered the AI Boom - The New York Times
00:23:02 — Microsoft’s Humanist AI Manifesto
- Microsoft’s Humanist Superintelligence Manifesto - Microsoft
- Microsoft Outlines OpenAI-Independent AI Vision - The Wall Street Journal
- Microsoft Team Pledges Humans Stay in Charge - Semafor
- X Post from Mechanize: Critique of Microsoft Superintelligence Plans
00:38:36 — Google AI and the Future of Learning
- AI and the Future of Learning - Google Services
- X Post from Yossi Matias on Google’s AI in Learning Initiative
- AI and the Future of Learning - Google Blog
00:48:28 — Data Shows AI Is Driving Layoffs
- AI Drives Highest October Layoffs Since 2004 - Bloomberg
- Nov 06 October Challenger Report: 153,074 Job Cuts on Cost-Cutting & AI - Challenger, Gray & Christmas
- X Post from Andrew Curran on AI Jobs Impact Act Announcement
00:52:43 — Coca-Cola’s AI Christmas Ad Generates Controversy
- Coca-Cola Is Trying Another AI Holiday Ad. Executives Say This Time Is Different - Hollywood Reporter
- Man Who Created AI Holiday Coke Ad Says It Took More Creativity Than You Realize - Hollywood Reporter
00:57:46 — Amazon and Perplexity Feud Over Agent
- Amazon Demands Perplexity Halt Purchasing Agent - Bloomberg
- Perplexity Responds: Bullying Isn’t Innovation - Perplexity
01:03:18 — Ilya Sutskever Deposition
- OpenAI Founder Deposition on Anthropic Talks - The Information
- X Post from Helen Toner: Take on OpenAI Deposition
- Full Deposition
01:08:48 — Apple Nears Google Deal
01:11:56 — AI Companies Are Going on the PR Offensive
- Meta Touts Data Centers’ Economic Impact - Meta
- Google Announces Oklahoma Workforce Pipeline - Google Blog
This episode is brought to you by our MAICON 2025 On-Demand Bundle.
If you missed MAICON 2025, or want to relive some of your favorite sessions, now you can watch them on-demand at any time by buying our MAICON 2025 On-Demand Bundle here. Use the code AISHOW50 to take $50 off.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: We have these two opposing sides. So there's the humans should always remain in control, Mustafa Suleyman approach. And then there's the, we won't have control and we should just accept that all the, like the all knowing AI is going to control us approach. So this is what's leading to increased chatter in political circles, religious circles.
[00:00:20] And societal revolt is too strong of a word at the moment, uneasiness, I'll say. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host.
[00:00:41] Each week I'm joined by my co-host and marketing AI Institute Chief Content Officer Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for [00:01:00] all.
[00:01:03] Welcome to episode 179 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording November 10th, 11:00 AM Eastern Time. It could be an interesting week. I don't know. I mean, we, I, we're running outta weeks to launch all these models, Mike, there's all these rumors of models coming out and I, you know, I think we still may see a, a few of them before the end of the year, and gosh, it's crazy to think we only have about, what, seven weeks left in the year.
[00:01:31] Mike Kaput: Yeah. Seven weeks. So, you know, like two years in ai. Seriously.
[00:01:36] Paul Roetzer: All right. Well, we, we have some really interesting big picture topics this week. I actually had, I, I don't know if fun is the right word, but certainly like intriguing process going through, getting ready for, this week's episode over the weekend.
[00:01:50] there's some topics I'm just excited to talk about. I think that are gonna open people's minds and sort of lay the groundwork for some of the bigger conversations I think we're gonna [00:02:00] start having about ai, in society and education and business, things like that. So, plenty to get into. This episode is brought to us by MAICON, 2025 On Demand.
[00:02:09] If you listen to the podcast regularly, you have heard us talk about this. So MAICON was almost a month ago, right? Mike? Was it? Yeah, four weeks ago now. Yeah. kind of crazy to think about, but you can get on-demand access immediately to 20 of the top breakout sessions and keynotes from that event.
[00:02:27] they're an incredible value, for what you get. So it starts with my keynote, but you can also actually watch my keynote on YouTube. Now it's available for free on YouTube, on our, SmarterX YouTube channel, but we have becoming AI-driven leader with Geoff Woods, which is amazing. Mike's 38 tools, shaping the future of marketing better than prompts.
[00:02:48] with Andy Crestodina. We have Michelle Gansle, the former Chief Officer for McDonald's, on empowering teams in the age of AI, and the human side of AI, which is incredible, with [00:03:00] leaders from the AI labs themselves talking about what's going on within the labs. Just an amazing collection of sessions.
[00:03:07] So check that out. You can go to MAICON.ai, that's M-A-I-C-O-N.ai, and you can click on the on-demand bundle. We'll also put the link in the show notes and you can use AISHOW50 for $50 off of that. All right, so last week, Mike, we introduced AI Pulse, which was our weekly, now I guess we could call it a weekly series where we're going to do short surveys and try to kind of gauge what's going on with our audience, get feedback on key topics.
[00:03:34] so that's the premise is each week we're just gonna kind of lead off with a couple of questions, and then you'll have the week, before the next podcast to go in and give us your thoughts. And they're quick hitting. These are not, this is not a demand gen thing for us. We're not even collecting email addresses as part of this survey.
[00:03:50] So, it's literally just, if you see your email address, it's only because it's tied to the Google form, which is, we're using Google Forms, but we aren't, you know, collecting emails and marketing to you [00:04:00] as a result of this. It's literally just research for us. So we asked last week about the following statements.
[00:04:05] Which best describes your current personal feeling about AI's impact on job security? We had just over a hundred responses, so this is a small sample size. This is not like projectable data, but it's more just like, to give you a sense of kind of how our audience is thinking about things. So again, which of the following statements best describes your current personal feeling about AI's impact on job security?
[00:04:24] The number one answer, Mike, was it's a near-term threat: I'm not worried about today, but I'm concerned about the next one to two years. That was at 36.8%, followed closely at 28.3% with it's an immediate threat, I believe it's already causing significant job displacement. And then 27.4% said it's a long-term rebalancing.
[00:04:44] It's gonna cause some change, but you know, over time it's gonna be okay. 7.5% said it's an opportunity, it'll create more jobs and value than it disrupts. And then no responses, it looks like, Mike, said it's overhyped. So no one that listens to our [00:05:00] show feels it's overhyped. The second question we asked was, which statement best describes your personal day-to-day use of AI tools in your professional work?
[00:05:09] by far the top answer was, it's a habit. It's fully integrated. I use it daily as a core part of my workflow. That is 58.5% of respondents. 34% said they use it consistently multiple times a week for specific important tasks. 7.5% occasional, mostly for experimentation and non-critical tasks. And then no one said rare to none, which doesn't surprise me.
[00:05:33] Right. With our audience. and then as, as we said last week, we'll ask some questions just to get a sense of the audience a little bit more. So, things like job title, size of company, and again, these are more just benchmarking to try and get a sense of like who the people are that are responding, without collecting that data, you know, at a personal level and making it identifiable.
[00:05:52] so nice mix of audiences. In terms of like industries, the biggest, at 17.9%, was professional services, marketing. [00:06:00] It's a lot of probably agencies, consultants, I would guess. And then professional services, other, that was at 17.9%. 14.2% was other, 13% in education. Like we knew we had a big education audience.
[00:06:10] Yeah. But it's nice to see that software was right there at 12%, manufacturing at 10%. So again, just to give you a sense. And then the titles, Mike, were all over the place. Yes. a lot of leadership titles like CEOs, founders, chief technology officers, chief operating officers, a heavy dose of marketing and sales, directors of marketing, chief marketing officers, tech and data: software architect, ML engineer, data analyst, and then product and strategy.
[00:06:37] Looks like it had a decent amount. And then consulting and education. So again, we, we kind of sensed our audience was a pretty diverse mix of backgrounds of industries and titles and things. And this, this certainly seems to prove that out. So. That was last week. Again, you can take part in this week's and we actually have all of these on the site.
[00:06:53] You'll be able to go back and look at this each week. Yeah. so this week's we're asking, do you believe the concentration of [00:07:00] power in a few major AI labs is a significant problem? And number two is what should be the primary focus for advanced AI development right now? So both of these questions are gonna make a lot more sense as we go through today's content, Mike.
[00:07:12] but again, you can go to, what is it, Podcast.smarterx.ai/ai-pulse
[00:07:18] Mike Kaput: and we'll include a link to all of this and to the survey right in the show notes.
[00:07:25] Paul Roetzer: Yep. Yep. So yeah, we'd love to have you participate. It's, great to see these answers and we'll keep this going. Again, that was the first one.
[00:07:32] So yeah, we had no idea what to expect if people are gonna take the survey or not. So, it's pretty cool and we'll just kind of keep doing it this way. We'll share the insights from the week prior and then we'll put it in. And then we also, we'll include the summaries of this in the weekly newsletter.
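(As a quick sanity check on the job-security percentages read off earlier: the show only reported shares, not raw counts, so the counts below are hypothetical, chosen to reproduce the reported figures from the roughly 106 responses mentioned.)

```python
# Hypothetical raw counts for the job-security question; the episode only
# reported percentages, so these counts are assumptions chosen to match
# the ~106 responses and shares quoted above.
counts = {
    "Near-term threat (next 1-2 years)": 39,
    "Immediate threat": 30,
    "Long-term rebalancing": 29,
    "Opportunity": 8,
    "Overhyped": 0,
}

total = sum(counts.values())  # 106 responses
shares = {answer: round(100 * n / total, 1) for answer, n in counts.items()}

for answer, pct in shares.items():
    print(f"{answer}: {pct}%")
```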
[00:07:45] Mike Kaput: And as an AI shout out here, Google Forms kudos, because Google has layered Gemini over Google Forms now. So all of these results were instantly summarized and they created the charts for me. And it was the work of just a [00:08:00] moment to, at a glance, see everything we needed to ask about when we were trying to figure out, okay, what were the responses?
[00:08:05] What's worth paying attention to? My gosh. Like I feel like we should use it for every survey moving forward. Yeah. So we did for the
[00:08:11] Paul Roetzer: AI council going into MAICON week when we had the council together, and I think I shared that on the podcast. It is, it's one of those like, oh my gosh, like life before having Gemini baked into Forms and life after.
[00:08:23] It's incredible. And I've started to feel the same way with, integration into Google Sheets. Yeah. Like I use Gemini and Google Sheets all the time, and it just seems like it's always been there. Like it's almost, I don't wanna remember what working in Sheets was like before I had it. All right. Well we had, like I said, some big picture topics.
[00:08:38] one that was just kind of crazy to follow. Last week, Mike, was starting off with this OpenAI story and the blowback they got from what seems like a bit of a misspoken way of explaining what they were hoping to get from the government. And it led to some confusion and some retractions and yeah, and it, but it actually is like an important topic on a [00:09:00] bigger picture here about are we or are we not in this AI bubble?
[00:09:02] And, so lead us off, Mike, with this OpenAI issue, I guess, from last week.
[00:09:09] OpenAI Draws Fire for Comments About Government Backstop
[00:09:09] Mike Kaput: Yeah. They are facing a little bit of controversy right now because OpenAI CEO Sam Altman is basically pushing back on talk that the company wants government help to fund its massive data center build out. Now this talk started because at a recent Wall Street Journal event, OpenAI's CFO, Sarah Friar, said all sorts of stuff.
[00:09:29] This was not the topic of the event, but in the course of some remarks during the event hinted that the government might quote, backstop the guarantee that allows the financing, meaning the financing for data centers, to happen. This set off some controversy because some industry watchers believed it meant that OpenAI wanted the government to essentially de-risk or guarantee its trillion-plus dollar bet on building the data centers and computing infrastructure needed to build [00:10:00] advanced AI systems.
[00:10:01] This worried a fair amount of people, including the White House themselves. White House AI czar David Sacks posted on X: There will be no federal bailout for AI. The US has at least five major frontier model companies. If one fails, others will take its place. A few hours later, Sam Altman put out a post on X addressing the controversy, saying unequivocally OpenAI neither has nor wants guarantees for its data centers from the government.
[00:10:29] He emphasized that taxpayers should not be in a position where they might have to bail out companies that make bad business decisions. So Paul, you alluded to this. The reason this is happening and touching a nerve is because there's this deepening anxiety about the AI infrastructure race at large.
[00:10:46] Analysts have started to warn that this surge in spending from major players like OpenAI, Nvidia, Microsoft is inflating valuations and concentrating risk. And investors additionally are questioning [00:11:00] how is OpenAI actually going to finance this $1.4 trillion in long-term commitments to build data centers and computing infrastructure that is all needed to power the next wave of AI, given that it only projects $13 billion in revenue this year.
[00:11:17] So Paul, I guess reading this and kind of learning about the controversy, it came to mind, like, was this just a misunderstanding or a misinterpretation, or did they accidentally say the quiet part out loud?
[00:11:32] Paul Roetzer: Yeah, it is hard to tell. So Sam's sort of like retraction was a little bit wonky. It's like, Hey, we don't want this, but what we do want, and you could actually see by the, but we do want how it could be misinterpreted of what they're actually looking for here.
[00:11:47] So I do think there's a lot more to the story of specifically what OpenAI is trying to do here. But I think if we zoom out, like why is this the lead topic today over potentially just like a misspoken thing from the CFO, [00:12:00] because I do think it gets at the uneasiness that investors are starting to feel, that economists are starting to feel, that business leaders may be feeling around how impactful and important these like four to five companies are becoming.
[00:12:15] And so the key for me is that the US economy is actually becoming increasingly reliant on AI and the companies that are building and empowering it. So we're starting to see, obviously, the impact on jobs. We seem to talk about that every week. We'll talk about it again this week. And then GDP seems to be being affected by this build out of the data centers.
[00:12:37] Mm. So the spending on CapEx for energy and data centers by these big labs. If you think about Microsoft, Google, OpenAI, Meta, xAI, and we could throw Amazon in there. They're gonna spend probably close to a half a trillion in 2026. and certainly trillions thereafter. OpenAI on their own plans to spend well over a trillion just in the next, like, six to seven [00:13:00] years.
[00:13:00] So we're looking at trillions of dollars. Now, why are they doing this? Well, because as we've talked about, the market opportunity is trillions of dollars. So the companies that build out these data centers, that control the cloud infrastructure, that serve up the intelligence that everyone is gonna be demanding in every piece of software they use and every hardware device they use, what I've called the age of omni intelligence, where literally the AI is just omnipresent in everything we use.
[00:13:27] And their omni models, meaning they don't just do text, which doesn't require a massive amount of compute. They do agentic, they do reasoning, they do image and video generation. Eventually they'll generate world models and video games, and so the compute demand is gonna become so massive. The other thing tied to the economy is a lot of what we're seeing is these AI labs talking about the fact that the build out of energy and data centers is what's going to create jobs.
[00:13:53] So while jobs may be flat in other areas, they're saying, hey, we're gonna hire 30,000 people over the next five years to build [00:14:00] out all these data centers. And so truly the economy starts to become dependent upon this being true. So if what OpenAI is presenting as this future, and again, not just OpenAI, these other major labs, they are all in on this build out and they need the energy, they need the data centers, because they expect the demand for intelligence to keep skyrocketing.
[00:14:20] From a government perspective, they are very much on the record as saying they plan on quote unquote winning this race against China at all costs. So the government needs these private companies to have these bold visions to take on enormous risks in order to get to super intelligence first or AGI or whatever they want to call it.
[00:14:40] So the danger is that we become too reliant on these companies and they become kind of that quote unquote too big to fail. So the thing Sacks was referring to, so for context, what is too big to fail? Well, it was a 2009 book by Andrew Ross Sorkin about the 2008 [00:15:00] banking crisis, and then it became an HBO movie as well.
[00:15:03] So now what happened in the banking crisis? Many of our listeners are probably old enough to remember what was going on back in 2008, but in essence, the housing bubble and risky lending. So banks and mortgage lenders issued enormous numbers of subprime loans, which are mortgages to borrowers with poor credit.
[00:15:21] Often with little verification of income or assets. These loans were then bundled into mortgage-backed securities and collateralized debt obligations, or CDOs, that were then sold worldwide. The assumption of that economy was that housing prices are just always gonna rise, so we'd never have to like kind of pay the piper on this.
[00:15:40] So there was excessive leverage, and complex financial products were created that like people didn't really understand. They started getting passed around, and all of a sudden these banks wake up in September 2008 like, oh my gosh, they might fail. And what is gonna happen if these big banks fail and they're not able to be supported privately anymore?
[00:15:58] Well, the government has to [00:16:00] step in and intervene. So Treasury Secretary Henry Paulson, Fed Chair Ben Bernanke, and New York Fed President Tim Geithner led emergency efforts, orchestrating bailouts, forced mergers, and eventually the $700 billion Troubled Asset Relief Program. So the government had to step in and fix the problem.
[00:16:17] And so that's where people are worried, like, oh my gosh, what if this happens again? What if we don't all understand the complexity of this infrastructure? And all of a sudden, OpenAI shows up and wants these unique financing packages put together. So there's an article in the New York Times we'll link to that talks about how this debt has entered the AI boom.
[00:16:36] And so that article says, to fund heavy spending on infrastructure for artificial intelligence, companies have leveraged a growing list of complex debt financing options. According to McKinsey, $7 trillion in data center investment will be required by 2030 to keep up with projected demand. Google, Meta, Microsoft and Amazon.
[00:16:56] So again, they're not even including OpenAI and xAI, which aren't publicly [00:17:00] traded. Yeah, have spent $112 billion on capital expenditures in the past three months. To obtain the capital they need, hyperscalers have leveraged a growing list of complex debt financing options, including corporate debt, securitization markets, private financing, and off-balance-sheet vehicles.
[00:17:19] That shift is fueling speculation that AI investments are turning into a game of musical chairs, whose financial instruments are reminiscent of the 2008 financial crisis that we just talked about. Big tech companies are looking for new sources of funding. While Meta, Microsoft, Amazon, and Google previously relied on their own cash flow to invest in data centers.
[00:17:38] More recently, they've turned to loans to diversify their debt, repackaging much of it as asset-backed securities. About $13.3 billion in asset-backed securities backed by data centers have been issued across 27 transactions this year, a 55% increase. So basically using the data centers as collateral to borrow money.
[00:17:59] Well, if [00:18:00] demand for the data centers collapses at some point, all of a sudden the collateralization of the loans falls apart and somebody owes somebody else hundreds of billions of dollars. So you can see how this like starts to almost feel like this shell game and it's like, oh boy, we better not be wrong.
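(To make the collateral mechanics being described here concrete, here's a toy sketch; every number in it is hypothetical and not taken from any actual deal.)

```python
# Toy model of a data-center-backed loan (all figures hypothetical).

def max_loan(collateral_value: float, ltv: float = 0.7) -> float:
    """Cash a lender advances against collateral at a given loan-to-value ratio."""
    return collateral_value * ltv

data_center_value = 10e9            # $10B appraised value (assumed)
loan = max_loan(data_center_value)  # lender advances $7B against it

# If demand for compute collapses and the appraisal drops by half,
# the collateral no longer covers the outstanding loan.
stressed_value = data_center_value * 0.5
shortfall = loan - stressed_value

print(f"loan ${loan/1e9:.1f}B, stressed collateral ${stressed_value/1e9:.1f}B, "
      f"uncovered ${shortfall/1e9:.1f}B")
```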
[00:18:19] And so that's where the comments from the CFO come in, is they know the government needs them to do this build out, right? They're gonna take that risk on, but if it doesn't go as planned, like, they don't want to be left high and dry. So the comments from Sarah Friar, the CFO of OpenAI: she suggested the market is overly focused on and anxious about this possible bubble.
[00:18:41] And there isn't enough exuberance going on. but then she talked about, we're just building out all of the infrastructure that allows more compute to come into the world. I don't view it as circular at all, when she's talking about like how this is all being done in these deals. But then she said, in addition to OpenAI's deals with chip makers, the ChatGPT maker is also eyeing a broad mix of [00:19:00] financing vehicles to fund its infrastructure.
[00:19:02] Friar said OpenAI is quote unquote looking for an ecosystem of banks and private equity to support its ambitious plans. And then this is where all the problem came in. She also hinted at a role for the US government to quote, backstop the guarantee that allows the financing to happen. Hmm. So that's where like all this blew up, is the word backstop.
[00:19:22] So she then actually had to retract this and posted on LinkedIn. She said, I want to clarify my comments earlier today. OpenAI is not seeking a government backstop for our infrastructure commitments. I used the word backstop and it muddied the point. As the full clip of my answer shows, I was making the point that American strength in technology will come from building real industrial capacity, which requires the private sector and government playing their part.
[00:19:49] As I said, the US government has been incredibly forward-leaning and has really understood that AI is a national strategic asset. So long story short, AI is becoming increasingly political and [00:20:00] its impact on the economy is growing way more than most investors or business leaders realize or understand.
[00:20:07] And so no matter how they try and clarify this, the reality is you have private companies taking on enormous risks that the government is encouraging them to do and needs them to do. And of course, they're going to try and find ways to say, okay, we're gonna do this, but we, we need your help. If, if at some point like this isn't going as planned.
[00:20:30] And David Sacks, again, who's the, you know, the AI guru for the administration, speaks on behalf of the administration, is like, yeah, tough luck. Like you go under, we got four other labs we're depending on, so you're on your own. But then everyone hedges. So even Sacks is like the tough guy. Like, oh, and then he comes back, and what was his, his last thing? I think I had it in here.
[00:20:52] yeah, he said, finally, to give benefit of the doubt, I don't think anyone was actually asking for a bailout. That would be ridiculous. But company executives can clarify [00:21:00] their own comments. So, you know, again, everyone's gonna start talking about this, but if there's this much conversation going on, people may use the wrong words here and there, but they absolutely are gonna want the support of the government to take on the risk they're taking on, however you wanna phrase that, or whatever that looks like.
[00:21:18] Mike Kaput: And so, just to be sure I have this clear, the real risk here, the real danger is: there are all these pie-in-the-sky projections about the demand for intelligence moving forward. If that demand does not materialize, given the speed and scale of these data center build outs and investments, these companies can be left high and dry, being unable to cover their loan obligations because not enough people are using their products.
[00:21:44] Paul Roetzer: Correct. The economy's screwed, right? Yeah. So like, who's the, who's the dude from The Big Short, who is it, do you remember?
[00:21:50] Mike Kaput: Yeah, yeah. Michael, Michael Burry, who was the, okay, Michael Lewis wrote the book, correct. Burry was the guy who made the big short on the company. I
[00:21:58] Paul Roetzer: believe I just saw last week. He took [00:22:00] a billion dollar position against this build out.
[00:22:03] So the guy who bet against, oh boy, those mortgage-backed loans is now betting against the build out of AI. But yeah, Mike, you're a hundred percent right. The assumption in all of this is that scaling laws continue, the models keep getting smarter, they increasingly can do human labor. So we talked about this $11 trillion annual wages market within the US.
[00:22:26] So the assumption is they will continue to be able to do more and more of that work, and that we humans will continue to have an insatiable demand for intelligence in every product we use and every piece of software we use. If that holds true, yeah, then all these data centers, all this energy we're trying to create will be used, and we will probably still be at a deficit for it.
[00:22:50] If at some point supply and demand gets outta whack, we're screwed, is basically where this is all going.
[00:22:58] Mike Kaput: Alright. [00:23:00] That is a good, clear breakdown there. I love that.
[00:23:02] Microsoft’s Humanist AI Manifesto
[00:23:02] Mike Kaput: so our second big topic today concerns Microsoft. So Microsoft is actually forming a new super intelligence team and it's got a bit of a twist.
[00:23:12] And the twist is they promised to keep humans firmly in control. So the head of AI at Microsoft, Mustafa Suleyman, who we've talked about quite often, published an announcement called Towards Humanist Superintelligence, where he said this new team will pursue what they call humanist superintelligence, or what they would call powerful AI systems that are explicitly designed to serve, not surpass, humanity.
[00:23:37] Suleyman argues that the world's current race towards AGI misses a deeper question, which is what kind of AI should humanity actually want? And their answer is, instead of unbounded autonomous systems that outthink humans, Microsoft is proposing a model that is built more for containment, alignment and purpose in serving humanity.
[00:23:58] So humanist superintelligence, or [00:24:00] HSI, envisions essentially domain-specific systems that solve concrete global challenges, things like medical diagnoses or clean energy, without drifting toward uncontrollable behavior. So he describes it as superintelligence with limits, which is designed to keep humanity in the driver's seat while amplifying progress.
[00:24:22] So this approach kind of rejects both the doomer fears and the accelerationists, positioning Microsoft's new team as a long-term steward of safe and practical AI. And Suleyman even wrote in here, humans matter more than AI. So Paul, I'm curious, like why does this manifesto's kind of positioning really matter here?
[00:24:43] Paul Roetzer: So I don't think the timing is a coincidence with what's going on with these superintelligence labs at, at Meta, and now OpenAI very openly talking about superintelligence. but the timing was particularly keen based on what happened with Elon Musk last week. [00:25:00] In essence, what's going on is we have these two opposing sides.
[00:25:03] So there's the humans should always remain in control, Mustafa Suleyman approach. And then there's the, we won't have control and we should just accept that all the, like the all-knowing AI is going to control us approach, believe it or not. Like if, if that sounds weird to you, there is absolutely a techno-optimist camp who assume it is inevitable that AI takes control.
[00:25:25] and that, that's okay that we as a species, like eventually other species come along, that are just more powerful and that's who ends up leading society. So this is what's leading to increased chatter in political circles, religious circles and societal revolt is too strong of a word at the moment.
[00:25:44] uneasiness I'll say. So if, if you follow Twitter closely, you can't go a day without leading politicians talking about AI now, and as a matter of fact, the Pope is like a regular [00:26:00] tweeter now about AI. So we are gonna see this increasing, you know, like division within society of where does this really all go.
[00:26:10] So, to the, we should cede control to the all-knowing AI side: we'll get into Elon Musk and last week's Tesla shareholder meeting. So if you didn't follow this story, Elon sort of got a major pay package worth billions of dollars taken from him, because how the package worked was determined to be unlawful.
[00:26:31] So basically Elon threatened last year to leave Tesla, but in not so many words, if they didn't approve a new pay package for him, that was not so much about the money. He's already the richest person in the world. It was about control and control when humanoid robots become available at scale. Hmm. So in essence, what was happening is Elon sees the future of Tesla as a [00:27:00] humanoid robot company.
[00:27:00] The cars are gonna be very secondary to what he thinks the overall market opportunity for Tesla is. And so he is envisioning the AI he's building at xAI, his lab, being embodied within Tesla robots, which he claims will be the biggest product in human history, because everyone will want to have probably multiple humanoid robots.
[00:27:19] So billions of humanoid robots will exist, is what he thinks is gonna happen. So he wanted a $1 trillion pay package approved. One trillion dollars for an individual person. But the more important thing was that the $1 trillion represented control of Tesla and control of where these robots go. So, again, this is a bit of a side story, but it's interesting.
[00:27:42] So CNBC has a breakdown of how this works, this trillion-dollar pay package, which sounds absurd, but we'll run with it for a minute. So, the pay package for Musk, already the world's richest person, consists of 12 tranches of shares to be granted if Tesla hits certain milestones over the next [00:28:00] decade.
[00:28:00] It would also give Musk increasing voting power over the company, acceding to demands that he's made publicly since early 2024. His ownership would increase from about 13% to 25%, adding more than 423 million shares. So again, this is not about money to him. This is about control. The first tranche of stock gets paid out if Tesla hits a market cap of $2 trillion, which they will probably hit by spring of next year.
[00:28:28] Tesla's current market cap is $1.54 trillion. Awards tied to market cap gains are paired with operational achievements. The next nine tranches would be awarded as the value of Tesla's market cap increases by every $500 billion. So every half a trillion, he gets another tranche of these shares, up to $6.5 trillion. He would earn the last two tranches
[00:28:52] if the market cap rises by increments of $1 trillion, meaning when they hit $8.5 trillion. So they're $1.5 trillion today; when they get to [00:29:00] $8.5 trillion, then he would get the full package of a trillion dollars. So that seems crazy, but that is the backdrop to last week's shareholder meeting where this pay package was approved.
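The tranche arithmetic described above is simple enough to sanity-check in a few lines. The sketch below is an illustrative reconstruction of the milestone schedule as relayed on the show from the CNBC breakdown (first tranche at a $2 trillion market cap, nine more every $500 billion up to $6.5 trillion, then two more at $1 trillion increments up to $8.5 trillion); the function name is ours, and this is not Tesla's official plan text.

```python
# Illustrative sketch of the 12 market-cap milestones as described
# in the CNBC breakdown relayed on the show (values in trillions of
# dollars). A reconstruction for intuition, not Tesla's plan document.

def tranche_milestones():
    milestones = [2.0]                 # first tranche: $2T market cap
    # next nine tranches: every additional $0.5T, up to $6.5T
    while milestones[-1] < 6.5:
        milestones.append(round(milestones[-1] + 0.5, 1))
    # last two tranches: $1T increments, up to $8.5T
    while milestones[-1] < 8.5:
        milestones.append(round(milestones[-1] + 1.0, 1))
    return milestones

thresholds = tranche_milestones()
print(len(thresholds))  # 12 tranches in total
print(thresholds)       # 2.0 through 6.5 in half-trillion steps, then 7.5, 8.5
```

Running it confirms the numbers quoted in the segment line up: twelve tranches, with the final one unlocking at an $8.5 trillion market cap.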
[00:29:11] So this is now moving forward. It's the comments he made, though, during the shareholder meeting that lead to why we're connecting this to the Mustafa thing. So here's what Elon said: People often talk about eliminating poverty, giving everyone amazing medical care. There's actually only one way to do that, and that's the Optimus robot.
[00:29:32] Musk later doubled down: Optimus will actually eliminate poverty. The robots would increase the global economy by a factor of 10, Musk said, or possibly even 100. The Optimus robot will have five times the productivity of a human per year, he predicted, because it would be able to operate 24/7. And this is a direct quote:
[00:29:53] I came to the conclusion that the only way, the only way to get us out of the debt crisis and to prevent America [00:30:00] from going bankrupt is AI and robotics. He then said, AI and robots will replace all jobs. Working will be optional, like growing your own vegetables instead of buying them from the store. But the one that directly leads into the Mustafa conversation: he gets asked a question by a guy in the audience.
[00:30:18] You can watch this video for yourself. He said: Long term, the AI is gonna be in charge, to be totally frank, not humans. If AI vastly exceeds the sum of human intelligence, it is difficult to imagine that any humans will actually be in charge. So we just need to make sure that the AI is friendly. Hmm. So that was Elon last week.
[00:30:39] Mustafa's tweet, tied to this article he wrote, said: It shouldn't be controversial to say AI should always remain in human control, that we humans should remain at the top of the food chain. That means we need to start getting serious about guardrails now, before superintelligence is too advanced for us to impose them.[00:31:00]
[00:31:00] So that's what I think, again, they've been working on all this, but it's not like Elon's perspective is new. He's just point-blank saying it now. But this has been talked about for a while. And so it's interesting, because now you have Microsoft, which, I don't know, I'm trying to think about this for a second.
[00:31:17] This is probably the first major AGI company to come out and just directly say this. Like, Anthropic has sort of said some similar things, but I also feel like Dario's just sort of also under the assumption, like, yeah, the AI's just probably gonna be in control at some point. So this is a very different position than the other labs have taken,
[00:31:35] I would say. I'm trying to think if Google has directly addressed this.
[00:31:40] Mike Kaput: Well, to that point too, have you heard of anyone... I think he called out in here, I'm trying to find the exact language, he called out basically, like, we're okay with fewer capabilities if it remains in human control.
[00:31:51] Like, have you heard of anyone say, well, we're just gonna not make it as smart as it can be so that we can remain in control? That seems new to me. No. [00:32:00] Yeah.
[00:32:00] Paul Roetzer: And that's the thing, it's one of the parts that keeps being brought up as, hey, at some point we may have to, like, come to an agreement as labs to not push forward.
[00:32:08] So it was talked about, you know, with OpenAI, and Sam said this recently. I've certainly heard Shane Legg and Demis talk about this at Google DeepMind, that there might come a point where we all have to work together. Yeah. Like, if someone proves that these things truly are becoming dangerous, and they're past our ability to align them, that there may be this reckoning where we all have to get in a room and say, you know what?
[00:32:33] We do need to pause this. Like, we are now afraid of what we've created. So they have talked about that. But to have someone like Mustafa, who leads AI at Microsoft, basically say that they're willing to put the brakes on when everybody else keeps accelerating, that might be the first time I've seen that, Mike.
[00:32:54] Yeah. Yeah. Where, like, a leader at a lab is saying, hey, listen, we're kind of getting to that point, and we need to [00:33:00] define this as humanist superintelligence. We're gonna give it a name. We are going to adjust our product roadmap and our own company strategy. We're gonna keep spending these tens of billions, hundreds of billions.
[00:33:11] We're gonna start building our own models. We're gonna do all of those things, but, like, we're gonna do it specifically to solve problems and make them domain-specific. They're not gonna be unbounded, they're not gonna just be able to do this runaway intelligence thing. Containment is necessary. Like, all this language.
[00:33:27] It's the first time I'm seeing that from, like, a lab that's saying, we're going to be willing to do this differently. Now, my whole takeaway, 'cause again, he kind of wraps it with their big thing, which is, like, an AI companion for everyone, which is weird wording, but like that everyone should have a perfect, cheap AI companion that helps you learn, act, and be productive and feel supported,
[00:33:45] medical superintelligence, and then plentiful clean energy. It was kind of a weird three things, honestly. Yeah. I was like, that's... I don't know. But that was how they kind of ended it. Here's where I keep coming back to, though, Mike: so Mustafa, [00:34:00] DeepMind co-founder, spent time at Google DeepMind, went and did Inflection with Reid Hoffman, who is not in the techno-optimist, like, race-ahead-and-build-this-stuff camp.
[00:34:11] So he is more similarly aligned. They did... what was the book called? Superintelligence, wasn't it? Or Superagency? What was it, Superagency?
[00:34:18] Mike Kaput: That sounds familiar. I think that's Hoffman's most recent.
[00:34:21] Paul Roetzer: All right. You can look that up real quick while we're doing this. But, so he came up with a book last year, where they basically wrote about everything he's saying here.
[00:34:28] The thing I can't... like, I keep trying to make this add up. I just don't know that Mustafa realizes this vision at Microsoft. Like, at some point, Microsoft has to compete with all these other companies. Like, Microsoft's the second most valuable company in the world, give or take; I think it bounces between second and third.
[00:34:49] If all the other labs keep racing forward and pushing superintelligence, and they're willing to commercialize it in ways that Microsoft isn't, like, at some point there's a fiduciary [00:35:00] responsibility to the shareholders to maybe not always follow through on this humanist superintelligence position.
[00:35:08] And so I can't help but feel like eventually this clashes with what Microsoft has to do to justify their investment in AI. Right. I may be totally wrong on that, and maybe Satya and everybody else at Microsoft is, like, 100% in lockstep with Mustafa on this. But when I read stuff like this, I generally feel like it's Mustafa's vision
[00:35:32] and belief, and not always a hundred percent aligned with, like, what Microsoft is actually going to do. I wanna believe it. Like, this is actually the most aligned position I personally would have. Not that my personal alignment matters that much here. But I feel like there just has to be this middle ground.
[00:35:53] I've always felt that, but I feel that in everything. Like I feel that in politics, I feel that like ai, like I always just feel like, why can't we just rationally [00:36:00] listen to all these sides and like, let's arrive at what a reasonable middle ground is. It doesn't have to be AI's gonna take over and let's just give in.
[00:36:05] It doesn't have to be, we have to stop everything to figure this out. It's like, no, what's the center part? Like, can we just all get in a room and actually find a reasonable way to do this? So, I don't know. That's the one thing I keep taking away from this, but it's a good read. Like, people should go read it. I just don't know how much it's gonna actually come to life.
[00:36:25] Mike Kaput: Well, it is why all these numbers we often talk about matter so much. Like, we talked last week about how Microsoft had to take billions of dollars of write-downs on its investment in OpenAI because they're not, you know, making any money yet. It's like, money isn't everything, but that's what they're incentivized to optimize for.
[00:36:42] So, correct. If they see a future where it's like, yeah, we can 10x this investment or, like, make good on it, but it requires Sam Altman going full bore, right, on AI or superintelligence, then you get into conflict, right?
[00:36:56] Paul Roetzer: Yeah. And so, even, I'm thinking out loud again: [00:37:00] Microsoft acquiring Ilya Sutskever and Safe Superintelligence
[00:37:05] becomes a much more interesting thing I hadn't really thought about before. Yeah. So Ilya obviously isn't a Microsoft guy; you know, he came from the Google tree as well, and then into OpenAI. But, like, Safe Superintelligence certainly aligns with this humanist superintelligence. So if you start looking at who's aligned, it's Anthropic-ish.
[00:37:26] So if I had to bucket these, like, Microsoft is now kind of in that certainly Safe Superintelligence, a little bit Anthropic mentality. Google DeepMind, I still believe, is truly about, like, the good of humanity and the scientific discovery. Yeah. But, like, I'm not sure yet how much they'll push
[00:37:43] superintelligence before they would arrive at a point like this where they would write this kind of essay. But they own, you know, the biggest share of Anthropic, I think at like 14% or more. I don't know. Like, I still think there's consolidation of labs here at some point. And so it's just interesting to start to think [00:38:00] about how the different labs look at this big idea of, like, what do we do when superintelligence starts becoming real or within reach?
[00:38:07] Mike Kaput: Well, now that you mention it, I could be convinced of an argument that this is just fishing for Ilya, or the next Ilya, right? Like, this is how you get talent. It's not only the billions of dollars people are spending on talent; the perspectives on the technology do really matter to these researchers.
[00:38:24] Paul Roetzer: Correct. Like, we're not gonna pay you what Zuck's gonna pay you at Meta, but
[00:38:28] Mike Kaput: come build the actually good super intelligence
[00:38:31] Paul Roetzer: instead
[00:38:32] Mike Kaput: of whatever they're doing. Yeah. It's the human centered thing. Yeah. All right.
[00:38:36] Google AI and the Future of Learning
[00:38:36] Mike Kaput: Our big third topic this week is that Google has published a paper that details more about how it's planning to integrate AI into global education.
[00:38:47] Now, this paper is titled AI and the Future of Learning, and in it, Google leaders outline how AI could help address a worldwide teacher shortage and declining academic outcomes while tailoring learning to [00:39:00] individual needs. And they argue AI's role in classrooms should be to amplify human teaching, not automate it.
[00:39:05] So they outline all sorts of ways that AI is going to have an impact, from their perspective, on the future of education. So a couple things that jump out in this paper. First, they outlined how their Gemini AI models are now grounded in what they call core learning science through their system called LearnLM.
[00:39:26] And this basically embeds proven pedagogical principles into the models, and in their words, this makes Gemini the world's leading model for learning. So Gemini can now act as a tutor, lab partner, or study coach across different tools like Search, YouTube, and Classroom. According to Google's data, Gemini outperforms other models on every learning science metric. And second, they kind of prescribe that AI-driven personalization at scale in education needs to be the norm.
[00:39:56] We need tools that adapt lessons, feedback, and pacing to [00:40:00] each student, while freeing teachers to focus on creativity and mentorship. They also call for new assessment models that reward reasoning, collaboration, and originality. Google actually argues that schools should maybe move towards oral exams, project portfolios and debates, which are areas AI can't easily simulate.
[00:40:18] And they also prescribe some systemic safeguards, so things like adding in safety filters, privacy protections for under-18 users, and red teaming specifically for educational contexts. So, Paul, the impression I got here was that Google's really giving some serious thought to how AI is going to reshape education as we know it.
[00:40:38] I mean, it's certainly just the initial flag planted in the ground here, but I'd be curious about your perspective, especially as someone building an education platform: how are you looking at what they're proposing and where this is going?
[00:40:52] Paul Roetzer: Yeah, so the reason I wanted to do this one as a main topic was a few things.
[00:40:56] The first one: I'd flagged it last week and then read it over [00:41:00] the weekend. And it's nothing, like, newsworthy in terms of some major announcements that they're making or some major shift. But I did read it from the perspective of, okay, we're trying to build an e-learning platform, and how is this gonna affect, like, professional education? How would that trickle into what we're doing with AI Academy by SmarterX?
[00:41:17] And so there was that. But then, you know, I'm a parent of seventh and eighth graders, and so I'm constantly battling with this integration of AI into education. I'm looking at what the schools are doing, where, you know, my daughter will go into high school next year. I'm assessing the high schools based on what their AI roadmap is and what their policies are.
[00:41:36] I talk to family and friends all the time who have kids at different levels, including higher education, where they're struggling with, like, it's not even integrated. My kid's not learning anything. Like, what are they gonna do for a job in a year when they come out? And then, as we highlighted in AI Pulse up front, we have a lot of educators in our community, a lot of educators in our audience, who are trying to drive innovation and AI adoption.
[00:41:56] So I think it's interesting to, from all those perspectives, to [00:42:00] look at what is Google saying and what are they talking about? And so, you know, I think when I was going through this, I'm trying to just get a hint of where are they gonna go with this? Like, where's the product gonna go? How is it gonna evolve?
[00:42:11] Like, my kids have Google Classroom; that's how all their assignments are done. That's how they submit everything. They use Google Docs, they use Sheets. And I was actually working with my son yesterday, on Sunday, him and a friend, on a pitch competition they have to do. And so we were in Google Slides, and then, they'd done a survey, and they went over and, you know, exported it into Sheets, and there's no Gemini in there, there's no, like, guided learning capability in the Classroom feature of Google.
[00:42:37] And I was like, ah, I wonder when that's coming, because I was actually hoping it was there, 'cause I was gonna show them how to talk to Gemini about the survey data. But I couldn't do it 'cause it wasn't embedded. So I think it's just really interesting. And so some of the things you highlighted, Mike, but I'll just call out a couple of the things I had boldfaced as I was going through this.
[00:42:54] So, they talk about AI unlocking human potential all around the world and tying to their overall [00:43:00] mission of making information universally accessible and useful. AI does present urgent challenges and unknowns that society must collectively reckon with. Their goal is to help improve learning outcomes by developing AI products that are grounded in that core learning science
[00:43:13] you mentioned, and in close partnership with the education community. They talked about learning being the bedrock of human potential and societal progress, equipping people with new skills, sparking curiosity, exchanging fresh ideas. They did get into, like, today's jobs demanding not just foundational skills, but advanced problem solving, collaboration, the ability to learn throughout life.
[00:43:33] And, like, that's interesting to me. It's like, well, how do we make sure they keep doing that, that students and even young professionals continue to have that curiosity? I was, oh, yeah. Okay. I don't wanna get into, like, naming names here. I was having a conversation recently where someone said something about, like, well, how are you doing school?
[00:43:52] And they're like, oh, I literally just use ChatGPT for every assignment. And this is, like, at a high school level. And so the friend [00:44:00] was like, well, you can't do that. Like, you're not learning anything. And they're like, yeah, whatever. And so my assumption is there's probably a lot of kids who are like that.
[00:44:06] Like, as long as they can get away with it, they're just gonna keep doing it. So, yeah, I don't know. I thought there's just a lot of interesting perspectives. So for educators, I would definitely read this. For parents, I think it's really good to be aware of how the tech companies are thinking about this, what they find to be important, and the different challenges that they're currently solving for.
[00:44:25] They did end with some interesting questions, like things that they're thinking about: how AI will change what we need to learn, or even what it means to learn; how might historical forms of evaluations and assessments change due to AI; how will the nature of teaching evolve; and how can AI facilitate new types of learning previously not possible?
[00:44:45] So I liked all of those ideas. And then one related note, Mike: I was listening to the Dwarkesh podcast with Andrej Karpathy recently. Yeah. I don't think I've mentioned this on the podcast yet. I mean, Andrej is always amazing to listen to, but he's working on, [00:45:00] like, the future of education himself.
[00:45:02] And so there was this part that I remember, like I stopped when I was listening to it and like rewound it and like listened to it again. And so they were talking about this assumption that like AGI does happen, that we get to the stage where the AI is at or above human level, at least average human level at basically all cognitive tasks.
[00:45:19] And so Dwarkesh said to Andrej, like, well, what happens then? Like, if we get to AGI, do we still need school? Do we still need education? Like, what does that look like? And he said: I often say that pre-AGI, education is useful. Post-AGI, education is fun. And in a similar way as people, for example, people go to the gym today, but we don't need physical strength to manipulate heavy objects, because we have machines for that.
[00:45:42] Mm. They go to the gym, like, why do they still go to the gym? It's because it's fun, it's healthy, and you look hot when you have a six pack. What I'm saying is it's attractive for people to do that in a certain very deep psychological evolutionary sense for humanity. And so I kind of think that education will play out in the same [00:46:00] way you'll go to school, like you go to the gym and that's, that's so interesting because.
[00:46:05] I think there's people who are just naturally curious, lifelong learners who will always seek education. They will always take courses. They will always go back to school and get another degree or just take some, you know, interesting. Or watch a YouTube channel about something like they, they wanna learn because it makes 'em feel fulfilled and good.
[00:46:23] And that's kind of what Andrej is saying, is, like, maybe that's what education becomes. It's just for the people who want to keep learning. And if you want to, you know, kind of peak as a human and let the AI do everything, like, you'll probably do that, just like you probably won't go to the gym and eat healthy and stuff.
[00:46:38] Like, so that, I don't know. I just thought it was like a really interesting perspective, sort of a, a related topic of the future of education.
[00:46:45] Mike Kaput: Well, you know what's interesting is, during that podcast as well, he mentioned something to the effect of: look, the times I have learned the best have been when I've had, like, a direct tutor helping me work through problems at the right pace for me.
And [00:47:00] they mentioned this in the Google paper. They even say, as part of their aspiration, ideally every student would spend ample time working in their zone of proximal development. Yeah. That's what they call it, which is the sweet spot of just-right learning challenges that lead to new skill growth. And, like, no knock on the education system,
[00:47:17] but that's, like, not very possible today. Right. So if you're thinking of it that way, you're almost like, we've been teaching with two hands behind our backs for 200 years, or longer. Yeah. So that's where the real excitement, I think, comes in. If AI can make possible that personalized tutoring or learning, like, we might all be able to learn far better than we ever have.
[00:47:39] Paul Roetzer: Yeah. And I've definitely spent time with a lot of educators who see this opportunity who know education is nowhere near reaching its potential to impact people. Yeah. Because it can't be personalized and it can't be adapted based on the way people learn. that's why things like Khan Academy and Duolingo have been so popular is it enables that kind of learning.
[00:47:58] And that's, like, what we're trying to do with [00:48:00] Academy, is, like, we reimagine the way to do this stuff. So, yeah, I mean, education touches all of us. If it's not you personally, it's your family, it's your friends, it's, you know, your coworkers, it's your employees if you're a leader. And so, thinking about this, how we do reskilling and upskilling in a professional environment, how we guide our kids at different levels of education, like, this is just a fundamentally critical topic as we move forward.
[00:48:23] So we're gonna, you know, try to do our best to, to start making it just a more regular part of what we discuss.
[00:48:28] Data Shows AI is Driving Layoffs
[00:48:28] Mike Kaput: Yeah, for sure. And if you needed any reassurance why this is such an important topic, our first rapid fire is definitely tangentially related to reskilling and upskilling, because unfortunately, US companies announced more than 153,000 job cuts in October, and this was the highest for this month in over two decades.
[00:48:48] And according to some new data from Challenger, Gray & Christmas, they show that AI is a factor here. They show that layoffs nearly tripled from the rate a year ago. [00:49:00] And the firm's Chief Revenue Officer, Andy Challenger, said the cuts reflect AI adoption, softening consumer and corporate spending, and rising costs, which are driving companies to tighten belts and freeze hiring.
[00:49:12] Now, we've talked about some of these layoffs. Amazon, Target, and Paramount have announced sweeping reductions, and each of them has cited automation and management restructuring. We talked about how UPS eliminated 34,000 operational roles, and year-to-date job cuts have topped 1 million, while hiring plans are apparently at their lowest since 2011.
[00:49:33] So some employers, like JPMorgan, say they'll redeploy workers affected by AI rather than reduce headcount. But it sounds like, based on this data, for many, finding new roles is getting harder. So, you know, they call this out explicitly, Paul. In the data, they call AI out. They say October's pace of job cutting was much higher than average for the month.
[00:49:52] Some industries are correcting after the hiring boom of the pandemic, but this comes as AI adoption, softening consumer and corporate [00:50:00] spending, and rising costs drive belt-tightening and hiring freezes. So, it seems like we've talked about this issue at length, but it does seem more and more like people are pointing to AI, at least as one factor.
[00:50:10] Paul Roetzer: Yeah, it's becoming this recurring topic every week, unfortunately. And I don't know, like, I really wanna be optimistic here, but I've been pretty consistent in saying I think there's gonna be some short-term pain. I don't know how short-term that is. Like, I don't know how long that short-term pain lasts. But I mean, I had some conversations last week.
[00:50:32] I was at a couple of major events, and there's more coming. Yeah. Like, there's ways to know that cuts are gonna be occurring that the general public and the economy isn't hearing about yet. And I'll just say, like, there's some indicators that we're just sort of at the leading edge of this, and it's, [00:51:00] in a way, imminent that in the next, like, three to six months, we're probably gonna see some pretty significant cuts.
[00:51:08] People may be a little bit more transparent about their connection to AI, but I have a relatively high level of confidence that we need to be having these conversations and preparing more for continued cuts that are going to be driven, at least in part, by AI. Yeah, I mean, like we always say on this podcast, we have to be realistic about this.
[00:51:32] Like, ignoring it is not gonna do anything. And I think we're probably past the point where we can just deny that AI's gonna have any impact on jobs and the economy. I mean, hopefully people have kind of accepted that by now, and we just have to move as quickly as we can on, like, what do we do about it?
[00:51:46] Like, we have to just be more proactive about this. It's not gonna stop anytime soon.
[00:51:52] Mike Kaput: Yeah. And in case people think we're harping on this, I mean, I don't think either of us gets any enjoyment from having to talk about it, but, like, look at that AI Pulse [00:52:00] survey. Yeah. It's not just us. 65% of the audience thinks it's an existential problem either today or in the near term, within the next one to two years, and they're not just taking that from us.
[00:52:08] So
[00:52:09] Paul Roetzer: Yeah, I would happily lead every episode saying AI is creating jobs, yes, and here's where it is, and here's what the data is telling us. Like, trust me, I am anxiously awaiting the day when we can switch gears and start talking about the growth and innovation and jobs that are being created.
[00:52:25] That is not currently in sight. Like, there's isolated... certainly it's creating new roles. Like, we talk about this all the time. There are new roles that are being created. They're just, yeah, nowhere near at the level at which jobs are going to disappear in the coming months.
[00:52:43] Coca-Cola’s AI Christmas Ad Generates Controversy
[00:52:43] Mike Kaput: All right, our next rapid fire topic: a year after its first AI-generated holiday commercial drew backlash, Coca-Cola is trying again, and they say that this time they've gotten it right, or even more right, from their perspective.
[00:52:56] So the company's new seasonal ad is, once again, [00:53:00] AI-generated. It mimics the style and tone of past Coca-Cola commercials. It's got some cartoony animals watching the brand's famous red trucks travel through the snow, and a reveal of Santa Claus stepping out of one of the trucks. And this was produced by the LA-based AI studio Secret Level, and the new commercial was built almost entirely with generative AI tools.
[00:53:21] Multiple models helped shape the concept, visuals, and animation style. Every frame was generated and refined through AI prompts guided by a small team of human creatives. And Coke's Global VP for Generative AI says that the craftsmanship on this ad is 10 times better than the previous ad. There are advances that now allow for more natural emotion and movement in the videos.
[00:53:44] And if you recall, last year's ad drew controversy for using AI instead of people to generate the company's kind of iconic Christmas creative. And this time is no different. Viewers have criticized the visuals. They've questioned the trade-off [00:54:00] between speed, cost, and artistic value. Critics have raised environmental and labor concerns, warning that AI-driven campaigns could accelerate job displacement across the creative industry.
[00:54:11] So, Paul, it seems like it is becoming a holiday tradition for people to get pissed at Coca-Cola about their AI ads. That doesn't seem to be changing. It is interesting; in a couple of the articles here, The Hollywood Reporter has some really good in-depth dives with Coke and with the agency about how they made these. Super fascinating. But they're pretty unapologetic.
[00:54:32] Paul Roetzer: I think they have to be like, I, I think you just have to own this. the way that think this plays out is people who watch this ad and don't know was AI generated and don't like, have ho, you know, strong feelings about AI one way or the other. Probably think like, this is a really good ad. Like that was beautiful.
[00:54:48] Like, it put me in the holiday spirit. And then there's gonna be people who watch it, who know it's AI generated, know the backstory. And they either like or don't like AI. They either are the what-was-the-training-data, you know, reply guys who are just like, [00:55:00] every single AI thing is like, what was the training data?
[00:55:02] What was the training data? Which I get. Like, I understand if that's the talking point. I just feel like Coke is gonna go out into that frontier. They're gonna piss off 40% of the population. And then at some point people are just gonna stop being pissed, and they're just gonna accept that this is evolved creativity.
[00:55:23] Mike Kaput: Yep.
[00:55:23] Paul Roetzer: And somebody's gotta do it. Like, people have to go out into that frontier and be willing to be different and do what they consider to be an evolved form of creativity, and they're gonna get crap for it. Like, I don't know that I saw a single positive post on X about this. Yeah. Like, X is definitely a bubble of people who are challenging this evolution.
[00:55:47] And again, this isn't me taking a personal stance, like saying just get over it and let's move on with life. I totally get it. Again, I've said this a hundred times: my wife is an artist, my daughter's an artist, I'm a writer by trade. Like, I get it. [00:56:00] That this is not a black and white, like, zero-and-one thing. It's not a binary decision.
[00:56:06] It is like there's a fuzzy middle ground here. Yeah. There's things that are messy and uncomfortable, but brands are gonna keep moving. I think at some point, society just sort of, like, it just becomes creativity. Like, it's different and it's gonna take a little while. But, I don't know. I mean, good for Coke for standing on the brand, like, this is what we're doing, and then not backing down.
[00:56:30] So,
[00:56:31] Mike Kaput: yeah. Yeah. I think this is one of those areas too. I think you just have, and rightly so, so much emotion and personality and identity tied up in some of the skills here and the creativity of it and the humanness of it, that this is where you start to see some of that backlash in this area especially.
[00:56:47] Paul Roetzer: Yeah. And that's the thing, is, like, the creatives are using the tools available to them and they're doing incredible things with those tools. Is it their fault that they're trained on data that was stolen? Like, are they just not supposed to [00:57:00] use the tools like this? Is that weird?
[00:57:02] There's no perfect answer, right? There's what people believe, like, there's these subjective opinions about was it right or was it wrong, and things like that. Versus, you know, I think there's a lot of people, and I'd probably put myself in this bucket, of, like, it just is. Like, I can't change how they trained the models.
[00:57:21] I can choose, as a company, are we gonna use the models or aren't we? And we choose to use the models. So I suppose, in a way, I'm saying, well, if I don't believe they should be allowed to do this, then we probably shouldn't be using ChatGPT and Google Gemini at our company. And that doesn't seem like a sustainable business decision.
[00:57:39] So it's hard. Like, I get why people would feel strongly on both sides of this.
Amazon and Perplexity Feud Over AI Shopping Agents
[00:57:46] Mike Kaput: All right, next up. Amazon has filed a federal lawsuit against Perplexity, accusing the startup of illegally using its new AI agent to shop on behalf of users. So at the center of this dispute is Comet, Perplexity's AI browser, which can [00:58:00] log into users' Amazon accounts, search for products, and complete purchases automatically.
[00:58:05] Amazon says this violates its terms of service. The company says Comet disguises itself as a normal Chrome user, which degrades the shopping experience and creates privacy risks. Perplexity's CEO argues instead that agents should have the same rights and responsibilities as human users acting on their own behalf, and called Amazon's suit a bully tactic to suppress competition.
[00:58:29] So I'm curious about the nuances you see here, Paul, because it doesn't sound like Amazon, from what I was reading, is totally against agents. But their CEO Andy Jassy did say on an earnings call last week that the current customer experience for AI shopping agents was, quote, not good. He cited a lack of personalization and user-specific shopping history, and bungled delivery estimates and pricing.
[00:58:50] But he did seem to hint that he thinks there's gonna be ways they find to partner with companies related to agents. So what do you think here?
[00:58:59] Paul Roetzer: I don't [00:59:00] think they have a choice. I mean, this is the future. Agents are gonna be able to do shopping. So obviously Amazon has to have a play here. This is a tricky one for me.
[00:59:09] Like, Aravind, the CEO, has a very publicly available, you can go look at it yourself, history of questionable legal and ethical choices. So I've referenced this before. Like, he brags openly on podcasts about the fact that he built a company that scraped LinkedIn data knowing it was against the terms of use, but everyone else was doing it.
[00:59:28] So he justified it by everyone else doing it. So again, it's like, I have a hard time feeling empathy for Perplexity, who knowingly abuses rules and terms of use. So anytime that they're talking about, like, oh, we're, you know, the victim here, it's like, come on. You guys have made an entire company worth billions of dollars on stealing from people.
[00:59:49] Like, this is what you do. So take that for what it's worth. Do I think Amazon is in the right? I don't know. Like, I'm not a lawyer. I didn't analyze [01:00:00] exactly what the lawsuit is here. I went through, like, the basics of it. But this bullying-is-not-innovation thing, I just kind of laughed at.
[01:00:07] But regardless, I'll pull a couple excerpts. So this is from the article Perplexity published when they got this lawsuit served to them: the point of technology is to make life better for people. We call it innovation, but it's just the constant process of asking how to make things better.
[01:00:21] Bullying, on the other hand, is when large corporations use legal threats and intimidation to block innovation and make life worse for people. AKA, when we steal shit and we break rules, they try and stop us from doing it, and that's bullying, is basically what this is. So, like, the rest of it's just like, okay, whatever.
[01:00:39] Like, then they go into how this is just the future and Amazon should just accept it, and agents are gonna do this stuff. Which is probably right. Like, they probably will. This is where it's gonna go. But that doesn't mean you just get to rewrite the rules. And so this is like this approach from technology overall.
[01:00:57] It's like, well, we're just gonna steal the 7 million [01:01:00] books because we think we should be allowed to, right, even though it's against the law. It's like, okay, but that's not how the law works. That's not how terms of use work. Like, there are standards in society that we're kind of supposed to abide by, but that just isn't the techno-optimist, accelerate-at-all-costs position.
[01:01:16] So again, I'm not trying to make, like, you know, who's the villain, who's the hero here. I'm not trying to play that role. But, like, don't just look at this bullying-stopping-innovation thing on the surface and think, oh my God, Perplexity is right, like, Amazon's the bully. Like, no.
[01:01:32] Right. There is way more to this story here. That being said, when I think to the future, and we're probably gonna do, like, a 2026 AI trends special episode, we'll talk a little bit about this. But one of the ones, Mike, that I had kind of jotted to myself is this agent-to-agent communications and commerce.
[01:01:48] This was like a couple months ago when I was thinking about this, when I said, businesses must solve for consumers, using agents to gather information, engage with brands, and make purchases. This may alter how we design user experiences on the web [01:02:00] and in apps. And it could rapidly evolve marketing, sales, and customer experience strategy.
[01:02:03] So this is like, we're all gonna face this. This is Amazon doing it now, but, like, what's stopping an agent coming to your brand's website and interacting with your chatbot and, like, annoying you, or going through processes where you're losing insights into, like, the analytics and you have no idea what users are.
[01:02:19] Like, this is gonna happen to all of us at kind of, like, a micro level. So it's, I guess, a good topic from that perspective.
[01:02:25] Mike Kaput: Yeah. It seems like such a fundamental question, right? Because I just can't help but see agent to agent, like rewriting every assumption of marketing Yeah. And of e-commerce, because it's all based on the brand at some level mediating the experience and that goes away with agents.
[01:02:42] Paul Roetzer: Yeah. And I haven't really heard of a brand that's figured this out yet, in part because it requires a leap of assumption about the agents becoming more autonomous and reliable, which they aren't right now. Right. And so, but that doesn't mean they won't be by Christmas season, by like holiday season 2026, like 12 months from now.
This [01:03:00] may be a very real thing affecting, like, every website of our listeners. So it's like we're almost, sort of, what I always say, like, trying to see around the corner, like, 3, 6, 12 months out. Yeah, this is one of those where, like, we're kind of looking around the corner at the moment, and it's gonna become very real to a lot of other people in the near future.
[01:03:18] Ilya Sutskever Deposition
[01:03:18] Mike Kaput: Next up, we have some pretty juicy gossip, some inside baseball here in the world of AI. A newly released court deposition has shed light on how close OpenAI once came to merging with its top rival Anthropic, and how some of the internal power struggles played out when they tried to oust Sam as CEO at OpenAI.
[01:03:38] So this testimony actually comes from OpenAI co-founder and former chief scientist Ilya Sutskever, who spoke under oath in a case tied to Elon Musk's lawsuit against OpenAI. So in this, Sutskever revealed that after Altman was fired in 2023, Anthropic expressed excitement about a potential merger that could have installed [01:04:00] Anthropic's Dario Amodei as OpenAI's CEO.
[01:04:02] The talks, however, collapsed due to what he described as practical obstacles, which were likely investor complications from both companies' massive funding rounds. And interestingly, Ilya's deposition also detailed deep mistrust within OpenAI's leadership, including a 52-page memo he wrote accusing Altman of a quote, consistent pattern of lying and manipulating colleagues.
[01:04:27] Now, despite initiating Altman's removal, Ilya later supported his reinstatement, which, as we discussed, came after nearly all of the company's employees threatened to quit. So Paul, you had mentioned to me this was getting a lot of attention in AI circles. Yeah. Online. Like, what's getting people talking the most about this?
[01:04:47] Paul Roetzer: I think just the fact that there's on-the-record testimony now about this. So when I first looked at it, there wasn't a lot that jumped out to me that wasn't in Karen Hao's book, Empire of AI. Yeah. So, like, one, if you read Empire of AI, you probably get the gist [01:05:00] of this. That being said, there was just some interesting context.
[01:05:03] So as you mentioned, he was a co-founder, and we've talked about Ilya a topic or two ago, with Musk, Altman, Brockman, and others. He then left to start Safe Superintelligence, which was valued at $32 billion in spring of 2025. A couple interesting notes. So Helen Toner, who was a board member when Sam was ousted, and Helen is someone I follow closely online.
[01:05:25] She did tweet: for the record, for those dissecting Ilya's deposition, this part is false. And she was referring to her being the one that had sort of suggested this merger with Anthropic. She said, I wasn't the one who made the board-Anthropic call happen, and I disagree with his recollection that board members other than him were supportive of a merger.
[01:05:44] My feeling at the time was that we were exploring crazy options because the situation was crazy. I had no intention of trying to merge the company with Anthropic, and as Ilya says in his deposition, the possibility was actively on the table for an extremely short time. [01:06:00] Now, keep in mind this whole thing happened over like 72 hours.
Yeah. Like, Sam's fired, they go through all these different things, and then Sam's back. She then said, I'm not planning to weigh in on details about November 2023 whenever they come up, but since this was sworn testimony and about my personal views, I wanted to make clear that it's incorrect. So then.
[01:06:20] Part of the deposition I did read was referring to this 52-page memo you mentioned, Mike. Yeah. And in essence, Ilya was asked by the independent board directors. At the time, there were three independent board directors, I think, and then Sam, Greg, and Ilya, I believe, were the other board members.
So there were two or three independent board members. So the attorney said, all right, then you sent this 52-page memo to the independent directors of the board, correct? He said, correct. He said, why didn't you send it to the entire board? Sutskever replies, because we were having the discussions with the independent directors only. The attorney:
Okay. Why didn't you send it to Sam Altman? Sutskever: because I felt that had he become aware of these discussions, he would just find a way [01:07:00] to make them disappear. He continued: so the way I wrote this document, the context of the document is that the independent board members asked me to prepare it.
[01:07:08] And I did. And I was pretty careful, because at this point Ilya had raised concerns, as had Mira Murati, that Sam might not be the right leader moving forward. Most of the screenshots that I have, most or all, I don't remember, I got them from Mira Murati. So Mira was also sending screenshots to Ilya demonstrating Sam's inability to be an effective leader.
It made sense to include them in order to paint a picture from a large number of small pieces of evidence. The attorney: okay, which independent directors asked you to prepare your memo? Sutskever said it was most likely Adam D'Angelo. Adam is the only remaining board member, by the way. Yeah. So after all this hoopla, Adam remained in as an independent board member.
The attorney said, all right, and the document that you prepared, the very first page says, quote, Sam exhibits a consistent pattern of lying, undermines his execs, and pitting his execs against one another. That was clearly your view [01:08:00] at that time, correct? And Sutskever says, correct. The attorney: and did you want them to take action over what you wrote? Sutskever:
I wanted them to become aware of it, but my opinion was that action was appropriate. And what action did you think was appropriate? Termination, said Sutskever. Okay. You drafted a similar memo that was critical of Greg Brockman, correct? He said yes. And you sent that to the board? Yes. Does a version of your memo about Greg Brockman exist in any form?
He said yes, somebody has it. I didn't know about the Greg Brockman one. That was the first time I heard about it. So my whole takeaway with this, Mike, is if anybody watched The Social Network and the crazy Facebook story. Yes. Like, this is that on steroids. Like, the OpenAI movie is going to be insane.
Mike Kaput: Yeah, no kidding. This just feels like an episode of Game of Thrones. Yeah, it's wild.
[01:08:48] Apple Nears Google Deal
Mike Kaput: Alright, next up. According to Bloomberg, Apple is planning to pay about a billion dollars a year for an ultra-powerful 1.2 trillion parameter artificial intelligence model developed by Google that would help run its long-promised overhaul of the Siri voice assistant.
[01:09:04] According to people with knowledge of the matter, Bloomberg went on to say, quote, following an extensive evaluation period, the two companies are now finalizing an agreement that would give Apple access to Google's technology. So reportedly, Google's AI will handle Siri's summarizer and planner functions, helping the assistant better understand context and execute complex tasks internally.
[01:09:27] This overhaul is codenamed Linwood, part of a broader project known as Glenwood, led by Vision Pro creator Mike Rockwell and software chief Craig Federighi. And the redesigned Siri is slated to launch next spring as part of iOS 26.4. So interestingly, while Apple continues developing its own trillion parameter model for release next year, Google's AI will also apparently quietly run behind the scenes, powering a smarter, more capable Siri.
[01:09:56] So Paul, interesting. We're seeing some movement on this. I personally am just [01:10:00] still a little confused. Like, they're developing their own model too. Like, why are they even bothering, given that they're already in this position? Like, at some point, I mean, I'm not saying people should give up, but what are we doing here?
Paul Roetzer: Yeah. This is like, Apple and Google have this really unique history where they, you know, do collaborate on things. This isn't news. Like, it became news again because someone else started talking about this, but Mark Gurman broke this in, like, April of this year, that the companies were working on something like this together.
[01:10:29] So he came out with an updated story after last week when everybody else was just running with this, without crediting him with the fact that he'd already said this was gonna happen. And I think we talked about on the podcast like multiple times that this was the direction it was going. Yeah, I think they're just accepting that they're not a frontier lab.
[01:10:46] Like, they're not gonna build to compete with, like, what Gemini 3 is. 'Cause my guess is they're gonna be building at least, like, a version of what Gemini 3 is gonna make possible.
[01:10:55] Yeah.
[01:10:55] Paul Roetzer: and I think that Apple's gonna make a bet on smaller [01:11:00] models eventually are probably gonna be sufficient.
So, like, whatever Gemini 3, or whatever version of Gemini they're gonna get, that'll run this thing. They're probably looking out 18 to 24 months and saying, well, we'll be able to run a 10x smaller model on device, so it won't need to go to the cloud. Yeah. And so my guess is Apple is gonna focus in on that, like, building these more efficient, smaller models that can run on device. But in the interim, they need to fix Siri bad, and they've accepted that.
They just aren't gonna get there on their own. And so I think it makes a ton of business sense. And again, they've done deals like this before on search and maps and other things. So I think it's good. Like, I think Apple investors are just like, just fix it. Like, we don't care if it's your model or not.
Yeah. Yeah. As a user of Apple products, like, I don't care. Just make it work. It doesn't matter to me what it's running on. Just make it functional. So yeah, seems like the very logical choice from Apple.
[01:11:56] AI Companies Are Going on the PR Offensive
Mike Kaput: Alright, our last topic today. We're seeing some stories where tech [01:12:00] giants are working overtime to reframe the narrative around AI, with a wave of positive announcements about jobs, education, and economic impact.
[01:12:09] So first, Meta unveiled what it calls a $600 billion commitment to American infrastructure and jobs, touting its new AI-optimized data centers as engines of growth. The company has published a specific report on this, saying its US projects have already supported 30,000 skilled trade jobs and added 15 gigawatts of power capacity, and pledges to be water positive by
[01:12:32] 2030. So the message is that AI is not just about technological progress. They're tying it to American workers, communities and sustainability. Now, interestingly enough, on the same day, Google spotlighted a $5 million investment in Oklahoma to expand AI training and workforce programs through local partners.
Now, both of these moves are happening amid heightened criticism that big tech's AI push threatens [01:13:00] privacy, jobs, public trust, as well as the environment. So we're kind of talking about this now, Paul, because we wanted to highlight how they're actually starting to try to get ahead of possible societal backlash, like you had mentioned.
Yeah. In our internal chat, in the case of Meta, it was, kind of, quote, them trying to get out ahead of societal revolt by positioning it as a job creator and responsible use of energy.
[01:13:23] Paul Roetzer: Yeah. The first time I've actually heard the term water positive, so I, I don't know if that's like a common phrase, but I had to look it up.
So it's when a concept or an entity restores or adds more water to a watershed than it consumes or depletes, which, I mean, obviously makes a lot of sense. But I'd never heard it talked about like that. Yeah, so, I don't know, a pretty safe job for a while is gonna be lobbying or working in community investment at these frontier labs.
Like, they're gonna be working overtime on lobbying in Washington, and they're gonna pour money [01:14:00] into community investment programs that they can invest into cities, like what we saw Google do with Oklahoma. And PR staff for the frontier labs, they're gonna be very busy. So, and again, I came from that world.
I haven't done lobbying, but I have done community investment. I led PR programs for some brands through our agency. I get how this all works, and I'm just telling you, if you don't live in this world or haven't lived in this world, this is how it starts. Yeah. You look at these very specifically timed, very crafted messages, very strategic investments in certain areas, specifically places where data centers are gonna be built, where it's gonna affect the environment.
Like, this is PR at its core. You try and affect the way people perceive things, and not in a negative way. I'm not saying this is a negative thing, this is just what PR people do. You try and affect perceptions and behaviors, and you do that in an environment like this through lobbying and [01:15:00] community investment programs.
So I think we are gonna see a flood of stories like these ones. Yeah. Coming from the major AI companies.
Mike Kaput: From a total realist perspective, no value judgment on any side of this, the reason this matters too is these are the talking points you are going to hear next year as we get into political season.
Yeah. You know, for sure. Whatever firestorm we get into over this, you're gonna hear about jobs, water positive, all this stuff. I bet you, you're gonna hear these terms again and again.
Paul Roetzer: Yeah. We are gonna hear endlessly about how many jobs are created by data centers. Yeah. Like, that is, like, the most obvious talking point. We're already seeing it. How sustainable those jobs are versus, like, one-time things to build the data center
and then they aren't needed after the, you know, 12 months, whatever. But yeah. So again, like a lot of times on this podcast, we're just trying to zoom you out and show how this all is connected. And you can see from the first topic, the impact of, like, OpenAI looking for that government support, all the way back to this, like, lobbying and community [01:16:00] investment.
Like, it's all interrelated. And, like, you know, it's a fun part for me as a storyteller, Mike, is, like, every week we do this podcast, I kind of think about the macro. Like, I think about the overall picture and the story, and then, like, the topics become, like, this part of the story, and something that's not obvious, the connection between them.
But I think that people that listen enough and, like, think about this the way we think about it each week, you just start to, like, connect the dots, and you see how this all is connected, and it's a fascinating picture to be able to look at. It's not always, like, optimistic, or it doesn't always make you, like, feel super excited about the near-term future, but it's nice to at least see the picture when so many other people are just, like, you know, oblivious to what's happening.
Mike Kaput: Right. I couldn't agree more. That's my favorite part of this, is just connecting the dots and getting to learn about so many different areas that AI touches. Yep. So to that end, Paul, thank you for connecting the dots for us this week. That's all we've got this week, just a couple quick final announcements.
If you have [01:17:00] not yet subscribed to the Marketing AI Institute newsletter, it is called This Week in AI, and we run down all the stories we talked about today, plus all the news we couldn't fit in the episode. So go to marketingaiinstitute.com/newsletter and you get a nice weekly brief of all the AI news you need to stay on top of these critical issues.
[01:17:20] I would also mention if you can please give us a review on your podcasting platform of choice, it helps us get better, improve the show and get into the ears of more listeners over here. So if you have not taken a second to leave us a review, we'd greatly appreciate it. So Paul, appreciate you, appreciate you breaking down everything for us again this week.
Paul Roetzer: Yeah, and I'll add two more quick ones, Mike. Again, the Pulse survey. We'd love to have your feedback on the AI Pulse survey. And then I do a free monthly Scaling AI webinar. So we have a class coming up on Friday, November 14th. We'll drop the link in the show notes. That is in partnership with Google Cloud, part of our AI Literacy Project.
I teach a [01:18:00] free Intro to AI each month and a free Scaling AI class each month. So November 14th at noon is a free, it's a Zoom webinar, so it's easy to join. Again, we'll put the link in the show notes, but if you've got time on Friday and you want to go through five steps for scaling AI heading into 2026 planning, it's a great time to go through that class.
All right. Thanks, Mike. We will be back with you all next week. Thanks for listening to The Artificial Intelligence Show. Visit SmarterX.ai to continue on your AI learning journey, and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, and earned professional certificates from our AI Academy, and engaged in the Marketing AI Institute Slack community.
[01:18:47] Until next time, stay curious and explore ai.
GPT for every assignment. Like, and this is like at a high school level. and so the friend [00:44:00] was like, well, you can't do that. Like, you're not learning anything. And they're like, yeah, whatever. And so my assumption is there's probably a lot of kids who are like that.
[00:44:06] Like, as long as they can get away with it, they're just gonna keep doing it. So, yeah, I don't know. I thought there's just a lot of interesting perspectives. So for educators, I would definitely read this. For parents, I think it's really good to be aware of how the tech companies are thinking about this, and what they find to be important, and the different challenges that they're currently solving for.
They did end with some interesting questions, like, things that they're thinking about: how AI will change what we need to learn, or even what it means to learn; how might historical forms of evaluations and assessments change due to AI; how will the nature of teaching evolve; and how can AI facilitate new types of learning previously not possible?
So I liked all of those ideas. And then one related note, Mike. I was listening to the Dwarkesh podcast with Andrej Karpathy recently. Yeah. I don't think I've mentioned this on the podcast yet. It is, I mean, Andrej is always amazing to listen to, but he's working on [00:45:00] like, the future of education himself.
[00:45:02] And so there was this part that I remember, like I stopped when I was listening to it and like rewound it and like listened to it again. And so they were talking about this assumption that like AGI does happen, that we get to the stage where the AI is at or above human level, at least average human level at basically all cognitive tasks.
And so he said to Andrej, he's like, well, what happens then? Like, if we get to AGI, like, do we still need school? Do we still need education? Like, what does that look like? And he said, I often say that pre-AGI education is useful, post-AGI education is fun. In a similar way as people, for example, people go to the gym today, but we don't need physical strength to manipulate heavy objects because we have machines for that.
Mm. They go to the gym. Like, why do they still go to the gym? It's because it's fun, it's healthy, and you look hot when you have a six-pack. What I'm saying is, it's attractive for people to do that in a certain very deep psychological, evolutionary sense for humanity. And so I kind of think that education will play out in the same [00:46:00] way. You'll go to school like you go to the gym. And that's so interesting, because
I think there's people who are just naturally curious, lifelong learners, who will always seek education. They will always take courses. They will always go back to school and get another degree, or just take some, you know, interesting course, or watch a YouTube channel about something. Like, they wanna learn because it makes 'em feel fulfilled and good.
And that's kind of what Andrej is saying, is, like, maybe that's what education becomes. It's just for the people who want to keep learning. And if you want to, you know, kind of peak as a human and let the AI do everything, like, you'll probably do that, just like you probably won't go to the gym and eat healthy and stuff.
So, I don't know. I just thought it was, like, a really interesting perspective, sort of a related topic on the future of education.
[00:46:45] Mike Kaput: Well, you know what's interesting is during that podcast as well, he mentioned something to the effect of, look like the times I have learned the best have been when I've had like a direct tutor helping me work through problems at the right pace for me.
And [00:47:00] they mentioned in the Google paper, they even say, as part of their aspiration, ideally every student would spend ample time working in their zone of proximal development. Yeah. That's what they call it, which is the sweet spot of just-right learning challenges that lead to new skill growth. And, like, no knock on the education system,
But that's, like, not very possible today. Right. So if you're thinking of it that way, you're almost like, we've been teaching with two hands behind our backs for 200 years, right, or longer. Yeah. So that's where the real excitement, I think, comes in. Can AI make possible that personalized tutoring or learning? Like, we might all be able to learn far better than we ever have.
[00:47:39] Paul Roetzer: Yeah. And I've definitely spent time with a lot of educators who see this opportunity, who know education is nowhere near reaching its potential to impact people. Yeah. Because it can't be personalized, and it can't be adapted based on the way people learn. That's why things like Khan Academy and Duolingo have been so popular, is it enables that kind of learning.
[00:47:58] And that's, like, what we're trying to do with [00:48:00] Academy, is, like, we imagine the way to do this stuff. So yeah, I mean, education touches all of us. If it's not you personally, it's your family, it's your friends, it's, you know, your coworkers, it's your employees if you're a leader. And so thinking about this, how we do re-skilling and up-skilling in a professional environment, how we guide our kids at different levels of education, this is just a fundamentally critical topic as we move forward.
[00:48:23] So we're gonna, you know, try to do our best to, to start making it just a more regular part of what we discuss.
[00:48:28] Data Shows AI is Driving Layoffs
[00:48:28] Mike Kaput: Yeah, for sure. And if you needed any reminder of why this is such an important topic, our first rapid fire is definitely tangentially related to re-skilling and up-skilling, because unfortunately, US companies announced more than 153,000 job cuts in October, and this was the highest for this month in over two decades.
[00:48:48] And according to some new data from Challenger, Gray & Christmas, they show that AI is a factor here. They show that layoffs nearly tripled from the rate a year ago. [00:49:00] And the firm's chief revenue officer, Andy Challenger, said the cuts reflect AI adoption, softening consumer and corporate spending, and rising costs, which is driving companies to tighten belts and freeze hiring.
[00:49:12] Now, we've talked about some of these layoffs. Amazon, Target, and Paramount have announced sweeping reductions, and each of them has cited automation and management restructuring. We talked about how UPS eliminated 34,000 operational roles, and year-to-date job cuts have topped 1 million while hiring plans are apparently at their lowest since 2011.
[00:49:33] So some employers, like JPMorgan, say they'll redeploy workers affected by AI rather than reduce headcount. But it sounds like, based on this data, for many, finding new roles is getting harder. So, you know, they call this out explicitly in the data, Paul. They call AI out. They say October's pace of job cutting was much higher than average for the month.
[00:49:52] Some industries are correcting after the hiring boom of the pandemic, but this comes as AI adoption, softening consumer and corporate [00:50:00] spending, and rising costs drive belt tightening and hiring freezes. So, we've talked about this issue at length, but it does seem more and more like people are pointing to AI, at least as one factor.
[00:50:10] Paul Roetzer: Yeah, it is. It's becoming this recurring topic every week, unfortunately. And I don't know, like, I really wanna be optimistic here, but I've been pretty consistent saying I think there's gonna be some short-term pain. I don't know how short term that is. Like, I don't know how long that short-term pain lasts. But I mean, I had some conversations last week.
[00:50:32] I was at a couple of major events, and there's more coming. Yeah. Like, there's ways to know that cuts are gonna be occurring that the general public and the economy isn't hearing about yet. And I'll just say, there's some indicators that we're just sort of at the leading edge of this, and it's [00:51:00] in a way imminent that in the next, like, three to six months, we're probably gonna see some pretty significant cuts.
[00:51:08] People may be a little bit more transparent about their connection to AI, but I have a relatively high level of confidence that we need to be having these conversations and preparing more for continued cuts that are going to be affected, at least in part, by AI. Yeah, I mean, like we always say on this podcast, we have to be realistic about this.
[00:51:32] Like, ignoring it is not gonna do anything. And I think we're probably past the point where we can just deny that AI's gonna have any impact on jobs and the economy. I mean, hopefully people have kind of accepted that by now, and we just have to move as quickly as we can. Like, what do we do about it?
[00:51:46] Like, we have to just be more proactive about this. It's not gonna stop anytime soon.
[00:51:52] Mike Kaput: Yeah. And in case people think we're harping on this, I mean, I don't think either of us gets any enjoyment from having to talk about this, but like, look at that AI Pulse [00:52:00] survey. Yeah. It's not just us. 65% of the audience thinks it's either a near-term existential problem today or in the next one to two years, and they're not just taking that from us.
[00:52:08] So
[00:52:09] Paul Roetzer: Yeah, I would happily lead every episode saying AI is creating jobs. Yes. And here's where it is, and here's what the data is telling us. Like, trust me, I am anxiously awaiting the day when we can switch gears and start talking about the growth and innovation and jobs that are being created.
[00:52:25] That is not currently in sight. Like, there's isolated examples, certainly. It's creating new roles, like we talk about this all the time. Like, there are new roles that are being created. They're just, yeah, nowhere near at the level at which roles are going to disappear in the coming months.
[00:52:43] Coca-Cola’s AI Christmas Ad Generates Controversy
[00:52:43] Mike Kaput: All right, our next rapid fire topic. A year after its first AI-generated holiday commercial drew backlash, Coca-Cola is trying again, and they say that this time they've gotten it right, or even more right, from their perspective.
[00:52:56] So the company's new seasonal ad is, once again, [00:53:00] AI generated. It mimics the style and tone of past Coca-Cola commercials. It's got some cartoony animals watching the brand's famous red trucks travel through the snow. There's a reveal of Santa Claus stepping outta one of the trucks. This was produced by the LA-based AI studio Secret Level, and the new commercial was built almost entirely with generative AI tools.
[00:53:21] Multiple models helped shape the concept, visuals, and animation style. Every frame was generated and refined through AI prompts guided by a small team of human creatives. And Coke's global VP for generative AI says that the craftsmanship on this ad is 10 times better than the previous ad. There are advances that now allow for more natural emotion and movement in the videos.
[00:53:44] And if you recall, last year's ad drew controversy for using AI instead of more people to generate the company's kind of iconic Christmas creative. And this time is no different: viewers have criticized the visuals. They've questioned the trade-off [00:54:00] between speed, cost, and artistic value. Critics have raised environmental and labor concerns, warning that AI-driven campaigns could accelerate job displacement across the creative industry.
[00:54:11] So Paul, it seems like it is becoming a holiday tradition for people to get pissed at Coca-Cola about their AI ads. That doesn't seem to be changing. It is interesting, in a couple of the articles here, The Hollywood Reporter has some really good in-depth dives with Coke and with the agency about how they made these. Super fascinating, but they're pretty unapologetic.
[00:54:32] Paul Roetzer: I think they have to be. Like, I think you just have to own this. The way that I think this plays out is, people who watch this ad and don't know it was AI generated, and don't, you know, have strong feelings about AI one way or the other, probably think, like, this is a really good ad. Like, that was beautiful.
[00:54:48] Like, it put me in the holiday spirit. And then there's gonna be people who watch it who know it's AI generated, know the backstory, and they either like or don't like AI. They're the "what was the training data," you know, reply boys, who are just like, [00:55:00] every single AI thing is like, what was the training data?
[00:55:02] What was the training data? Which I get. Like, I understand if that's the talking point. I just feel like Coke is gonna go out into that frontier. They're gonna piss off 40% of the population, and then at some point people are gonna just stop being pissed, and they're just gonna accept that this is evolved creativity.
[00:55:23] Mike Kaput: Yep.
[00:55:23] Paul Roetzer: And somebody's gotta do it. Like, people have to go out into that frontier and be willing to be different and do what they consider to be an evolved form of creativity, and they're gonna get crap for it. Like, I don't know that I saw a single positive post on X about this. Yeah. Like, X is definitely a bubble of people who are challenging this evolution.
[00:55:47] And again, this isn't me taking a personal stance, like saying, just get over it and let's move on with life. I totally get it. Again, I've said this a hundred times: my wife is an artist, my daughter's an artist, I'm a writer by trade. Like, I get it. [00:56:00] This is not a black and white, like, zero and one thing. It's not a binary decision.
[00:56:06] There's a fuzzy middle ground here. Yeah. There's things that are messy and uncomfortable, but brands are gonna keep moving. I think at some point, society just sort of, like, it just becomes creativity. Like, it's different, and it's gonna take a little while. But, I don't know. I mean, good for Coke for standing on the brand, like, this is what we're doing, and then not backing down.
[00:56:30] So,
[00:56:31] Mike Kaput: yeah. Yeah. I think this is one of those areas too. I think you just have, and rightly so, so much emotion and personality and identity tied up in some of the skills here and the creativity of it and the humanness of it, that this is where you start to see some of that backlash in this area especially.
[00:56:47] Paul Roetzer: Yeah. And that's the thing, like, the creatives are using the tools available to them, and they're doing incredible things with those tools. Is it their fault that they're trained on data that was stolen? Like, are they just not supposed to [00:57:00] use the tools? Like, is that weird?
[00:57:02] There's no perfect answer, right? There's what people believe, like, there's these subjective opinions about was it right or was it wrong, and things like that. Versus, you know, I think there's a lot of people, and I probably put myself in this bucket, of like, it just is. Like, I can't change how they trained the models.
[00:57:21] I can choose, as a company, are we gonna use the models or aren't we? And we choose to use the models. So I suppose, in a way, I'm saying, well, if I don't believe they should be allowed to do this, then we probably shouldn't be using ChatGPT and Google Gemini to run our company. And that doesn't seem like a sustainable business decision.
[00:57:39] So it's, it's hard. Like I, I get why people would feel strongly on both sides of this.
Amazon and Perplexity Feud Over Agents
[00:57:46] Mike Kaput: All right, next up. Amazon has filed a federal lawsuit against Perplexity, accusing the startup of illegally using its new AI agent to shop on behalf of users. At the center of this dispute is Comet, Perplexity's AI browser, which can [00:58:00] log into users' Amazon accounts, search for products, and complete purchases automatically.
[00:58:05] Amazon says this violates its terms of service. The company says Comet disguises itself as a normal Chrome user, which degrades the shopping experience and creates privacy risks. Perplexity's CEO argues instead that agents should have the same rights and responsibilities as human users acting on their own behalf, and called Amazon's suit a bully tactic to suppress competition.
[00:58:29] So I'm curious about the nuances you see here, Paul, because it doesn't sound like Amazon, from what I was reading, is totally against agents. But their CEO Andy Jassy did say on an earnings call last week that the current customer experience for AI shopping agents was quote, not good. He cited a lack of personalization and user specific shopping history and bungled delivery estimates and pricing.
[00:58:50] But he did seem to hint at, he thinks there's gonna be ways they find to partner with companies related to agents. So what do you think here?
[00:58:59] Paul Roetzer: I don't [00:59:00] think they have a choice. I mean, this is the future. Agents are gonna be able to do shopping, so obviously Amazon has to have a play here. This is a tricky one for me.
[00:59:09] Like, Aravind, the CEO, has a very publicly available, you can go look at it yourself, record of questionable legal and ethical choices. So I've referenced this before. Like, he brags openly on podcasts about the fact that he built a company that scraped LinkedIn data knowing it was against the terms of use, but everyone else was doing it.
[00:59:28] So he justified it by everyone else doing it. So again, it's like, I have a hard time feeling empathy for Perplexity, who knowingly abuses rules and terms of use. So anytime they're talking about, like, oh, we're, you know, the victim here, it's like, come on. You guys have made an entire company worth billions of dollars on stealing from people.
[00:59:49] Like, this is what you do. So take that for what it's worth. Do I think Amazon is in the right? I don't know. Like, I'm not a lawyer. I didn't analyze [01:00:00] exactly what the lawsuit is here. I went through, like, the basics of it. But this "bullying is not innovation" thing, I just kind of laughed at.
[01:00:07] Regardless, I'll pull a couple excerpts. So this is from the article Perplexity published when they got this lawsuit served to them: "The point of technology is to make life better for people. We call it innovation, but it's just the constant process of asking how to make things better.
[01:00:21] Bullying, on the other hand, is when large corporations use legal threats and intimidation to block innovation and make life worse for people." AKA, when we steal shit and we break rules, they try and stop us from doing it, and that's bullying, is basically what this is. So, like, the rest of it's just, like, okay, whatever.
[01:00:39] Then they go into how this is just the future and Amazon should just accept it, and agents are gonna do this stuff, which is probably right. Like, they probably will. This is where it's gonna go. Doesn't mean that you just get to rewrite the rules. And so this is, like, this approach from technology overall.
[01:00:57] It's like, well, we're just gonna steal the 7 million [01:01:00] books because we think we should be allowed to. Right. Even though it's against the law. It's like, okay, but that's not how the law works. That's not how terms of use work. Like, there are standards in society that we're kind of supposed to abide by, but that just isn't the techno-optimist, accelerate-at-all-costs position.
[01:01:16] So again, I'm not trying to say who's the villain, who's the hero here. I'm not trying to play that role. But, like, don't just look at this "bullying stopping innovation" thing on the surface and think, oh my God, Perplexity is right, like, Amazon's the bully. Like, no.
[01:01:32] Right. There is way more to this story here. That being said, when I think to the future, and we're probably gonna do, like, a 2026 AI trends special episode where we'll talk a little bit about this, but one of the trends, Mike, that I had kind of jotted down to myself is this agent-to-agent communications and commerce.
[01:01:48] This was, like, a couple months ago, when I was thinking about this, when I said: businesses must solve for consumers using agents to gather information, engage with brands, and make purchases. This may alter how we design user experiences on the web [01:02:00] and in apps, and it could rapidly evolve marketing, sales, and customer experience strategy.
[01:02:03] So we're all gonna face this. This is Amazon dealing with it now, but, like, what's stopping an agent coming to your brand's website and interacting with your chatbot and, like, annoying you, or going through processes where you're losing insight into the analytics, and you have no idea what users are doing?
[01:02:19] Like, this is gonna happen to all of us at kind of, like, a micro level. So it's, I guess, a good topic from that perspective.
[01:02:25] Mike Kaput: Yeah. It seems like such a fundamental question, right? Because I just can't help but see agent to agent, like rewriting every assumption of marketing Yeah. And of e-commerce, because it's all based on the brand at some level mediating the experience and that goes away with agents.
[01:02:42] Paul Roetzer: Yeah. And I haven't really heard of a brand that's figured this out yet, in part because it requires a leap of assumption about the agents becoming more autonomous and reliable, which they aren't right now. Right. But that doesn't mean they won't be by Christmas season, by, like, holiday season 2026, 12 months from now.
[01:02:59] This [01:03:00] may be a very real thing affecting, like, every website of our listeners. So it's like, as I always say, we're trying to see around the corner, like, 3, 6, 12 months out. Yeah, this is one of those where we're kind of looking around the corner at the moment, and it's gonna become very real to a lot of other people in the near future.
[01:03:18] Ilya Sutskever Deposition
[01:03:18] Mike Kaput: Next up, we have some pretty juicy gossip, some inside baseball here in the world of AI. A newly released court deposition has shed light on how close OpenAI once came to merging with its top rival Anthropic, and how some of the internal power struggles played out when they tried to oust Sam Altman as CEO at OpenAI.
[01:03:38] So this testimony actually comes from OpenAI co-founder and former chief scientist Ilya Sutskever, who spoke under oath in a case tied to Elon Musk's lawsuit against OpenAI. So in this, Sutskever revealed that after Altman was fired in 2023, Anthropic expressed excitement about a potential merger that could have installed [01:04:00] Anthropic's Dario Amodei as OpenAI's CEO.
[01:04:02] The talks, however, collapsed due to what he described as practical obstacles, which were likely investor complications from both companies' massive funding rounds. And interestingly, Ilya's deposition also detailed deep mistrust within OpenAI's leadership, including a 52-page memo he wrote accusing Altman of a quote, consistent pattern of lying and manipulating colleagues.
[01:04:27] Now, despite initiating Altman's removal, Ilya later supported his reinstatement, which, as we discussed, came after nearly all of their employees threatened to quit. So Paul, you had mentioned to me this was getting a lot of attention in AI circles online. Yeah. Like, what's getting people talking the most about this?
[01:04:47] Paul Roetzer: I think just the fact that there's on-the-record testimony now about this. So when I first looked at it, there wasn't a lot that jumped out to me that wasn't in Karen Hao's book, Empire of AI. Yeah. So if you read Empire of AI, you probably get the gist [01:05:00] of this. That being said, there was just some interesting context.
[01:05:03] So as you mentioned, he was a co-founder, and we've talked about Ilya a topic or two ago, with Musk, Altman, Brockman, and others. He then left to start Safe Superintelligence, which was valued at $32 billion in spring of 2025. A couple interesting notes. So Helen Toner, who was a board member when Sam was ousted, and Helen is someone I follow closely online.
[01:05:25] She did tweet: For the record, for those dissecting Ilya's deposition, this part is false. And she was referring to her being the one that had sort of suggested this merger with Anthropic. She said: I wasn't the one who made the board-Anthropic call happen, and I disagree with his recollection that board members other than him were supportive of a merger.
[01:05:44] My feeling at the time was that we were exploring crazy options because the situation was crazy. I had no intention of trying to merge the company with Anthropic, and as Ilya says in his deposition, the possibility was actively on the table for an extremely short time. [01:06:00] Now, keep in mind this whole thing happened over like 72 hours.
[01:06:02] Yeah, like, Sam's fired, they go through all these different things, and then Sam's back. She then said: I'm not planning to weigh in on details about November 2023 whenever they come up, but since this was sworn testimony and about my personal views, I wanted to make clear that it's incorrect.
[01:06:20] So then, part of the deposition I did read was referring to this 52-page memo you mentioned, Mike. Yeah. And in essence, Ilya was asked about it by the independent board directors. At the time, there were three independent board directors, I think, and then it was Sam, Greg, and Ilya, I believe, who were the other board members.
[01:06:36] So there were two or three independent board members. So the attorney said: All right, then you sent this 52-page memo to the independent directors of the board, correct? He said: Correct. He said: Why didn't you send it to the entire board? Sutskever replies: Because we were having the discussions with the independent directors only. The attorney:
[01:06:52] Okay. Why didn't you send it to Sam Altman? Sutskever: Because I felt that had he become aware of these discussions, he would just find a way [01:07:00] to make them disappear. He continued: So the way I wrote this document, the context of the document, is that the independent board members asked me to prepare it, and I did.
[01:07:08] And I was pretty careful. Because at this point, Ilya had raised concerns with Mira Murati that Sam might not be the right leader moving forward. Most of the screenshots that I have, most or all, I don't remember, I got them from Mira Murati. So Mira was also sending screenshots to Ilya that were demonstrating Sam's inability to be an effective leader.
[01:07:29] It made sense to include them in order to paint a picture from a large number of small pieces of evidence. The attorney: Okay. Which independent directors asked you to prepare your memo? Sutskever said it was most likely Adam D'Angelo. Adam is the only remaining board member, by the way. Yeah. So after all this hoopla, Adam remained as an independent board member.
[01:07:48] The attorney said: All right. And the document that you prepared, the very first page says, quote, Sam exhibits a consistent pattern of lying, undermines his execs, and pitting his execs against one another. That was clearly your view [01:08:00] at that time, correct? And Sutskever says: Correct. The attorney: And did you want them to take action over what you wrote?
[01:08:07] Sutskever: I wanted them to become aware of it, but my opinion was that action was appropriate. And what action did you think was appropriate? Termination, said Sutskever. Okay. You drafted a similar memo that was critical of Greg Brockman, correct? He said: Yes. And you sent that to the board? Yes. Does a version of your memo about Greg Brockman exist in any form?
[01:08:25] He said: Yes, somebody has it. I didn't know about the Greg Brockman one. That was the first time I heard about it. So my whole takeaway with this, Mike, is if anybody watched The Social Network and the crazy Facebook story, yes, this is that on steroids. Like, the OpenAI movie is going to be insane.
[01:08:41] Mike Kaput: Yeah, no kidding. This just feels like an episode of Game of Thrones. Yeah, it's wild.
[01:08:48] Apple Nears Google Deal
[01:08:48] Mike Kaput: Alright, next up. According to Bloomberg, Apple is planning to pay about a billion dollars a year for an ultra-powerful 1.2-trillion-parameter artificial intelligence model developed by Google that would help run its long-promised overhaul of the Siri voice assistant.
[01:09:04] According to people with knowledge of the matter, Bloomberg went on to say, quote, following an extensive evaluation period, the two companies are now finalizing an agreement that would give Apple access to Google's technology. So reportedly, Google's AI will handle Siri's summarizer and planner functions, helping the assistant better understand context and execute complex tasks.
[01:09:27] Internally, this overhaul is codenamed Linwood, part of a broader project known as Glenwood, led by Vision Pro creator Mike Rockwell and software chief Craig Federighi. And the redesigned Siri is slated to launch next spring as part of iOS 26.4. So interestingly, while Apple continues developing its own trillion-parameter model for release next year, Google's AI will apparently quietly run behind the scenes, powering a smarter, more capable Siri.
[01:09:56] So Paul, interesting. We're seeing some movement on this. I personally am just [01:10:00] still a little confused. Like, they're developing their own model too. Like, why are they even bothering, given that they're already in this position? Like, at some point, I mean, I'm not saying people should give up, but what are we doing here?
[01:10:13] Paul Roetzer: Yeah. Apple and Google have this really unique history where they, you know, do collaborate on things. This isn't news. Like, it became news again because someone else started talking about this, but Mark Gurman broke this in, like, April of this year, that the companies were working on something like this together.
[01:10:29] So he came out with an updated story after last week, when everybody else was just running with this without crediting him with the fact that he'd already said this was gonna happen. And I think we talked about on the podcast, like, multiple times, that this was the direction it was going. Yeah, I think they're just accepting that they're not a frontier lab.
[01:10:46] Like, they're not gonna build to compete with, like, what Gemini 3 is. 'Cause my guess is they're gonna be building at least, like, a version of what Gemini 3 is gonna make possible.
[01:10:55] Yeah.
[01:10:55] Paul Roetzer: And I think that Apple's gonna make a bet that smaller [01:11:00] models are eventually probably gonna be sufficient.
[01:11:02] So, like, whatever Gemini 3, or whatever version of Gemini they're gonna get, that'll run this thing. They're probably looking out 18 to 24 months and saying, well, we'll be able to run a 10x smaller model on device, so it won't need to go to the cloud. Yeah. And so my guess is Apple is gonna focus in on that, like, building these more efficient, smaller models that can run on device. But in the interim, they need to fix Siri bad, and they've accepted that.
[01:11:28] They just aren't gonna get there on their own. And so I think it makes a ton of business sense. And again, they've done deals like this before on search and maps and other things. So I think it's good. Like, I think Apple investors are just like, just fix it. Like, we don't care if it's your model or not.
[01:11:44] Yeah. Yeah. As a user of Apple products, like, I don't care, just make it work. It doesn't matter to me what it's running on, just make it functional. So yeah, seems like the very logical choice from Apple.
[01:11:56] AI Companies Are Going on the PR Offensive
[01:11:56] Mike Kaput: Alright, our last topic today. We're seeing some stories where tech [01:12:00] giants are working overtime to reframe the narrative around AI, with a wave of positive announcements about jobs, education, and economic impact.
[01:12:09] So first, Meta unveiled what it calls a $600 billion commitment to American infrastructure and jobs, touting its new AI-optimized data centers as engines of growth. The company has published a specific report on this, saying its US projects have already supported 30,000 skilled trade jobs and added 15 gigawatts of power capacity, and pledges to be water positive by 2030.
[01:12:32] So the message is that AI is not just about technological progress. They're tying it to American workers, communities, and sustainability. Now, interestingly enough, on the same day, Google spotlighted a $5 million investment in Oklahoma to expand AI training and workforce programs through local partners.
[01:12:53] Now, both of these moves are happening amid heightened criticism that big tech's AI push threatens [01:13:00] privacy, jobs, public trust, as well as the environment. So we're kind of talking about this now, Paul, because we wanted to highlight how they're actually starting to try to get ahead of possible societal backlash, like you had mentioned.
[01:13:13] Yeah. This, in our internal chat, in the case of Meta, was, quote, them trying to get out ahead of societal revolt by positioning it as a job creator and responsible use of energy.
[01:13:23] Paul Roetzer: Yeah. That's the first time I've actually heard the term water positive, so I don't know if that's, like, a common phrase, but I had to look it up.
[01:13:30] So: a concept or an entity that restores or adds more water to a watershed than it consumes or depletes. Which, I mean, that obviously makes a lot of sense, but I'd never heard it talked about like that. Yeah, so I don't know. Like, a pretty safe job for a while is gonna be lobbying or working in community investment at these frontier labs.
[01:13:52] Like, they're gonna be working overtime on lobbying in Washington, and they're gonna pour money [01:14:00] into community investment programs in cities, like what we saw Google do with Oklahoma. And PR staff for the frontier labs, they're gonna be very busy. And again, I came from that world.
[01:14:13] I haven't done lobbying, but I have done community investment. I led PR programs for some brands through our agency. I get how this all works, and I'm just telling you, if you don't live in this world or haven't lived in this world, this is how it starts. Yeah. You look at these very specifically timed, very crafted messages, very strategic investments in certain areas, specifically places where data centers are gonna be built, where it's gonna affect the environment.
[01:14:40] Like, this is PR at its core. You try and affect the way people perceive things. And not in a negative way, I'm not saying this is a negative thing, this is just what PR people do. You try and affect perceptions and behaviors, and you do that in an environment like this through lobbying and [01:15:00] community investment programs.
[01:15:01] So I think we are gonna see a flood of stories like these ones. Yeah. Coming from the major AI companies,
[01:15:08] Mike Kaput: from a total realist perspective, no value judgment on any side of this. Like the reason this matters too is these are the talking points you are going to hear next year as we get into political.
[01:15:19] Yeah. You know for sure. Whatever firestorm we into over this, you're gonna hear about jobs, water positive, all this stuff. I bet you. You're gonna hear these terms again and again.
[01:15:29] Paul Roetzer: Yeah. We are gonna hear endless about how many jobs are created by data centers. Yeah. Like that is like the most obvious talking about, we're already seeing it, how sustainable those jobs are versus like one time things to build the data center
and then they aren't needed after, you know, 12 months, whatever. But yeah. So again, like a lot of times on this podcast, we're just trying to zoom you out and show how this is all connected. And you can see it, from the first topic, OpenAI looking for that government support, all the way back to this lobbying and community [01:16:00] investment.
[01:16:00] Like, it's all interrelated. And, you know, a fun part for me as a storyteller, Mike, is that every week we do this podcast, I think about the macro, like I think about the overall picture and the story, and then the topics become parts of that story, and the connection between them is something that's not obvious.
[01:16:18] But I think for people that listen enough and think about this the way we think about it each week, you just start to connect the dots and you see how this is all connected, and it's a fascinating picture to be able to look at. It's not always optimistic, and it doesn't always make you feel super excited about the near-term future, but it's nice to at least see the picture when so many other people are just, you know, oblivious to what's happening.
[01:16:43] Mike Kaput: Right. I couldn't agree more. That's my favorite part of this, just connecting the dots and getting to learn about so many different areas that AI touches. Yep. So to that end, Paul, thank you for connecting the dots for us this week. That's all we've got this week, just a couple quick final announcements.
If you have [01:17:00] not yet subscribed to the Marketing AI Institute newsletter, it is called This Week in AI, and we run down all the stories we talked about today, plus all the news we couldn't fit in the episode. So go to marketingaiinstitute.com/newsletter and you get a nice weekly brief of all the AI news you need to stay on top of these critical issues.
[01:17:20] I would also mention, if you can, please give us a review on your podcasting platform of choice. It helps us get better, improve the show, and get into the ears of more listeners. So if you have not taken a second to leave us a review, we'd greatly appreciate it. So Paul, appreciate you breaking down everything for us again this week.
[01:17:40] Paul Roetzer: Yeah, and I'll add two more quick ones, Mike. Again, the pulse survey. We'd love to have your feedback on the AI Pulse survey. And then I do a free monthly Scaling AI webinar, so we have a class coming up on Friday, November 14th. We'll drop the link in the show notes. That is, in partnership with Google Cloud, part of our AI literacy project.
I teach a [01:18:00] free Intro to AI class each month and a free Scaling AI class each month. November 14th at noon is the next one. It's a Zoom webinar, so it's easy to join. Again, we'll put the link in the show notes, but if you've got time on Friday and you want to go through five steps for scaling AI heading into 2026 planning, it's a great time to take that class.
[01:18:18] All right. Thanks, Mike. We will be back with you all next week. Thanks for listening to The Artificial Intelligence Show. Visit SmarterX.ai to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the Marketing AI Institute Slack community.
[01:18:47] Until next time, stay curious and explore AI.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.
