Sora 2 is here, and it's a mind-blowing, copyright-defying mess.
That kicks off this week's episode of The Artificial Intelligence Show. In it, Paul Roetzer and Mike Kaput break down everything going on in AI this week, including the release of Claude Sonnet 4.5, ChatGPT's new Instant Checkout feature, Elon Musk's Grokipedia, and much more.
Listen or watch below—and see below for show notes and the transcript.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
00:07:24 — Sora 2 and OpenAI’s AI Social Video App
- X Post from OpenAI on Sora 2
- X Post from Sam Altman on Sora 2
- Sora 2 - Sam Altman Blog
- Sora 2 System Card - OpenAI
- OpenAI’s new social video app will let you deepfake your friends - The Verge
- OpenAI’s New Sora Video Generator to Require Copyright Holders to Opt Out - The Wall Street Journal
- OpenAI Is Preparing to Launch a Social App for AI-Generated Videos - Wired
- Sora - Billing FAQ - OpenAI Help
- X Post from Venture Twins
- X Post from Sam Altman on AI Slop
- X Post from Ed Newton-Rex on Sora Copyright Concerns
- X Post from Christopher Fryant on Sora Video Game Reproduction
- Sora Update #1 from Sam Altman - Sam Altman Blog
- X Post from Pietro Schirano
- X Post from Sam Altman
- X Post from Paul Roetzer on Sora 2 Copyright Impact
- X Post from Paul Roetzer in reply to Bill Peebles
- OpenAI Sora Copyright Opt-Outs: Law Prof Says Opt-In Is Better - Christa Laser
- X Post from MrBeast
- X Post from Vinod Khosla
00:31:30 — Claude Sonnet 4.5
- Introducing Claude Sonnet 4.5 - Anthropic
- X Post from Anthropic on Claude 4.5
- X Jan Leike Post on Claude 4.5
- Anthropic launches Claude Sonnet 4.5, its best AI model for coding - TechCrunch
- Anthropic releases Claude Sonnet 4.5 in latest bid for AI agents and coding supremacy - The Verge
- X Post from Jack Lindsey on Claude 4.5 Interpretability
- X Post on Alleged Claude System Prompt Leak
- X Post from Descript on Integrating Claude 4.5
- Sonnet 4.5 & the AI Plateau Myth — Sholto Douglas (Anthropic)
- Software is Eating Labor - a16z Podcast
00:42:01 — ChatGPT Instant Checkout and AI Commerce
- ChatGPT Instant Checkout - ChatGPT
- X Post from Tobi Lutke on ChatGPT Instant Checkout
- X Post from Fidji Simo on ChatGPT Instant Checkout
- OpenAI Looks to Build In-House Ad Infrastructure - Adweek
- Meta will soon use your AI chats to personalize your feeds - The Verge
- Improving Your Recommendations on Our Apps With AI at Meta - Facebook About
00:47:18 — OpenAI H1 Results
00:49:40 — How OpenAI Uses AI
00:53:43 — In New Interview, Sam Altman Says the GPT-5 Haters Got It All Wrong
00:57:27 — Grokipedia
- X Post from Elon Musk on Grokipedia
- X Post from Elon Musk on Grokipedia’s Importance
- X Post from Benjamin De Kraker on Grokipedia Feasibility
01:02:27 — Tinker from Thinking Machines
01:04:30 — California Enacts AI Transparency Law
- SB 53, the landmark AI transparency bill, is now law in California - The Verge
01:07:45 — Mercor Launches AI Productivity Index
- APEX
- X Post from Mercor CEO on APEX
01:13:27 — AI Impact on Jobs Updates
- Lufthansa to cut 4,000 jobs as airline turns to AI to boost efficiency - CNBC
- Nearly 90% of BCG employees are using AI — and it's reshaping how they're evaluated - Business Insider
- Citi Is Requiring AI Prompt Training for Hundreds of Thousands of Employees - Entrepreneur
- Evaluating the Impact of AI on the Labor Market: Current State of Affairs - Budget Lab
- LinkedIn Post from Molly Kinder on Labor Research
01:16:56 — AI Product and Funding Updates
- AI Mode can now help you search and explore visually - Google Blog
- Welcome to the next era of Google Home - Google Blog
- X Post from William Fedus on Periodic Labs
- Apple Shelves Vision Headset Revamp to Prioritize Meta-Like AI Glasses - Bloomberg
Summary
OpenAI's Sora 2 Is Here
OpenAI just unleashed Sora 2, its most advanced video generation model yet, and dropped it into a new social app that looks and feels exactly like TikTok.
But this isn't just another video feed. Every single clip is AI-generated, and a new feature called “Cameo” lets you drop your likeness—and your friends'—into any scene with just a short recording.
The technology is stunning. The model understands physics in a hyperrealistic way, making it feel less like a special effects tool and more like a true world simulator. But in its rush to launch what some are calling the “ChatGPT moment for video,” OpenAI also kicked open a Pandora’s box of copyright infringement, deepfake concerns, and questions about the future of online content.
Anthropic Releases Claude Sonnet 4.5
Anthropic has released Claude Sonnet 4.5, which the company is billing as the best coding model in the world.
The new system can handle complex, multi-step tasks like building full applications, managing databases, even performing security audits. In one demo, it generated 11,000 lines of code to spin up a Slack-style chat app—and only stopped when the job was done.
Anthropic even says: “Practically speaking, we’ve observed it maintaining focus for more than 30 hours on complex, multi-step tasks.”
On benchmarks, Sonnet 4.5 is now state-of-the-art. It leads on SWE-Bench Verified, which tests real-world software engineering, and it’s three times better than its predecessors at navigating and using a computer. Enterprises like Canva say it’s already useful for deep engineering and research work.
Anthropic also unveiled the Claude Agent SDK, the same infrastructure it uses internally to build agents. That means developers can now design their own long-running AI systems with memory, context management, and multi-agent support baked in.
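The Agent SDK itself isn't shown here, so as a mental model only, the loop below is a toy sketch of the general pattern it targets: a long-running agent with memory and context management. Every name in it (`ToyAgent`, `fake_model`, the `memory` list) is our own illustrative assumption, not Anthropic's actual API.

```python
# Toy sketch of a long-running agent loop with memory and context
# management. Illustrative only; not the Claude Agent SDK's real API.

def fake_model(prompt: str) -> str:
    """Stand-in for a model call; a real agent would call an LLM API here."""
    return f"ack: {prompt.splitlines()[-1]}"

class ToyAgent:
    def __init__(self, max_context_turns: int = 3):
        self.memory: list[str] = []          # long-term memory: every turn persists
        self.max_context_turns = max_context_turns

    def step(self, task: str) -> str:
        # Context management: only the most recent turns go into the prompt,
        # so the prompt stays bounded even as memory grows over many hours.
        recent = self.memory[-self.max_context_turns:]
        prompt = "\n".join(recent + [task])
        reply = fake_model(prompt)
        self.memory.append(f"{task} -> {reply}")
        return reply

agent = ToyAgent()
for task in ["open ticket", "check logs", "patch bug", "run tests"]:
    out = agent.step(task)
print(len(agent.memory))  # 4 turns recorded
print(out)                # ack: run tests
```

The design point is the split between unbounded memory and a bounded prompt window, which is roughly what "memory and context management baked in" means for multi-hour tasks.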
The company stresses that 4.5 isn’t just more capable, it’s also its “most aligned” model yet, with fewer cases of deception or prompt exploitation.
Claude Sonnet 4.5 is available today at the same price as before.
OpenAI Releases ChatGPT Instant Checkout
OpenAI just turned ChatGPT into a shopping platform. The company has launched Instant Checkout, a feature that lets people buy products directly inside conversations. Says OpenAI:
“Every day, millions of people use ChatGPT to figure out what to buy. Now, with Instant Checkout, they can buy directly from you inside those conversations.”
Here’s how this works, according to OpenAI:
Say you describe what you’re looking for, like “a durable carry-on bag under $300.” ChatGPT will recommend the most relevant products across the web, like it normally does. Then, users can buy the product without leaving ChatGPT if Instant Checkout is enabled, paying instantly with a credit card, Apple Pay, Google Pay, or Stripe. Merchants remain the seller of record, keeping control of payments, fulfillment, and customer data.
It’s all powered by OpenAI’s new Agentic Commerce Protocol, an open standard built with Stripe. Merchants like Etsy are already live, with Shopify integrations coming next.
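OpenAI hasn't published the details walked through here, so purely as a mental model, here is a minimal sketch of the merchant side of that flow: the merchant stays seller of record, computes the order totals itself, and only then hands off payment to a processor like Stripe. All names, fields, and the flat tax rate are illustrative assumptions, not the actual Agentic Commerce Protocol.

```python
# Hypothetical merchant-side checkout handler. Field names, the flow, and
# the flat 8% tax are illustrative assumptions, not the real ACP spec.
from dataclasses import dataclass

@dataclass
class LineItem:
    sku: str
    quantity: int
    unit_price_cents: int

def build_checkout_session(items, payment_method):
    """Compute totals the way a merchant 'seller of record' might,
    before handing the payment token to a processor."""
    subtotal = sum(i.quantity * i.unit_price_cents for i in items)
    tax = round(subtotal * 0.08)  # flat 8% tax, illustrative only
    return {
        "status": "ready_for_payment",
        "payment_method": payment_method,
        "subtotal_cents": subtotal,
        "tax_cents": tax,
        "total_cents": subtotal + tax,
    }

session = build_checkout_session(
    [LineItem("carryon-bag", 1, 24900)], payment_method="apple_pay"
)
print(session["total_cents"])  # 26892
```

The point of the sketch is the division of labor the article describes: the agent surface (ChatGPT) collects intent and payment method, while pricing, fulfillment, and customer data stay with the merchant.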
At the same time, it appears that OpenAI is gearing up to turn ChatGPT into an ad platform. Adweek reports that a new job listing shows the company is hiring someone to build tools that let advertisers “create and manage ads” inside ChatGPT. The hire will be responsible for experimenting with “native ad formats,” suggesting a future where ChatGPT might serve suggestions the same way search engines show sponsored results.
This episode is brought to you by AI Academy by SmarterX.
AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. You can get $100 off either an individual purchase or a membership by using code POD100 when you go to academy.smarterx.ai.
This week’s episode is also brought to you by MAICON.
This is our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types.
For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: Is this actually Sora? Here's your AI slop feed with all these Nintendo characters and Pokemon and South Park and SpongeBob SquarePants, everything. It was just like all this copyrighted stuff. Immediately. That was all that. And Sam Altman was all you see in the feed. And immediately, I was like, oh, I will never use this.
[00:00:19] This is not interesting to me at all. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.
[00:00:52] Join us as we accelerate AI literacy for all.
[00:00:59] Welcome [00:01:00] to episode 172 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording Monday, October 6th at 11:00 AM, which is very relevant because today is DevDay for OpenAI. So there will be news from October 6th that we will not be covering in this episode, but we will be covering it next week on, I guess that would be what, the 14th-ish?
[00:01:25] Yeah.
[00:01:26] Mike Kaput: Yes.
[00:01:26] Paul Roetzer: Yeah, around the 14th, which is the first day of MAICON, which is brought to us, MAICON is bringing us today's episode. So we have, we have a lot to cover. We did have some new models last week. We had Sora 2 from OpenAI. We got Claude Sonnet 4.5. There, I think, is gonna be a bunch of new stuff announced today by OpenAI at DevDay, including, there's a lot of buzz around an agent builder, a no-code agent builder that'll allow people to build their own agents easily through, like, a ChatGPT-type interface.
[00:01:59] That would be [00:02:00] really interesting. And there's some other stuff that's being rumored as well. And then we're hearing more and more buzz that Gemini 3 is imminent from Google DeepMind. So we'd said October was gonna be crazy. It is off to a very busy start, and I think a lot more coming. So we're gonna get into it in a second.
[00:02:19] This episode, again, is brought to us by MAICON. I'll start there. AI Academy as well. We'll start with MAICON since I already mentioned it. This is, we are a week away. Which man, Mike, I don't know about you, but I think you're further along than I am in your prep, but, I'm getting there. I am, I'm like locked in on the workshop.
[00:02:39] I'm really excited. I'm doing AI innovations workshops. The first time I'm doing this. I built a new innovations GPT for it. I have not released it yet, so don't go searching for it on our website. Um. I'm really excited about that workshop. And then I'm doing the Move 37 Moment opening keynote, and I'm equally as excited about that.
[00:02:56] I still have to finalize that presentation, but this is your [00:03:00] last chance. If you want to be with us in Cleveland, October 14th to the 16th, we'd love to have you, and you can join 1,500-plus other AI-forward professionals and leaders who are gonna come together and, I don't know, hopefully, like, learn a ton, but inspire each other and make connections, build partnerships, launch companies together.
[00:03:18] Like it's, it's such an amazing three days. And, you know, I think I'm, I'm finally starting to, like, mentally get in the place, Mike, where I'm just, I'm just excited now. Yeah. Like, yeah, to get there and do it. It's, it's my favorite three days of the year professionally. So yeah, we'd love to see you in Cleveland. It's MAICON.ai, M-A-I-C-O-N dot ai.
[00:03:38] You can use the POD100 code for that last-minute hundred dollars off your ticket. So again, we'd love to see you. Dozens of incredible speakers, incredible sessions, over three days in Cleveland, which, it looks like we're gonna have just beautiful fall weather. I mean, the leaves are changing. There's no better time of year to be in Cleveland than October.
[00:03:56] So we'd love to have you join us. And then also AI Academy by [00:04:00] SmarterX. We've been talking a lot about AI Academy. You can learn more at academy.smarterx.ai. I'm gonna turn it over to Mike for a second to give you a preview. We've been doing kind of these previews of some of the course series and certification programs.
[00:04:13] So we just launched one recently on AI for professional services. This is part of our AI for Industries collection, and Mike created that. So I'll, I'll let him give kind of a quick background on, on what that course series is like.
[00:04:25] Mike Kaput: Yeah, Paul, this is one I'm especially excited about, just given our background in the agency world.
[00:04:31] So professional services, AI for Professional Services, doesn't just cover marketing agencies, but any type of firm that is billing for any type of human expertise in the form of services. So you think things like accounting firms, lawyers, consultants, et cetera. So we cover a few representative samples, but the idea here is really an evergreen course that uses frameworks to help you as a professional services professional or leader really accelerate your career and company with AI.
[00:04:59] So [00:05:00] we go through a step-by-step process to actually understand, at a high level, what disruption is happening right now in the industry at large due to AI. What are some of those kind of structural factors that everyone needs to be aware of and adapting to? And then really getting in the weeds on, for your particular job, the type of work you do, no matter what type of pro services firm it is, how do you actually reinvent that work, do it more efficiently, do it with more of an innovation focus, and make it more performance-driven using AI.
[00:05:31] So we teach you, A to Z, exactly how to do that. You come away from the course not only with a professional certificate, but hopefully with a roadmap for the exact types of use cases and tools you should be adopting in your own professional services work, or in your firm or your team at large.
[00:05:48] Paul Roetzer: That's great. And then just for people, you know, how the roadmap works in terms of what we're building. starting, I think later this year we'll start releasing AI for Businesses series where we'll actually then take [00:06:00] some of those like law firms or marketing agencies and drill, you know, more, I guess deeply into those specific businesses.
[00:06:07] And so the whole concept of what we're doing with Academy is really to build this learning journey where you can kind of start at a macro level in fundamentals, piloting, scaling, and then go by industry, by department, by business type, by career path. And so over time, we, you know, we're working very aggressively to create as much of this content as we can, as quickly as possible, but the idea is to really give you a journey that you can follow to sort of pursue mastery in, in the area of interest for you in AI.
[00:06:32] So it's exciting to see all this stuff coming out. It's, the pace is incredible. I mean, Mike's been putting in a ton of work on this as well as everybody else on our team, and we'll be launching the new learning management system very soon here. We'll have more news on that. So, yeah, and, and thanks for all of our listeners who are part of AI Academy already.
[00:06:48] We, we appreciate it and hopefully you're really enjoying it and getting a ton of value out of it. All right, so I feel like this first main topic we could honestly just spend the entire episode on. There's so many [00:07:00] layers to Sora 2 and the new Sora app. We're gonna do our best to cover it concisely. Hit a few key points here, but I feel like this is a, a topic we're gonna probably be coming back to, 'cause not just the model itself and the app, but like the larger implications from technology standpoint, from a legal standpoint, things like that.
[00:07:19] So. Let's kick it off, Mike, with Sora 2 and their new viral app.
[00:07:24] Sora 2 and OpenAI’s AI Social Video App
[00:07:24] Mike Kaput: Yeah, Paul, this one is a doozy. So OpenAI has unveiled Sora 2, its most advanced video generation model yet, and it is rolling out this new social app to showcase it. So OpenAI is pitching this as almost a ChatGPT moment for video, because Sora 2 models physics very realistically.
[00:07:45] So it's not just making videos look better, it's actually understanding how physics works in these video clips. So, for instance, a basketball shot could miss and bounce off the rim in a hyper-realistic way. A paddleboard backflip plays out in [00:08:00] video with buoyancy intact through all these elements. This fidelity kind of makes it less like, you know, AI-generated special effects and more like a true world simulator when you are generating a video.
[00:08:12] And one item here that's really turning heads is the new Sora iPhone app looks a lot like TikTok. It has a vertical video feed you scroll through, except every clip is AI-generated. And the standout feature here that you can use with Sora, that starts to kind of really make people pay attention, is called Cameo, where you record a short clip of yourself, and friends, with your permission, can drop your likeness into any AI-generated scene.
[00:08:43] You are considered a co-owner of the result and can revoke it at any time. So this is part of Open AI's attempt to manage some very real issues around consent and deep fake abuse. And right now the app is invite only. I've had people already ask me if I have ways to get [00:09:00] invites. I don't, 'cause I don't have one myself, but it is only in the US and Canada right now.
[00:09:05] But there is expansion planned. Now, Paul, there's a lot to unpack here. There's a stunning new video model, which alone is really impressive. There's this AI-generated video feed where the potential for deepfakes seems out of control, and we'll talk about this a bit. There are some breathtaking copyright concerns as well.
[00:09:27] Paul Roetzer: Yeah, so I struggled a little bit with how to unpack this one. Honestly. This is like a, like I said, it's such a broad topic and I was watching very closely last week looking at all the responses online. and so what I'm gonna do is walk through four components here. Mike. I'm gonna talk briefly about the tech.
[00:09:43] I'm gonna go through my personal experience, because somehow I do have access to it. Oh, nice. I don't know how, but I do, and actually have four invites, which, now that I said that, they'll probably be gone before this airs. So please don't flood me with requests for my four invites. The legal [00:10:00] perspective is incredibly important, as you alluded to, Mike, and then,
[00:10:04] what happens next? So, Mike, I'm gonna kind of walk through some thoughts here, interrupt at any time. Jump in if there's anything you want to add. So, on the tech side, Mike covered a little bit of it. There is a system card that goes with it, that sounds, like, super technical. It's basically just more of a deep dive into the technology.
[00:10:23] So we'll put the link in the show notes. In that system card post, it says: Sora 2 is a new state-of-the-art video and audio generation model that builds on the foundation of Sora. The new model introduces capabilities that have been difficult for prior video models to achieve, such as more accurate physics, sharper realism, synchronized audio, enhanced steerability, and expanded stylistic range.
[00:10:45] Now, those all sound very similar to Veo 3 from Google. So again, they're not the first ones doing this. I feel like they've raced to get this out in many ways in response to how popular Veo has become, how viral it became for Google. [00:11:00] So just something to keep in mind; they're not the only ones doing this.
[00:11:04] The model follows user direction with high fidelity, enabling the creation of videos that are both imaginative and grounded in real-world dynamics. My understanding, Mike, is it's about 12 to 15 seconds of video clip that you can create. Yeah, my personal experience, I only created a few that were permitted. I tried to do some with copyrighted characters. We'll talk about that in a minute.
[00:11:21] Sora 2, back to the system card: Sora 2 expands the toolkit for storytelling and creative expression, while also serving as a step toward models that can more accurately simulate the complexity of the physical world. That is a recurring theme.
[00:11:37] You're gonna hear this idea; Mike hit on it up front. This is all the basis for things that are much bigger than this. Just always keep that in mind. This is not the end game that we're seeing here. So Sora 2 is available on sora.com. I just went and checked. You can go and, like, play around with it. You just have to hit Get Started. Now, I don't know, I assume you have to have an invite for the [00:12:00] Get Started button to work, maybe. When I downloaded the Sora app, it, it just worked. I don't, again, I don't know why. And then in the future they'll make it available through their API. So, Sora 2's advanced capabilities require consideration of new potential risks, including non-consensual use of likeness or misleading generations.
[00:12:16] Our iterative deployment includes rolling out initial access to Sora 2 via limited invitations, restricting the use of image uploads that feature a photorealistic person and all video uploads, and placing stringent safeguards and moderation thresholds on content involving minors. So that's some important context as we kind of move into these other areas that I wanted to touch on.
[00:12:40] The Cameo thing. Sam Altman is everywhere. Like, if you didn't follow last week, and like if you're not on Twitter at all and you just didn't see this stuff going on, Sam put himself in there so people could make cameos of him doing whatever, and people took full advantage of that, I would say.
[00:12:57] So, some of 'em are really funny, some are pretty [00:13:00] crazy. But again, like we say with all things AI-related, be careful what you upload, like what permissions you're giving once you upload these things. So Sam became, like, the viral meme last week, just doing everything you can imagine. All right, so my personal experience. So this thing comes out on Tuesday, September 30th.
[00:13:18] I finally go in Thursday evening, I think, I went and tried it, and I just, like, went and downloaded the app, and I just instantly had access. So I didn't know at the time; it was like, 'cause I have a Pro account, like, I don't know why. And then, at the top left, it has four invites. So, as you alluded to, Mike, immediately I was like, wait, did I open Instagram Reels?
[00:13:37] Like, is this, is this actually Sora? It looks exactly like Reels and TikTok. Like, it's the same format, same scrolling mechanism. It's just all AI-generated. And then there were no disclaimers, that I recall, about creation of anything that infringes on copyrights. Like, there was nothing up front. It was just, here's [00:14:00] your AI slop feed with all these Nintendo characters and Pokemon and South Park and SpongeBob SquarePants.
[00:14:08] It was Star Wars, like, everything. It was just, like, all this copyrighted stuff, immediately. That was all that. And Sam Altman was all you see in the feed. And I was immediately like, oh, I will never use this. Like, this is not interesting to me at all as a user. Some people may be really excited about that stuff.
[00:14:25] And so then I thought, well, let me see what it's like to create something. I didn't know. Would it immediately go live? Like, am I gonna create something? And it automatically publishes. I didn't know. And so, I clicked the button and I realized like, I suck at giving video gen prompts. Like I can't think of creative things.
[00:14:43] So I actually went into ChatGPT, a separate chat, and I just asked for help. And I said, testing Sora 2, write some prompts I can use to test its full capabilities. So it came back with things like: a closeup of a glass of red wine being poured in slow motion, droplets [00:15:00] splashing on a wooden table, cinematic lighting.
[00:15:01] I could never write that. And: an underwater library with glowing jellyfish drifting past shelves of ancient books. So I'm like, okay, these are clever. Like, that would create something fun, I guess. But then I was, I think at the time I was watching the Guardians playoff game, which I don't really wanna talk about 'cause I'm still sad about it.
[00:15:18] But, so I was like, all right, gimme some related to baseball. So it was like popping up with some stuff related to baseball. And then I knew that it was generating all of these copyrighted characters. Like I knew it was able to do this. So I said, make them more creative and fun. Incorporate well-known characters.
[00:15:35] So then it starts writing ones like: Batman stepping up to the plate in full armor, the Joker pitching with a wicked grin, Gotham skyline glowing in the background. So I was like, all right, well, let's give that a try. That sounds kind of cool. So I hit the create button, or whatever the button says, and it immediately pops up and it says, this content may violate our guardrails concerning similarity to third-party content.
[00:15:56] So I was like, oh, okay. Like, how is everybody else creating all these characters, [00:16:00] or I can't do it? So I tried one more. So I tried: Harry Potter using his broomstick to chase down a fly ball in a magical baseball game at Hogwarts, Quidditch hoops in the background. Boom. This content may violate our... I was like, oh, they put some guardrails in place.
So this is 48 hours later. The feed is still filled with all of these characters, but I'm now not able to generate them. So they've obviously kind of made some changes. So this then leads me into the legal aspect of all of this. It is blatantly obvious that this thing is trained on an immense amount of copyrighted content, including shows, movies, and video games.
Um. And so you immediately got, by, like, again, Tuesday night, Wednesday morning, people had access. They started immediately tweeting things related to AI slop and the legality of this stuff. So there was one, Pietro Schirano, CEO of Magic Path. I don't, I don't know Pietro. This tweet just ended up in my feed and I, this one made me laugh.
He said: man, imagine being Mark Zuckerberg, [00:17:00] spending billions to build a slop machine, slop, S-L-O-P, not slot, only for another slop machine to out-slop you days later. I thought that was pretty funny. He's referring to Vibes from Meta. So then on Wednesday, October 1st, a day after it launches, we already have, or two days after, we have commentary from Sam Altman.
[00:17:21] So someone had tweeted, and this is, like, read verbatim: Sam Altman two weeks ago: we need $7 trillion and 10 gigawatts to cure cancer. Sam Altman today: we are launching AI slop videos marketed as personalized ads. So Sam actually retweeted that and said: I get the vibe here, but we do mostly need the capital for building AI that can do science, and for sure we are focused on AGI with almost all of our research effort.
[00:17:50] It's also nice to show people cool new tech products along the way, make them smile, and hopefully make some money given all the compute we need. When we launched [00:18:00] ChatGPT, there was a lot of, quote, who needs this and where is AGI, unquote. Reality is nuanced when it comes to optimal trajectories for a company.
[00:18:08] So it's like, oh, okay. So Sam's basically admitting, yeah, we're just kind of filling this, you know, world with some crap, but it's gonna make us some money and it's kind of fun along the way. Doesn't address the fact that they're all compute-constrained and they can't do all the breakthroughs they wanna do because they don't have enough compute.
[00:18:24] And now they're gonna just pour compute into all the inference-time compute that's gonna go into, like, generating these. So my first tweet, I think, about this was, I had shared a, there was a Star Wars movie clip that was featuring Super Mario characters, blatantly an issue. I said: does, does copyright law in the US just crumble, or do the major brands fight back in a meaningful way?
[00:18:46] It will be fascinating to watch this unfold. One thing is clear: the leading AI labs are fully into their don't-give-an-F phase when it comes to copyright and IP law. Then on October 3rd, Ed Newton-Rex, who we've mentioned numerous [00:19:00] times, who worked at Stability AI on AI models, he was replying to a video of Michael Jackson.
[00:19:07] Looked completely real. You put this thing on Facebook, you know, your parents and grandparents think Michael Jackson's alive again. Like, it looked exactly like him, sounded like him. So he said: even if OpenAI now tightens Sora's guardrails, the damage has been done. They will have used people's copyrighted intellectual property and likeness to go viral,
[00:19:25] getting them to number one in the App Store, which will let them make a ton of money. This is why after-the-fact opt-out is so parasitic. So now, a few days later, we're at Friday, October 3rd. Sam has now dealt with all the blowback; they, I assume, have heard from Disney and, and all the other rights holders that, you know, control these characters.
[00:19:46] And so he publishes a blog post called Sora Update #1. In this post he says: we have been learning quickly from how people are using Sora and taking feedback from users, rights holders, and other [00:20:00] interested groups. We of course spent a lot of time discussing this before launch. I think that's a lie.
[00:20:05] But now that we have a product out, we can do more than just theorize. Keep in mind, there is nothing that we've already talked about here, Mike, or that has come out since, that couldn't have been not just theorized but known was going to happen, right? You don't train a model on all this copyrighted stuff, allow people to output it, and not know that you're gonna get massive blowback.
[00:20:30] You absolutely know that. So it's disingenuous to even, like, I don't know, that paragraph really bothered me. But anyway: we are going to make two changes soon and many more to come. First, we will give rights holders more granular control over generation of characters, similar to the opt-in model for likeness, but with additional controls.
[00:20:49] So we're gonna like allow Disney to say if they want their characters created, is basically what he's saying. We are hearing from lots of rights holders who are very excited for this new kind of interactive fan [00:21:00] fiction and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used, including not at all.
[00:21:09] We assume different people will try different approaches and we'll figure out what works for them, but we wanna apply the same standard toward everyone and let rights holders decide how to proceed. Second, we are going to have to somehow make money for video generation. People are generating much more than we expected.
[00:21:24] No way. It's invite-only; you controlled this thing. There's no way OpenAI isn't smart enough to project usage. So again, this whole article just bothers me because it's unnecessarily gaslighting, I think. Like, this wasn't needed. You knew all this was gonna happen. So: we are trying to come up with some revenue-sharing thing for rights holders so they can make money along the way.
[00:21:46] Okay, so then, also Friday, another tweet. And again, some of this is just giving perspective. I don't know this guy, Reid Southen, but it was a good tweet. He said: if your copyright is contingent on opting out from anyone who randomly decides they're going to use [00:22:00] your work for profit, it becomes effectively worthless.
[00:22:03] There's a reason it doesn't work that way. No one else is allowed to do this. Why are AI companies the exception? Then Bill Peebles, who's the head of Sora at OpenAI, tweets Friday night, or this was, this was Sunday. Good to see quick changes. Oh no, this was me. I was replying to a tweet from Bill Peebles sharing some of the changes they were making.
[00:22:23] I said, good to see quick changes. Still shocking that OpenAI and others are releasing these apps and models with so little testing and so few safety and control measures for such obvious issues. Can't they just prompt GPT-5 to identify and fix this stuff before releasing it? So that was like, I, again, I know the answer to this.
[00:22:43] They probably did, and they chose to just do it anyway. So then Saturday night I was like, I wanna talk about the legality side of this, but Mike and I aren't lawyers. Like, we can theorize about this stuff, but I was like, I have friends who are IP lawyers, so I'm gonna email one of them. So I emailed my friend Christa [00:23:00] Laser, who's an associate professor of law at Cleveland State University College of Law and owner of Learn Innovation Law.
[00:23:06] She's an IP attorney, shorthand. So I emailed her four questions. The first one I said, is their approach legal? And she actually put a YouTube response up, so we'll put a link to that in the show notes and you can listen to her whole thing, it's like five and a half minutes. I'm just gonna give you real quick highlights here.
[00:23:22] So I said, is their approach legal? She said, we've had courts come out mixed on that, meaning the training on this data. And it's possible that some amount of training on lawfully obtained copyrighted works will be considered fair use, but there's no indication that OpenAI paid for access to these copyrighted works to engage in their training.
[00:23:39] Another question I had for her, and this is important for all you listeners, is: are individual users who choose to output copyrighted material using the model or app at legal risk? The short answer to what she said is yes, unless OpenAI has licensing deals with the rights holders, like Disney, that they [00:24:00] sublicense to users.
[00:24:01] So in their agreement with Disney, they're allowed to transfer a sublicense to you the user.
[00:24:06] Mike Kaput: Mm.
[00:24:06] Paul Roetzer: Then there's potential legal jeopardy for Sora users who create videos without permission. So you create a funny thing with Mario characters in a Star Wars movie, and maybe you get a letter saying, take it down, and, you know, here's, here's the legal ramifications of your actions.
[00:24:22] And then in terms of what happens next, she said it's pretty clear that OpenAI has shifted towards more of this opt-in model because it's legally a lot safer. So they've already made the change to, hey, it's gonna be opt-in. It's much safer, obviously, to negotiate upfront with rights holders and to make sure that, especially if a rights holder, like Disney for example, has said no, that you are not training on or especially outputting things
[00:24:43] based on that. So again, we'll put the whole link there to her video. So, final thoughts here, Mike, where this goes. What they're doing, just in case it's not blatantly clear to everybody, this is what they wanted. Like, they wanted to get a viral hit. They wanted it to [00:25:00] get to number one in the App Store, which it did.
[00:25:02] So they got what they wanted. They claim it's part of their iterative deployment strategy, which it is, in some way, which means, hey, we're just gonna put tech out in the world and see how people use it. Again, there is nothing that has happened in the six days since this came out that they couldn't have predicted, and probably did predict. The real reason they did this is for competition.
[00:25:22] Google got one up on 'em with Veo 3. There's other stuff coming, there's other models coming, and they had to just get out ahead of it and get it out there, and then create enough demand for the stuff that it, like, proves the model. And now people have to come to the table and negotiate with them, basically.
[00:25:38] The other thing is DevDay is October 6th, today, like later in the day than we're recording this, and there's other stuff coming. My guess is they just wanted to get it out there. On the tech side, they said at the end of their Sora 2 announcement post: video models are getting very good, very quickly.
[00:25:54] General purpose world simulators and robotic agents will fundamentally reshape society and [00:26:00] accelerate the arc of human progress. Sora 2 represents significant progress towards that goal. In keeping with OpenAI's mission, it is important that humanity benefits from these models as they are developed.
[00:26:12] We think Sora is going to bring a lot of joy, creativity, and connection to the world. So that is their macro level. And then one other note here on society and the creator economy. So MrBeast tweeted on Sunday, October 5th. If you don't know who MrBeast is, he is extremely popular with kids. My, my kids know him.
[00:26:31] On YouTube, he has 443 million subscribers. I think he's the highest subscribed person in the world. I think so, yeah. Yeah. So he tweeted: when AI videos are just as good as normal videos, I wonder what that will do to YouTube and how it will impact the millions of creators currently making content for a living. Scary times.
[00:26:51] One other perspective that I found, I don't, I wanna, like, I wanna be as politically correct here as I can. [00:27:00] So Vinod Khosla, who's the co-founder of Sun Microsystems and the founder of Khosla Ventures, tweets, I think this is Sunday night: all the replies to OpenAI's announcement of Sora on X that are criticisms, quote unquote, "it's pure slop," are from tunnel vision creatives.
[00:27:20] Let the viewers of this slop judge it, not, I can't believe this, not ivory tower Luddite, snooty critics or defensive creatives. It opens up so many more avenues of creativity if you have imagination. This is the same initial reaction as to digital music in the nineties and digital photography in the two thousands.
[00:27:42] There will be a role for traditional video still, but many more dimensions of creative video through AI. Okay. My only thought, I'm gonna, again, I'm gonna try and say this as nice as I can. If you wanna turn the entire creative industry against AI labs and the VCs [00:28:00] who are funding those labs, this is exactly the tone you take.
[00:28:04] I don't know why else you would say something like this. What was it? Luddite, snooty critics or defensive creatives. So for me, as someone who sees enormous potential for AI in the world to do good, but also has tremendous respect and love for human creativity and creators, I really hope this sort of combative messaging stops. Like, there is no good that comes from this other than making you get that dopamine hit when you get to put out a tweet calling people, what is it, snooty critics and tunnel vision creatives.
[00:28:31] That is not gonna go well in society. That's not gonna play well. There's a whole bunch of people who make their livings, who aren't Disney, who can't just sue OpenAI for doing this, who have no voice whatsoever, no control whatsoever, even if they think they've opted out.
[00:28:50] Like, the vast majority of the people aren't the big brands that have a team of attorneys. It's people who build stuff for a living, who write, who take pictures, [00:29:00] who create videos. And to just discard them in the economy makes no sense to me. So I don't know. But again, big picture, this is only the beginning.
[00:29:10] This is a very crowded space. This is not just OpenAI. Meta's doing it, Google's doing it, Runway's doing it. Like, everybody's in this audio-video generation space. There's many more innovations to come, and maybe even more legal challenges than that.
[00:29:24] Mike Kaput: Yeah, I'm personally excited to try it out, but two things, I guess, jumped out at me when I was looking at this. One:
[00:29:34] It's game over for deepfakes. Like, I've thought that for a while, and Veo 3 is comparably amazing, so I realize Sora 2 is not the only one to do this. But I think really seeing Sam just go hog wild embracing it, I was like, okay, this is game over for this, no matter if it's OpenAI that allows it or others.
[00:29:51] And then I also have to feel like it's kind of game over for attention span, because this AI video slop feed is, horrible as it is, taking off. And like, keep in [00:30:00] mind, whether I'm embarrassed of it or not, I enjoy short form video on YouTube and TikTok as much as anybody. But you have to also accept they are nuking our attention spans already.
[00:30:11] Now you can say, hey, maybe it's just a new medium. Maybe it's just creativity expressed in a different way, right? Very possible. But you have to accept that short form video has ramifications for people's attention, whether you agree with that or not. And I think it's about to go into overdrive.
[00:30:29] And someone mentioned in one of the tweets or articles here, they were like, just wait until you have reinforcement learning, and they used the term slop-optimized feed, which I really liked. I was like, oh God, here we go. But those were two kind of worrying things that jumped out at me.
[00:30:45] Paul Roetzer: Unfortunately, I have no idea why I would ever go back into that app. Yeah. Other than just to test it, like, from an entertainment value or an educational value, it's zero to me. Like, I just don't find it entertaining at all. But I mean, [00:31:00] people were like, they had to shut off the South Park stuff because people were creating entire episodes of South Park.
[00:31:04] South Park, like 25, 30 minute videos, just stitching stuff together. Yeah, it's gonna be insanely disruptive, but I'm interested in the bigger picture of what this is a step toward. I am not interested in a scrolling feed of AI-generated crap, which is basically what this is, even if that makes me a snooty critic and defensive creative.
[00:31:30] Claude Sonnet 4.5
[00:31:30] Mike Kaput: Alright, so next up this week, Anthropic has released Claude Sonnet 4.5, which is billed by Anthropic as the best coding model in the world. The new model can handle complex multi-step tasks like building full applications, managing databases, and performing security audits. In one demo, it generated 11,000 lines of code to spin up a Slack-style chat app, and stopped only when the job was done.
[00:31:57] Anthropic even said, quote, practically speaking, we've [00:32:00] observed it maintaining focus for more than 30 hours on complex multi-step tasks. On benchmarks, Sonnet 4.5 is state-of-the-art. It leads on SWE-bench Verified, which tests real world software engineering. It is three times better than its predecessors at navigating and using a computer.
[00:32:19] Enterprises like Canva say it's already useful for deep engineering and research work. Now, at the same time, Anthropic also unveiled the Claude Agent SDK. This is the same infrastructure it uses internally to build agents. So this means developers can now design their own long-running AI systems with memory, context management, and multi-agent support baked in.
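The episode doesn't walk through the Agent SDK's actual API, but the "long-running agent with memory and tools" idea it describes reduces to a small loop. The sketch below is purely illustrative: every class, method, and the stub model are invented for the example and are not the real Claude Agent SDK interface.

```python
# Hypothetical sketch of a long-running agent loop with memory and tools.
# All names here are invented for illustration; this is NOT the real
# Claude Agent SDK API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    model: Callable[[list], str]            # stand-in for an LLM call
    tools: dict                             # tool name -> tool function
    memory: list = field(default_factory=list)  # running context window

    def step(self, task: str) -> str:
        self.memory.append(f"task: {task}")
        action = self.model(self.memory)    # model picks the next action
        if action.startswith("tool:"):      # e.g. "tool:echo:hello"
            _, name, arg = action.split(":", 2)
            result = self.tools[name](arg)
            self.memory.append(f"observation: {result}")
            return self.step(task)          # loop until a final answer
        self.memory.append(f"answer: {action}")
        return action

# Stub "model": call a tool once, then answer with the last observation.
def stub_model(memory: list) -> str:
    observations = [m for m in memory if m.startswith("observation: ")]
    if not observations:
        return "tool:echo:hello"
    return observations[-1].removeprefix("observation: ")

agent = Agent(model=stub_model, tools={"echo": lambda s: s.upper()})
print(agent.step("demo"))  # HELLO
```

The point of the sketch is just the shape: the model decides, tools act, observations go back into memory, and the loop runs until the model emits a final answer, which is what "long-running with memory and multi-agent support" builds on.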
[00:32:43] And Anthropic also stresses that this is their most aligned model yet, with fewer cases of deception or prompt exploitation. Now, Claude Sonnet 4.5 is available today, and it's available at the same price as before. Paul, I find it interesting. Anthropic [00:33:00] seems to be really leaning into becoming the AI for coding or AI for building agents with this release.
[00:33:07] Seems like they're trying to own that corner a bit here.
[00:33:11] Paul Roetzer: Yeah, I was actually listening to a podcast yesterday from Sholto Douglas. So the podcast, again, link in the show notes, but it's The MAD Podcast with Matt Turck. So Sholto is someone we've talked about before. He's been on a few podcasts in the last few months that have been great.
[00:33:28] He's extremely intelligent, very well spoken, makes things very approachable. Like, I really like listening to his stuff. So he's an Anthropic AI researcher, worked on this model, and he is former Google DeepMind. He started there I think right before ChatGPT, if I remember correctly. So the whole episode was about Sonnet 4.5, and I was just gonna highlight a few of the things he touched on that maybe build on some of the other links that we'll put in the show notes.
[00:33:53] So the way that Anthropic builds is: Haiku is their smallest model, Sonnet is the [00:34:00] mid-tier, and then Opus is the biggest model. What happened here is that Sonnet, this mid-tier model, is now outperforming their biggest model, Opus. And what he said is that they've found that these mid-tier models can be made smarter largely through reinforcement learning, which is kind of like, after the initial training run happens, you go through and sort of fine-tune this thing on certain capabilities, expert knowledge in different domains, things like that.
[00:34:27] They're finding that basically you do a massive training run, so say we build Opus. Within three to six months, they can usually do a much more affordable, efficient model like Sonnet and make it smarter than the big run they just did. And so he was pretty much saying, like, this is what's gonna happen every three to six months.
[00:34:47] Like, we'll come out with, like, say Gemini 3 is gonna come out probably this month from Google. There's a decent chance that they're gonna have a more efficient model three months later that already surpasses their dominant [00:35:00] largest model. So it's just sort of a byproduct of what's happening right now. We're in this every three-to-six-month phase. And the way he tells it, whatever these people are saying about us hitting walls in training and stuff, he's like, we're not seeing it.
[00:35:14] Like, there's nothing we're seeing that tells us there's any wall whatsoever, that these things aren't gonna just keep getting smarter and more generally capable. He did address why they focus on coding. So we've talked about this quite a bit with Anthropic. Like, they've done a good job with their economic impact research, and the research on usage patterns within Anthropic, but we always sort of hesitate with it because the usage for Anthropic is so dominantly coding.
[00:35:42] They don't get a great perspective on overall knowledge work. But he said the reason they're focused on coding is twofold. One, they think the fastest path to build more powerful AI is to automate AI research. So they are very actively trying to automate AI researchers, which everybody's doing. Meta's [00:36:00] doing it, Google's doing it, OpenAI is doing it.
[00:36:01] But this is their main North Star at the moment: automate AI research, because then we can compound it. Now, the other reason they're doing it is because the software market is vast. So, estimated at about 300 billion, and I'll explain where that number comes from in a second. So they see it as, well, if we can build coding agents that can build software, then we can go get a piece of that $300 billion annual market of software.
[00:36:29] Our tools can build it, or we can build it ourselves. So they look at coding and economic impact. So: build the research engine, then generate revenue by building software and enabling the building of software. So then, related, I was listening to a separate podcast episode this weekend, which was actually a presentation given by an a16z, Andreessen Horowitz, general partner, Alex Rampell, at their LP Summit.
[00:36:51] So the title of his presentation was Software Is Eating Labor. He, in that presentation, cited that the worldwide SaaS market is about [00:37:00] 300 billion per year. But then, to give this context back to the overall economic evals we've been talking about, Mike, the labor market in the US alone is 13 trillion.
[00:37:10] Mike Kaput: Mm.
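The scale gap being described here is easy to make concrete. This is just back-of-the-envelope arithmetic using the two round figures cited in the episode, nothing more precise than that:

```python
# Rough scale comparison using the figures cited in the episode:
# ~$300B/year worldwide SaaS market vs ~$13T/year US labor market.
saas_market = 300e9        # worldwide SaaS revenue per year (cited)
us_labor_market = 13e12    # US labor market per year (cited)

ratio = us_labor_market / saas_market
print(f"US labor market is ~{ratio:.0f}x the global SaaS market")

# Capturing even 1% of US labor spend would exceed a third of all
# worldwide SaaS revenue:
one_percent_of_labor = 0.01 * us_labor_market
print(f"1% of US labor spend = ${one_percent_of_labor / 1e9:.0f}B/year")
```

That roughly 43x gap is the "follow the money" point Paul and Mike return to later in the episode.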
[00:37:10] Paul Roetzer: So again, if you're thinking about the funding, like, the building of, why are they building these AI models to be able to do the things humans do? It's because VCs wanna make money and there's $13 trillion sitting there to replace human workers. They're gonna build stuff. The other couple of quick notes that I thought were interesting: this idea of 30 hours of continuous work.
[00:37:31] So this is a theme we've been hitting on a lot recently. We had the Agent 3 release from Replit, where they talked about 200 minutes of continuous runtime for their agents to do coding. Sholto talked about this idea of long-term coherency, meaning it stays good at doing the task it's doing for extended periods of time.
[00:37:49] He actually referenced the METR evals, Mike, that you and I have talked about, which is that AI models today have a 50% chance of successfully completing a task that would [00:38:00] take a human expert one hour, and that's doubling every seven months. So seven months prior to that it was 30 minutes, seven months prior to that it was 15 minutes.
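The METR trend described here is a simple exponential: if the 50%-success task horizon doubles every seven months, you can project it forward. This is a back-of-the-envelope sketch of that doubling, assuming a clean one-hour baseline, not METR's actual methodology:

```python
# Back-of-the-envelope projection of the METR-style task horizon:
# the length of task (in human-expert hours) that models complete with
# 50% success, assuming a clean doubling every 7 months from a 1-hour
# baseline today.
def task_horizon_hours(months_from_now: float,
                       baseline_hours: float = 1.0,
                       doubling_months: float = 7.0) -> float:
    return baseline_hours * 2 ** (months_from_now / doubling_months)

for months in (0, 7, 14, 21, 28):
    hours = task_horizon_hours(months)
    print(f"{months:>2} months out: ~{hours:.0f} hour task")
# 0 -> 1h, 7 -> 2h, 14 -> 4h, 21 -> 8h, 28 -> 16h
```

Run backwards (negative months), the same formula gives the 30-minute and 15-minute horizons Paul mentions at 7 and 14 months prior.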
[00:38:07] So what it means is we're seeing this continuous runtime with long-term coherency, specifically for coding. But then, you know, that's when you take it into other domains and say, well, can it do what a marketer would do for two hours, or 30 hours? And he was basically saying it's not a technical limitation.
[00:38:24] Like, they could probably do 60 hours, they would just keep working at the problem. It becomes an issue of, like, taste and context, that humans are still just better at saying, hey, you're actually wasting time. This is not a great direction. You should go this direction instead, or try this.
[00:38:39] He asked him about the difference between, like, Google and Anthropic, and he said that they very confidently believe that they're the best at coding. But he said scientific breakthroughs are gonna come from Google. He speaks very glowingly of, like, Google, Google DeepMind. And then the other thing that he touched on, that I've seen come up a lot lately, and maybe it's just 'cause I listen to these podcasts or read this [00:39:00] stuff, is the Bitter Lesson. And I don't know if we've ever talked about this on the show, but there's a computer scientist, Richard Sutton.
[00:39:07] He wrote an essay, I think it was in 2019, where he sort of coined this term, this bitter lesson. And the basic premise, and Sholto is a big believer in this, is that generalization and compute win over time. What I mean by that is, for a long time in AI development and computer programming, there was this belief that humans could code the best paths forward, that we humans are uniquely capable of, like, figuring out the plan and what to do, and we're super clever and we'll always find ways to make these models better.
[00:39:28] And a lot of times we gotta get involved and we gotta write more instructions for the AI. What the Bitter Lesson says is, no, actually the models are better than us. Like, over time they just figure things out better than a human could.
[00:39:46] So the lesson is that methods that leverage computation scale better than those that rely on human design knowledge or heuristics. In other words, when researchers try to handcraft domain knowledge, rules, or clever shortcuts, these [00:40:00] approaches often work well at a small scale, so in the near term, but they fail to generalize or improve as problems get larger.
[00:40:07] Conversely, approaches that rely on general learning algorithms plus more compute, so throw more NVIDIA chips at 'em, throw more training time at 'em, eventually, while they may start off kind of less elegant, they tend to win out in the long run. So the lesson is considered bitter because researchers naturally want their insights, expertise, and clever designs to matter.
[00:40:27] But history repeatedly shows that scaled, compute-driven approaches outperform human ingenuity and crafted, specialized solutions. So that's what he's saying here. Like, yeah, we do all these things, and it makes a difference in the near term. But at the end of the day, as of right now, we know that if we just keep building more data centers and giving them more NVIDIA chips and giving them more data, the things just get smarter.
[00:40:50] And it obsoletes all these human-written rules that we think matter right now. So, I don't know. It is fascinating stuff. Again, like, Sonnet 4.5 is probably a great [00:41:00] model, especially if you're into coding. I know some other people love using Claude just in general, but for the most part it's a coding model.
[00:41:06] that's the primary use, at least that they think of it as.
[00:41:09] Mike Kaput: Yeah. And just to reemphasize one thing you mentioned earlier, we've talked about this in a number of contexts, but when you say that the a16z general partner, Alex Rampell, says the worldwide SaaS market's about 300 billion a year, you said the labor market in the US alone is 13 trillion.
[00:41:25] Follow the money. Look at how much money the VCs are putting into every AI lab. I can guarantee you the labor market, not the SaaS market, is the ultimate target.
[00:41:35] Paul Roetzer: Yes, it is. It is pure economics and pure capitalism, and I don't think it's even a debatable thing. I just, if you zoom out and you just look at those numbers, there's no way people don't build to replace human labor.
[00:41:51] Like, it's how humans work. It's how capitalism works. So yes, it's a bitter lesson, I guess.
[00:41:58] Mike Kaput: Yeah. A different, yeah, another one. [00:42:00] Yeah. Alright.
[00:42:01] ChatGPT Instant Checkout and AI Commerce
[00:42:01] Mike Kaput: Our third main topic this week: OpenAI has turned ChatGPT into a shopping platform. They have launched a new feature called Instant Checkout, which lets people buy products directly inside conversations.
[00:42:15] OpenAI says, quote, every day, millions of people use ChatGPT to figure out what to buy. Now with Instant Checkout, they can buy directly from you inside those conversations. So here's how this works, according to OpenAI. Say you describe what you're looking for in the course of a chat, like, hey, I want a durable carry-on bag under 300 bucks.
[00:42:34] ChatGPT will recommend the most relevant products across the web, kind of like it normally does. But then, if you have Instant Checkout enabled, users can buy the product without leaving ChatGPT. You can pay instantly with a credit card, Apple Pay, Google Pay, or Stripe. Merchants remain the seller of record here.
[00:42:54] They keep control of the payments, fulfillment, and customer data. And this is [00:43:00] all powered by OpenAI's new Agentic Commerce Protocol. This is an open standard built with Stripe, and right now merchants like Etsy are already live. Shopify integrations are coming next. And so we'll see very soon here how many things you can suddenly start buying right within ChatGPT without leaving the platform.
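OpenAI and Stripe publish the actual Agentic Commerce Protocol spec, and the episode doesn't quote its schema, so the shape below is only a hypothetical illustration of the two ideas described here: the assistant composes a checkout request, and the merchant stays the seller of record. Every field name is invented for this example and is not the real protocol:

```python
# Hypothetical illustration of an agent-initiated checkout request.
# All field names are invented for this sketch; they are NOT the real
# Agentic Commerce Protocol schema (see the published OpenAI/Stripe spec).
import json

checkout_request = {
    "protocol": "agentic-commerce/illustrative",
    "buyer": {"payment_method": "tokenized-card-ref"},  # no raw card data
    "merchant": {
        "id": "merchant_example_123",    # invented example id
        "seller_of_record": True,        # merchant keeps payments/fulfillment
    },
    "line_items": [
        {"sku": "carry-on-bag-01", "qty": 1, "unit_price_cents": 27900}
    ],
    "currency": "USD",
}

# A merchant-side total check, the kind of validation the seller of
# record would do before accepting the order.
total = sum(item["qty"] * item["unit_price_cents"]
            for item in checkout_request["line_items"])
print(json.dumps({"total_cents": total}))
```

The structural point is the one Mike makes: the payment reference and the order travel through the chat platform, but the merchant object, not OpenAI, remains responsible for the sale.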
[00:43:20] Now, somewhat related to this, at the same time it appears that OpenAI is gearing up to turn ChatGPT into an ad platform. We talked about this a little bit in past episodes, but Adweek now reports that a new job listing at OpenAI shows that the company is hiring someone to build tools that let advertisers create and manage ads inside ChatGPT.
[00:43:42] So this hire will be responsible for experimenting with native ad formats, suggesting a future where ChatGPT may serve suggestions the same way search engines show sponsored results. So Paul, it feels like we're really seeing ChatGPT move very [00:44:00] quickly towards becoming a buying and maybe ad platform.
[00:44:03] Like is that what, where we're headed? Like what does that kind of start to mean for business?
[00:44:08] Paul Roetzer: It's certainly been all the indications over the last year plus, with the hires that they've made and, you know, the one you just highlighted that they're currently looking for. They've been putting these steps in place, and Sam has talked about becoming more of a platform and personalization being key.
[00:44:23] We talked about it with the new Pulse feature a couple weeks ago and how natural that's gonna be to inject, you know, purchasing decisions, 'cause you're talking about trips you're taking or, you know, fitness and health needs. And it's just, like, it's so natural to just inject ads in. And all the better if I can just click one button and make my purchase right from there.
[00:44:42] So, I mean, my general experience right now with ChatGPT has been that I often trust the links less than I would if it was served to me in Google. Mm. And so I've personally found myself going to Google to verify sites and vendors that I find through ChatGPT, specifically when I'm using, like, agent mode in ChatGPT to help conduct [00:45:00] research for purchasing decisions.
[00:45:02] I will often, like, go out and then verify that it's, like, legitimate companies and stuff like that. I think that this is an instance where there's just so many unknowns and we're not even asking probably all the best questions yet. You think about this initial, you know, human commerce, but what about when it's agent to agent, and, like, my agent is going and doing this research, and then maybe it's buying directly through ChatGPT, and how does that change things?
[00:45:27] But I think that as we look into 2026, we can start to project some of the impact it's gonna have, some of the changes in buying behavior. Definitely something that marketers, you know, brands, you really gotta start thinking about: how SEO is changing, how e-commerce is changing. There's some tremendous trends emerging that are going to dramatically affect the way you're doing business 12 months from now.
[00:45:54] Like, I mean, we can already start to see it. So I think that a key going into your 2026 planning is [00:46:00] make sure you're asking the right questions. Make sure you're not just building your strategies based on what you know to be true today. Because there's gonna be some incredible, like, innovation opportunities in the near future. But also, like, things could move fast, and you could find yourself, like, maybe you're in a really strong competitive position today in the traditional way of doing search and commerce, and maybe that changes really, really fast.
[00:46:26] And some upstarts come along and take that market share just because you weren't asking the right questions or, like, thinking more deeply about this. So yeah, I mean, again, I think these are just huge trends: commerce, personalization, and, you know, ChatGPT and others as platforms to do business on, not just, like, tools.
[00:46:45] It's, it's gonna change things pretty quickly.
[00:46:47] Mike Kaput: Yeah. And I think just to harp on why we promote AI literacy so much, the only way to know what questions to even start asking is to understand what's possible.
[00:46:57] Paul Roetzer: Yeah, and you gotta know where the tech is going too. And [00:47:00] again, we heard the Sholto thing: every three to six months, new models. Like, this is not something you can just sit back and not worry about for a quarter. Like, oh, figure it out in January. It's like, no, I wouldn't be waiting until January. I'd try and stay up on this stuff now.
[00:47:15] Mike Kaput: All right, let's dive into this week's rapid fire. First up,
[00:47:18] OpenAI H1 Results
[00:47:18] Mike Kaput: OpenAI is growing at breakneck speed and burning through cash just as fast.
[00:47:23] According to The Information, OpenAI brought in $4.3 billion in revenue in the first half of 2025. That's already 16% ahead of all of last year. And this surge reflects explosive demand for ChatGPT and other tools. But the costs are equally staggering. OpenAI burned 2.5 billion in those same six months.
[00:47:45] Research and development alone is going to top 6.7 billion. Most of that's going into building bigger, more powerful models and keeping ChatGPT running. But even with this burn, OpenAI is not running on fumes. It had nearly [00:48:00] 17.5 billion in cash and securities at midyear, and it's aiming for 13 billion in revenue by the end of 2025, alongside 8.5 billion in total burn.
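Taken at face value, the figures in this segment imply some quick math. This is just arithmetic on the reported numbers cited above (which are press reports, not audited financials):

```python
# Quick arithmetic on the figures cited in this segment
# (as reported by The Information; not audited numbers).
h1_revenue = 4.3e9       # first-half 2025 revenue
vs_last_year = 1.16      # "16% ahead of all of last year"

# If H1 2025 alone is 16% above full-year 2024, then:
implied_2024 = h1_revenue / vs_last_year
print(f"Implied full-year 2024 revenue: ~${implied_2024 / 1e9:.1f}B")

# Cash cushion against the targeted full-year burn:
cash = 17.5e9            # cash and securities at midyear
full_year_burn = 8.5e9   # targeted total 2025 burn
print(f"Runway at targeted burn: ~{cash / full_year_burn:.1f} years")
```

So "not running on fumes" checks out on these numbers: roughly two years of runway at the targeted burn, before counting any new revenue or funding.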
[00:48:11] Now, Paul, we're used to pretty big numbers in AI. These still seem pretty immense. The demand here is crazy, especially since we've been talking about how they've barely monetized ChatGPT except through subscriptions and, obviously, tokens, right?
[00:48:26] Paul Roetzer: Yeah. And the personalization of ads, video ads, like these are all revenue channels that they're obviously developing.
[00:48:32] And then continuing to expand into the software market, which I think we'll touch on a little bit later, and the enterprise market, you know, going head on with Microsoft and Google, especially on the productivity platform side. So, yeah, I mean, I think obviously ChatGPT is the cash cow for them.
[00:48:52] But I think that, you know, is gonna keep expanding out into these other lines, and maybe even, you know, like [00:49:00] completely, we've talked about competing on the infrastructure side, where they do this massive build out of compute capacity and then they start competing with AWS and Google Cloud and Microsoft Azure, where they're now selling, you know, compute and data storage and things like that.
[00:49:14] And intelligence on demand, which is gonna be everywhere, like, the inference for this stuff. So I would love to see the breakdown of what the revenue mix is projected to look like. Not the total revenue, but, like, where the percentages lie, like, in 2028. Like, I'm sure they have that in a deck somewhere.
[00:49:34] I would be interested to see what they think the bigger business lines are going to be.
[00:49:40] How OpenAI Uses AI
[00:49:40] Mike Kaput: Next up, more OpenAI news. They've also launched a new series showing how they actually run their own business on OpenAI technology. This is called OpenAI on OpenAI, and this project highlights internal tools the company is building to solve everyday problems.
[00:49:56] So a few recent entries as they launch this series [00:50:00] include highlighting a tool called GTM Assistant, which is a Slack bot that centralizes account research and product knowledge to boost sales productivity. Another is DocuGPT, which turns contracts into structured, searchable data, so finance teams can review deals faster and more consistently.
[00:50:18] They also have a research assistant that analyzes millions of support tickets to surface trends, and a support agent framework that turns each customer interaction into new training data. Even inbound sales are now routed with AI, ensuring personalized responses and fewer missed opportunities. So Paul, I mean, this is really valuable, I think, to see how OpenAI is actually using their own AI.
[00:50:42] I mean, I feel like we've been waiting for this for a while to hear about this more. Like, what did you take away from these initial examples?
[00:50:49] Paul Roetzer: SaaS companies are in trouble. So I, my initial thought was that, and then I happen, I still own some HubSpot stock. I was, again, [00:51:00] I no investing advice. so I, people who haven't followed along for a while know, my former agency was HubSpot's first partner back in 2007.
[00:51:09] So I was lucky enough to buy into HubSpot when an IPO at $35 a share back in 2012 or whatever it was. so I have long been a follower of HubSpot stock. I still own a bit of it, and so I saw it cratered and I was like, what the hell happened? Like, I, so I go into like, did they have earnings call? I missed.
[00:51:27] Like, what had happened to HubSpot stock? So I do a search: what happened to HubSpot stock last week? First result is Yahoo Finance, and here's verbatim: shares of customer platform provider HubSpot (HUBS is the NYSE listing) fell 7.2% in the afternoon session after OpenAI announced internal software applications that could potentially compete with existing SaaS offerings.
[00:51:49] The news sparked concerns across the sector as OpenAI revealed internally developed tools for sales, inbound marketing, and customer support, core areas for HubSpot. According to TD Cowen [00:52:00] analyst Derek Wood, the announcement has quote, refueled the debate that SaaS is at risk of being displaced by DIY solutions
[00:52:07] on top of LLMs, unquote. The potential for OpenAI to enter the applications market with its own AI-native solutions triggered a broader sell-off among enterprise software stocks. So yeah, it's tough. I mean, HubSpot's a great company. We still are powered by HubSpot. We love HubSpot. But I think they and other SaaS companies have to deal with this reality that people are gonna be able to vibe code stuff.
[00:52:31] They're gonna be able to build something when they get tired of a software product, or it becomes too unwieldy, or they don't like the pricing model. We're seeing it with changes in pricing from HubSpot and others, where they're moving away from license-based pricing and they're trying to figure out, when there's fewer humans to buy our licenses, how do we make more money?
[00:52:48] How do we get more value-based pricing based on outcomes, consumption, things like that. So the software industry's in a bit of an upheaval and there's lots of unknowns about where it goes. And you can see, I mean, we have, I don't know, [00:53:00] seven to 10 core SaaS products we use to run SmarterX. I can see every one of 'em dealing with this stuff.
[00:53:05] Like Asana's another one where you can feel them trying to figure out where this goes and what the business model is and how they kind of get out ahead of it. But that's, I mean, one, yeah, it's fascinating to see OpenAI share this stuff. Two, it does open up all kinds of concerns, especially if they launch this agent builder later today.
[00:53:22] Like, that goes at Zapier and Make and all these other players, even Agentforce, you know, from Salesforce. Like, it's a direct attack on those kinds of companies. They're a very, very ambitious company that needs to make a whole bunch of money, and they're gonna try a whole bunch of ways to make that money, and you don't want to be in their way when they do.
[00:53:43] In New Interview, Sam Altman Says the GPT-5 Haters Got It All Wrong
[00:53:43] Mike Kaput: Alright, next up, Sam Altman says that critics of GPT-5 have it all wrong in a new exclusive interview with Wired. So he sat down with Wired after a rocky August launch that was filled with glitches and gripes about GPT-5, and [00:54:00] many called GPT-5 overhyped and even pointed to it as a sign that the AI boom was cooling off.
[00:54:07] But in this interview, Altman insists GPT-5 marks a real turning point. He argues it's the first model genuinely accelerating scientific discovery. It's helping physicists and biologists solve problems in ways earlier systems couldn't. And while skeptics claim scaling has stalled, OpenAI says the gains in GPT-5 came from smarter training, not necessarily bigger data and more compute.
[00:54:33] They're still spending hundreds of billions of dollars on new data centers and betting that scale plus reinforcement learning will eventually get them to the next phase of AI development. Now, what's interesting is Altman actually in the interview, changed how he defines the goal, which is AGI. He says, it's not a single moment where machines surpass us.
[00:54:54] It's a process. So it's not really being treated anymore as a finish line, but as a [00:55:00] long, accelerating curve of progress. Now, Paul, a pretty interesting about-face from Sam here. What did you take away from this interview? It seemed like it was kind of trying to rewrite the record on some of these issues.
[00:55:13] Paul Roetzer: They've been moving on this AGI definition for a while. He's been hedging against it for the last 18 months. It is interesting, he does give a different definition in every single interview he does. Yeah. But this idea of no longer having a definitive moment is sort of a talking point he's been weaving into a lot of what he's been saying for a while.
[00:55:33] The first model genuinely accelerating scientific discovery, that would be a hard statement to make. I mean, right, Google is certainly pretty far along in making some massive impacts on biology and chemistry. I mean, Demis won a Nobel Prize for chemistry. That being said, I do think GPT-5 is underrated.
[00:55:51] I do think that most people don't really understand how good of a model it is. And there's been a lot of stuff even in the last couple weeks I've [00:56:00] seen where it's like assisting with math theorems and things like that. Yeah. And you're starting to see some of the top mathematicians in the world who are actually using it to assist them.
[00:56:10] So, you know, I think it is a great model, and in the first week or two out of the gate, it probably didn't get the recognition it deserved as being a great model. I use it all the time. Yeah, I mean, I was definitely at a point where I was using Gemini 2.5 Pro more, and I would say that's kind of shifted. It's maybe like 60-40 now.
[00:56:30] I'm probably back in ChatGPT 60% of the time and Gemini 40% of the time. depends on the use case. Often I test them both, but it, it's a really good model.
[00:56:39] Mike Kaput: Yeah, that is a point that always strikes me. You know, I'm sure you've had people come up to you as well at events where they're like, oh, I just got rid of my Gemini account.
[00:56:48] I'm over on ChatGPT or Claude full-time. And I'm like, I don't understand how you do this. They change so often. I have to just have all the accounts.
[00:56:55] Paul Roetzer: Right. You just lost all your chat history, and in three months, when you change back to the other one... [00:57:00] Yeah. Yeah. All right. Just ask for one for Christmas, or ask for your birthday.
[00:57:04] Say, Hey, just gimme a ChatGPT license for the year, a Gemini license for the year. That's a great idea. Instead of a gift
[00:57:09] Mike Kaput: card, give me this, you know?
[00:57:11] Paul Roetzer: Yeah. And by the way, that's a great gift idea if you have students in your life. Although you can get all the models for free in higher ed until, you know, May of '26, but maybe they don't know that, so you can level it up and say, spend the $200.
[00:57:26] There you go.
[00:57:27] Grokipedia
[00:57:27] Mike Kaput: All right. Next up. Elon Musk says that Wikipedia is hopelessly biased. So his AI company xAI is building its own rival. This project is called Grokipedia, an open source knowledge platform powered by xAI's chatbot Grok. And like Wikipedia, Grokipedia will invite public contributions, but Musk is promising fewer guardrails and more openness.
[00:57:52] He's casting it as an answer to what he calls an army of activists who are shaping narratives on the existing Wikipedia, which also happens [00:58:00] to be a primary source for Google results and AI training data. Musk has also said this project is, quote, super important for civilization, in a post on X. So Paul, I guess a lot of things potentially going on here. It's certainly within his rights to create Grokipedia.
[00:58:20] I guess I'm pretty skeptical we're suddenly going to get truth from a guy who routinely tweaks his own model when it says something he doesn't like. But I could be proven wrong.
[00:58:29] Paul Roetzer: Yeah. So this one, we've talked a little bit about, I dunno, a couple months ago, I don't remember what episode it was, but where he said, like, hey, we wanna basically rewrite history. Yes.
[00:58:39] Like, rewrite it to be correct. And at the time we said, well, whose version of history is correct? Like, why would your version be more correct than somebody else's? So I would say, at a high level, no one would argue that Wikipedia doesn't have issues, that it doesn't have, you know, inaccuracies and things like that.
[00:58:56] I saw actually someone ran an experiment last week where they [00:59:00] were giving Wikipedia pages to GPT-5 and asking it to find the inaccuracies, and it was doing very well at it. Like, it was finding multiple inaccuracies and things like that. So there's no argument that Wikipedia is a perfect source.
[00:59:12] That being said, my immediate reaction is, well, you're gonna have bias in yours too. So I thought, well, you know what, let's try Grok. So I went into my X account and I opened Grok, which is their AI assistant, their version of ChatGPT, and I said, won't Grokipedia suffer from the same human bias
[00:59:30] Elon Musk is accusing Wikipedia of, just from a different perspective? So this is the response from Grok, which Elon Musk built: Elon Musk's push for Grokipedia aims to counter what he perceives as Wikipedia's bias. His vision is an AI-driven, open source knowledge repository built by xAI, his AI company, leveraging the Grok model to provide unrestricted public access and sidestep Wikipedia's hierarchical editing structure, which he's [01:00:00] criticized since 2023 for stifling free speech.
[01:00:03] However, your point hits on a fundamental challenge. AI isn't immune to bias. It inherits it from the humans who design, train, and curate it. Grokipedia's reliance on Grok, an AI developed by xAI, means its outputs will reflect the data it's fed and the perspectives of its creators. If Wikipedia's bias stems from its editor community, often skewing left-leaning as Musk has claimed, or influenced by Western media dominance,
[01:00:29] Grokipedia could simply mirror a different bias, potentially one aligned with Musk's views or xAI's mission to understand the universe, which might prioritize tech-centric or libertarian-leaning narratives. For instance, xAI's recent GSA contract to deploy Grok to federal agencies could introduce a pro-innovation, pro-business slant shaped by Musk's influence and company goals.
[01:00:53] The risk is compounded by AI's opacity. Unlike Wikipedia's editable pages, where bias can be traced to [01:01:00] specific edits, Grokipedia's AI-generated content might embed bias in its training data, say from skewed data sets or human-labeled examples. Without clear accountability, will Grokipedia prioritize neutrality or reflect its creator's agenda?
[01:01:13] To avoid this, Grokipedia would need a diverse, global contributor base and transparent bias mitigation processes. Without them, it risks swapping one bias for another, just with a shinier AI wrapper. Musk's track record suggests ambition, but the devil's in the execution. Data diversity and oversight will determine if Grokipedia transcends human bias or just reframes it.
[01:01:37] What do you think? Should they crowdsource the training data to balance it out? So that's a great answer from Grok. Yeah. All Grok's perspective, none of my personal opinions in there. All I know is he tweeted that, like, version one of this thing, or version 0.1, is gonna be out this month, and there's no way that they went through the process that Grok is proposing to eliminate bias from this with [01:02:00] that time horizon.
[01:02:01] So all I'll say is, like, Grok will be trained on Wikipedia, so they're using Grok to write Grokipedia, and then that synthetic data from Grokipedia will then be used as a replacement to Wikipedia, which is a dominant source of xAI's training data for Grok. No further comment.
[01:02:23] Mike Kaput: Well, we'll be able to try it out soon enough.
[01:02:25] Paul Roetzer: Yeah. Yeah.
[01:02:26] Mike Kaput: All right. Next up.
[01:02:27] Tinker from Thinking Machines
[01:02:27] Mike Kaput: Thinking Machines Lab, the startup headed by former OpenAI CTO Mira Murati, has released its first product or tool, called Tinker. Tinker is a training API that strips away the messy infrastructure work of fine-tuning large language models, while still giving researchers control over the parts that matter most: data, algorithms, and evaluation.
[01:02:49] So the significance here is that it makes experimentation really easy. Princeton and Stanford researchers testing it said it freed them up from worrying about compute and let them focus on [01:03:00] their science. Andrej Karpathy called it a clever way to slice up the complexity of post-training, giving developers about 90% creative control with under 10% of the engineering overhead.
[01:03:11] So in practice, Tinker could accelerate everything from building specialized classifiers to refining smaller models for niche tasks. So Paul, the reason we're kind of talking about this is Thinking Machines Lab has raised a ton of money, but they've been really quiet about what they're actually building.
[01:03:28] Tinker is kind of a first look behind the curtain. Like, what does this tell us about what we can expect from Murati and her startup?
[01:03:35] Paul Roetzer: Their last round, they raised 2 billion at a $12 billion valuation. And I think the company's about a year old. Right. That was July of 2025 when they raised. Yeah, I mean, Murati is a major player.
[01:03:46] CTO of OpenAI, played a major role in Sam, you know, getting ousted as the CEO, and then a bigger role in getting him back as CEO. It seems like they're focusing on the technical side. Like, I don't think we're gonna be [01:04:00] getting a ChatGPT competitor from Thinking Machines Lab in the near future.
[01:04:03] Maybe that's on their roadmap, but they have been very stealthy to date. Very little is known, but it seems like they're gonna take a very open approach to their research and their building, and so it's gonna be intriguing to follow this. The average listener who's not a developer building AI, you're not gonna be using Tinker, but it is just a company we like to keep an eye on, because Murati is an important figure in AI today.
[01:04:30] California Enacts AI Transparency Law
[01:04:30] Mike Kaput: Next up, California just passed the nation's most ambitious AI transparency law. So Governor Gavin Newsom signed Senate Bill 53, SB 53, the Transparency and Frontier Artificial Intelligence Act. And it requires large AI developers to publicly disclose their safety frameworks, publish updates within 30 days, and explain how they're aligning with national and international standards.
[01:04:53] The law also creates whistleblower protections and a new system for reporting critical AI safety [01:05:00] incidents directly to the state's Office of Emergency Services. Non-compliance carries civil penalties enforceable by the Attorney General. So California has nearly 40 million residents and AI hubs like Silicon Valley.
[01:05:14] So state-level rules here matter and can also help set de facto national standards, which is why we're talking about this. And it's also why some of the AI companies lobbied so hard against it. OpenAI argued this could stifle innovation. They pushed for federal or global agreements instead. Meta launched a super PAC to sway future regulation, and Anthropic, by contrast, endorsed the final version of this bill after some negotiations.
[01:05:40] So Paul, this is definitely not as strict as the previous AI regulation being proposed in California, which we've talked about in the past, which was SB 1047. That was vetoed by Newsom. But this does seem like a pretty big deal nonetheless.
[01:05:55] Paul Roetzer: AI labs hate this. I mean, outside of Anthropic, who I'm sure doesn't actually [01:06:00] love it, generally speaking, but they are much more safety conscious and feel like there has to be something done.
[01:06:06] But the idea of having to adhere to 50 different state laws is a nightmare. And yeah, again, their point about dragging down innovation, increasing the cost of doing these things, and America keeping a competitive advantage. Which again, I'm not an expert in this stuff. That makes sense.
[01:06:23] Like, if I was running an AI lab, I would hate the idea of having to work with all these different states. I can say, as an employer who has an employee in the state of California, they're a pain in the ass to deal with, like, as a whole. California is very challenging. But, you know, I get why Governor Newsom is doing this.
[01:06:42] I understand that they can't wait for the federal government, which also, the current administration doesn't want states to get involved. They don't want any regulation, really. So the current administration isn't gonna step in and put the kind of protective measures in place, because they don't want it. It's certainly not gonna happen internationally.
[01:06:59] There's no [01:07:00] way. So, I don't know what the alternative is. I don't personally see state by state as a very logical way to do this, but it doesn't seem like it's coming from anywhere else. So who's gonna do it if they don't do it? I think that's kind of just where we are: somebody feels like they gotta do something, the labs don't want it.
[01:07:19] The current administration, the White House, doesn't want it. But that's why states have their own rights and their ability to do these kinds of things in their own laws. So yeah, I don't know. Maybe we'll dive into this one in a future episode, get a little deeper into the reaction to this. I didn't have time to pull a ton of how the different labs and leaders are responding to this.
[01:07:40] but it's certainly not what they wanted to have happen.
[01:07:45] Mercor Launches AI Productivity Index
[01:07:45] Mike Kaput: So, on a past episode we talked about OpenAI's GDPval benchmark that basically is trying to measure how well AI does real-world economic work, and hot on the heels of that, the company Mercor has just released the AI [01:08:00] Productivity Index, or APEX, which ranks models on their ability to handle work in consulting, finance, law, and medicine.
[01:08:07] APEX, like GDPval, simulates real-world deliverables like drafting contracts, building financial models, or diagnosing patients, and then has domain experts grade the output. The final results of this initial benchmark put GPT-5 on top, scoring 64% overall. Grok 4 came in second, Gemini 2.5 Flash came in third.
[01:08:30] Now, one interesting finding is that while GPT-5 led across all four of these fields, some cheaper models outperformed premium ones, suggesting cost doesn't always equal capability. And even the leading models struggled on tasks like redlining contracts, where the top models barely cleared 50% success rates.
[01:08:51] So advisors to APEX interestingly include Larry Summers in economics and Cass Sunstein in law, plus an advisor in medicine, all huge luminaries in their [01:09:00] fields. And Paul, what I found really interesting here is that they devised these test assignments in all these domains, partnering with experts in each one. So for instance, I was drawn to looking at some of the consulting stuff.
They had a benchmark for the role of consulting associate, and they had input there and review from experts from McKinsey, BCG, Deloitte, Accenture, EY. They were also advised on this by a former McKinsey global managing director. So I thought that was pretty fascinating to see.
[01:09:31] Paul Roetzer: Yeah. And we talked about Brendan Foody, the CEO, I think on the last weekly episode.
[01:09:36] I just listened to another podcast with him a couple days ago. Last time I talked about the 20VC podcast.
[01:09:44] Mike Kaput: Yeah.
[01:09:45] Paul Roetzer: This was Lenny's Podcast: why experts writing AI evals is creating the fastest growing companies in history. Another really good podcast episode. I'll drop the link in the show notes.
[01:09:55] Yeah, I mean, further, again, you gotta remember what Mercor is doing, [01:10:00] which is they're building the reinforcement learning economy of experts teaching AI models how to do their jobs. So they're hiring bankers from Goldman Sachs and attorneys from top law firms and doctors and consultants, and they're paying these people 95 to $500 an hour to train the models to do the work of experts across all these domains.
[01:10:22] So there's a number of reasons why they would do this kind of research, but it's fascinating stuff. And, you know, when you listen to Brendan, the CEO, they're very aggressively gonna go after this. And I think it's good that they're sharing the research, because they need to create more awareness about what they're doing and the impact it's gonna have on the economy and jobs.
[01:10:47] So, you know, we mentioned Brendan is someone to pay attention to. He's only 22. Yeah. Which is kind of crazy, the impact he's already having. But I would listen to what he says and listen to these episodes if you really wanna understand [01:11:00] what they're doing and the broader impact all this is gonna have.
[01:11:03] But yeah, these evals are fascinating. And I think I mentioned, maybe on episode 170 it would've been, yeah, 170, I've been working on this idea. I mentioned it to Mike a couple months ago, of helping companies develop their own evals. I think this is really important, that you're not sitting around waiting for all these other evals, waiting for them to do a study of your job or your industry.
[01:11:25] Mike Kaput: Hmm.
[01:11:26] Paul Roetzer: I think you need to do those yourself. And I've got some ideas of how to help people do that. But, like, at a real high level, just to sort of open source the thinking here: take key job titles or key roles within your company and find the fundamental things that those people do. And then, when new models come out, have set prompts that allow you to evaluate how far along these models have come.
[01:11:49] So if you have a task that generally takes, say it's like 10 steps, whatever, and it takes you two hours today. And you go check, you know, Claude Sonnet 4.5, or, you know, [01:12:00] Gemini 3 when it comes out, and you say, okay, give it the prompt, give it that project. And how well does it do, you know, accuracy?
[01:12:07] How much time did it take to do it? And then when the next model comes out, same prompt. You have this set way to benchmark your own work and how the models are evolving. And I think that's what's missing. I don't know a single company that's doing this from a knowledge work perspective.
[01:12:23] They're doing it from a coding perspective. But, again, my call to action here is don't wait around for Mercor or OpenAI or whoever else to figure out the benchmarks for your company. Develop them yourself. It's something we're gonna do at SmarterX this fall, and I'm gonna try and take that learning and package it up so other people can do the same thing in simple ways.
[01:12:44] Mike Kaput: I love that. In the meantime, go to the link we'll have in the show notes to APEX, because if you click into each of these areas, they literally show you a sample prompt that they use. It's just straightforward. It's really good for ideas in your own industry.
[01:12:59] Paul Roetzer: Yeah. So, [01:13:00] okay. I'm looking at it now. I see it.
Yeah. Like, consulting
[01:13:03] Mike Kaput: associate and they'll say like, your client's a private equity investor targeting Malaysian small to medium sized companies, blah, blah, blah. And there's this whole like series of steps to take, giving you a sample file, go like, run these calculations or find this output.
[01:13:17] Paul Roetzer: Yeah. This is really good.
[01:13:18] Yeah, definitely take a look at this and, and use it as inspiration for those kind of like personalized ones we were talking about. Yeah, that was good.
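The benchmarking loop Paul describes, fixed prompts per role that get re-scored every time a new model ships, can be sketched as a small harness. This is a hypothetical illustration: the task, checklist, and `fake_model` stand-in are invented for the example, and in practice `model_fn` would wrap a real API call to whatever model you're testing.

```python
import time

# Hypothetical internal eval: one fixed task per key role, re-run on every new model.
TASKS = {
    "consulting_associate": {
        "prompt": "Build a one-page market-sizing summary from the attached data.",
        # Required elements a passing answer must contain (placeholder criteria).
        "checklist": ["tam estimate", "growth rate", "top 3 competitors"],
    },
}

def grade(output: str, checklist: list[str]) -> float:
    """Fraction of required checklist items that appear in the model's output."""
    hits = sum(1 for item in checklist if item in output.lower())
    return hits / len(checklist)

def run_eval(model_fn, tasks: dict) -> dict:
    """Run every fixed task through a model, recording score and wall-clock time."""
    results = {}
    for name, task in tasks.items():
        start = time.perf_counter()
        output = model_fn(task["prompt"])
        elapsed = time.perf_counter() - start
        results[name] = {
            "score": grade(output, task["checklist"]),
            "seconds": round(elapsed, 3),
        }
    return results

# Stand-in for a real model call (e.g., Claude Sonnet 4.5 or Gemini via their APIs).
def fake_model(prompt: str) -> str:
    return "TAM estimate: $4B. Growth rate: 12%. Top 3 competitors: A, B, C."

results = run_eval(fake_model, TASKS)
print(results["consulting_associate"]["score"])  # → 1.0
```

Keeping the prompts and checklists fixed while only swapping `model_fn` is the point: the score/time pairs across releases become the longitudinal, company-specific benchmark Paul is suggesting.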
[01:13:27] AI Impact on Jobs Updates
[01:13:27] Mike Kaput: All right. We've got some more news around AI's evolving impact on jobs and the economy. So, a few items here we're tracking. First, a report from CNBC says the airline Lufthansa is cutting 4,000 jobs and leaning on AI to fill the gap.
[01:13:44] The airline says most of the cuts will come from admin roles in Germany. This is part of a sweeping restructuring plan, and they said that the increased use of AI will lead to greater efficiency in many areas and processes. Second, Business Insider says nearly 90% of [01:14:00] BCG's 33,000 employees now use AI, and performance reviews fold AI use into the various skills
[01:14:07] consultants are judged on, like problem solving and insight. I found it interesting, too, they're also pushing hard into custom GPTs, so employees are building no-code tools to check slides, anticipate client questions, and even enforce BCG formatting, and they've now built more of these than any other OpenAI customer.
[01:14:26] Apparently. Third, Citibank is putting every one of its 175,000 employees through AI training. The bank sent out a memo this week announcing that learning how to prompt effectively is now mandatory. And so basically they just say that if you learn to get better at prompting, you're gonna make your AI work much more powerful.
[01:14:46] Now, fourth, kind of a coda to all this: for all the headlines about AI threatening jobs, wiping out jobs, some new data tells a different story. New data says the US labor market hasn't yet been meaningfully [01:15:00] disrupted. Researchers at Yale looked at labor market trends since the launch of ChatGPT in late 2022 and found no major disruption.
[01:15:08] They found the mix of jobs is shifting a little bit faster than usual, but not dramatically, and not in a way that clearly points to AI. And they saw some of the biggest shifts started before generative AI even hit the scene. So AI tools like ChatGPT and Claude are transformational for many, but according to this, haven't yet transformed employment patterns.
[01:15:31] Most workers are still in the same kinds of jobs. There's been no clear spike in unemployment tied to AI exposure. So Paul, another week, another set of signals that AI is definitely having an impact on how companies are hiring and training. Interesting to see that according to the Yale research, the widespread job loss or disruption is not yet showing up in their data.
[01:15:52] Paul Roetzer: I would love to see that report in 12 to 18 months. Yeah. And I hope it says the same thing, right? I'm not optimistic that it will. [01:16:00] Yeah. I like to see the movement on the literacy stuff though, and the prompt training. Yes. Like, that's great. And the building of GPTs, which is what we've always said: that's the fastest way to value and to help people understand AI who don't understand it. Build a GPT that helps them do their job, a personalized GPT that does the things they do and assists them and takes away some of the mundane, repetitive stuff that they don't enjoy and find fulfilling.
[01:16:24] Like, that's how you have success. So it's good to see this stuff. The prompt training is critical. You gotta have the fundamental training with it, though. Like, prompt training on its own, like, oh, Mike, here's five prompts to use. It's like, why am I using 'em this way? Right? How does the machine work?
[01:16:39] Like, it'll get you further to provide prompt training, that's good, and a catalog of prompts to use. But fundamental understanding of the technology is also critical if you actually want to reskill and upskill people in the organization. So hopefully there's an element of that going on as well.
[01:16:56] AI Product and Funding Updates
[01:16:56] Mike Kaput: All right, Paul, our final topic. We've got AI product and funding updates. I'm just gonna [01:17:00] run through these real quick and kind of close this out here. Sounds good. All right. First up, Google is rolling out a major visual upgrade to its AI Mode in Search. You can now search with images and conversational text, asking for, you know, things like, hey, show me barrel jeans that aren't too baggy, and get back a stream of shoppable options.
[01:17:19] And this new feature is powered by Gemini 2.5. It's designed to make visual exploration and shopping online feel much more natural. Google is also launching a redesigned Google Home app. It's deeply integrated with Gemini. It has a new Ask Home feature that lets you use natural language to control devices, find specific camera clips, and even describe complex routines you wanna create.
[01:17:43] Many of the new Gemini-powered features, including daily summaries called Home Brief, will require a new Home Premium subscription starting at 10 bucks a month. A new AI startup called Periodic Labs has launched with a massive $300 million seed round to build what they call [01:18:00] AI scientists. This is founded by top researchers from OpenAI and Google DeepMind, and the company aims to move beyond internet data by creating automated robotic labs.
[01:18:12] These labs will allow AI to design, run, and learn from physical experiments to accelerate discoveries in fields like materials science. Apple is reportedly shelving plans for a cheaper, lighter version of its Vision Pro headset. Instead, the company is shifting resources to prioritize the development of AI-powered smart glasses [01:18:31] designed to compete with products from Meta. The first version of these glasses is expected to pair with an iPhone and rely heavily on voice interaction, with a potential release in 2027. And last but not least, we mentioned this a couple times: a reminder that OpenAI will have its dev day today, Monday, October 6th, after we record today's episode.
[01:18:54] So tune in for that. OpenAI says it'll be the largest one they've run, with 1,500 [01:19:00] developers expected. So Paul, that's all we've got in a packed week of AI, and I'm sure there's more to come. Thanks for breaking everything down for us.
[01:19:08] Paul Roetzer: Yeah, and I was just scanning to see if any news leaked ahead of dev day, and I came across an Axios article we might have to talk about next week.
[01:19:16] It says Senate Democrats warn AI could erase nearly 100 million US jobs in the next decade, according to a new report from the Senate HELP Committee. This, you know, again, is from Axios. The ChatGPT-based analysis says 89% of fast food jobs, 64% of accounting roles, and 47% of trucking positions are at risk. Senator Bernie Sanders wrote that AI and robotics being developed today will allow corporate America to wipe out tens of millions of decent-paying jobs, cut labor costs, and boost profits.
[01:19:42] This is the prelude to the political upheaval that they want to cause going into the midterms. So I don't want to diminish the significance of things like this at the end of an episode, but again, I'm just scanning this now. This is exactly what I would expect. This is the playbook I would've expected them to [01:20:00] start running.
[01:20:00] Where you have to start seeding doubts about AI's positive impact and focus on the negatives. So yeah, we will have to pull this and talk a little bit more about it next week, along with all the dev day stuff and maybe some other new models from some other companies. It's gonna be an endlessly exciting October, but we also have MAICON next week.
[01:20:22] So yes, Mike and I are gonna record on Friday this week, I think, 'cause we gotta be ready for MAICON next week. Final call to action: MAICON.ai. We would love to have you in Cleveland, October 14th to the 16th, with us. We will have a regular episode next week, even though we're gonna be at MAICON.
[01:20:38] We're gonna record it ahead of time. We'll launch that, and then, I guess, we'll stick with it. We can't skip an episode in October. Too much to talk about. We would run outta time. Alright, thanks everyone for joining us. We will talk to you next week, and hopefully we will see a bunch of you in Cleveland next week.
[01:20:54] Thanks for listening to the Artificial Intelligence show. Visit SmarterX.AI to continue on your AI learning [01:21:00] journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, and earn professional certificates from our AI Academy, and engaged in the Marketing AI Institute Slack community.
[01:21:18] Until next time, stay curious and explore ai.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.