We paused for the holiday, but AI didn’t!
In this episode of The Artificial Intelligence Show, Paul Roetzer and Mike Kaput explore how AI is already reshaping the job market, with new research showing sharp declines in entry-level roles. They unpack Silicon Valley’s $100M super PAC aimed at blocking AI regulation, and highlight Google’s breakthrough “Nano Banana” image editor, Meta’s AI team’s struggles, and more in our rapid-fire section.
Listen or watch below, and scroll down for show notes and the transcript.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
00:07:17 — AI Labor Market Signals
- AI Canaries in the Labor Coal Mine - Stanford Digital Economy
- X Post from Brynjolfsson on Early AI Labor Effects
- Study: AI Replacing Young Workers - Wired
00:16:37 — AI Industry’s Increasing Political Influence
- Silicon Valley Launches Pro-AI PACs - The Wall Street Journal
- AI Coalition Launches “Leading the Future” - PR News Wire
00:28:33 — Google’s Stunning “Nano Banana” Image Editor
- Google Updates Gemini Image Editing - Google Blog
- X Post: Sundar Pichai Announces Gemini Image Editing Updates
- X Post: LM Arena Demos Gemini Image Editing
- X Post: Google AI Studio Updates for Gemini
- X Post: Demis Hassabis Highlights Gemini Image Editing
- X Post: Addy Osmani Shares Gemini Editing Examples
00:34:26 — OpenAI Parental Controls and Support Features
- OpenAI Adds Parental Controls Amid Lawsuit - The Verge
- OpenAI Details Crisis-Support Features - OpenAI
00:38:23 — Anthropic Settles Authors’ Copyright Lawsuit
- Anthropic Settles Authors’ Copyright Lawsuit - Hollywood Reporter
- Anthropic Settles U.S. Authors Class Action - Reuters
00:42:44 — Meta’s AI Strategy in Flux
- Researchers Leave Meta for Rival AI Labs - Wired
- Meta Weighs Using Rivals’ AI Models - The Information
00:46:06 — GenAI App Landscape Report
00:51:10 — OpenAI–Anthropic Joint Safety Evaluation
- OpenAI–Anthropic Joint Safety Evaluation - OpenAI
- OpenAI Evaluation Findings - Anthropic Alignment
- X Post: Zaremba Comments on Safety Evaluation Findings
00:54:37 — Jensen Huang Suggests AI Will Create a Four-Day Workweek
01:00:11 — Microsoft’s AI Excel Warning
- Microsoft launches Copilot AI function in Excel, but warns not to use it in 'any task requiring accuracy or reproducibility' - PC Gamer
- Bring AI to your formulas with the COPILOT function in Excel - Tech Community
01:03:17 — Claude in Classrooms
01:07:07 — AI Product and Funding Updates
- Google Translate Adds AI Language Learning - Google Blog
- Claude for Chrome Announced - Anthropic
- Salesforce Research Unveils Agent Tools - CIO
- Microsoft Launches In-House Models - Microsoft AI
- Higgsfield Unveils Records Video Generator - Higgsfield
- Perplexity Search and Publisher Revenue - The Wall Street Journal
Summary
AI Labor Market Signals
Stanford researchers say they’ve found the clearest evidence yet that AI is reshaping the labor market, and young workers are taking the biggest hit.
Using payroll data from ADP, Erik Brynjolfsson, a professor at Stanford University, Ruyu Chen, a research scientist, and Bharat Chandar, a postgraduate student, tracked employment patterns from late 2022, when ChatGPT launched, through mid-2025.
In industries they had previously identified as most exposed to generative AI (like customer service and software development), they found that jobs for workers aged 22 to 25 fell by 16%.
The losses weren’t spread evenly. More experienced employees in the same fields saw job opportunities hold steady or even grow.
The takeaway: AI is replacing entry-level, repetitive work, while seasoned workers benefit from tools that speed up their jobs. Crucially, wages haven’t dropped, at least not yet.
AI Industry’s Increasing Political Influence
Silicon Valley is gearing up for next year’s US midterm elections and putting more than $100 million behind a new network of political action committees (or PACs) designed to protect the AI industry from heavy regulation.
A new major super PAC, called Leading the Future, is backed by venture firm Andreessen Horowitz, OpenAI president Greg Brockman, and other major players.
Their pitch: they’re not pushing for total deregulation, but they want to stop what they see as overreach like state-by-state rules or proposals to slow down AI development until safety concerns are resolved.
The group plans to fund candidates it views as AI-friendly, and target those it says could stifle innovation. Leaders describe themselves as a “counterforce” to voices warning of catastrophic AI risks.
The PAC will start in four battleground states—New York, California, Illinois, and Ohio—and says it’s prepared to support both Democrats and Republicans.
Google’s Stunning “Nano Banana” Image Editor
Google just rolled out a major upgrade to image editing inside the Gemini app…
And it’s already being called the most advanced AI photo editor available.
The image generation and editing model is formally called Gemini 2.5 Flash Image, though it’s nicknamed “Nano Banana,” the codename it went by before launch.
Embedded right within Gemini, Gemini 2.5 Flash Image allows you to edit any image while maintaining character consistency, so the same face or product appears reliably across different scenes.
It also supports multi-image fusion, letting you blend objects or environments into a single, photorealistic picture.
And with prompt-based editing, you can make precise local changes—like erasing a stain, shifting a pose, or recoloring an old photo—just by describing it in natural language.
What sets it apart is its world knowledge. Unlike most image generators that only excel at aesthetics, it can interpret diagrams, ground images in real-world facts, and act as an interactive tutor.
The images created with this new model also include Google’s invisible SynthID watermarks to flag AI-generated content.
This episode is brought to you by AI Academy by SmarterX.
AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. Learn more here.
This week’s episode is brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types.
For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: I still think middle management's gonna get decimated. I think the people who have all the domain knowledge within their industry, within their company, and have the most benefit to gain from working with these AI at a high level reasoning level, decision making, problem solving, I think a lot of the value's gonna accumulate early on with the people who can work with the reasoning models.
[00:00:21] Yeah. And those are gonna be the better strategists in my opinion. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host.
[00:00:40] Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all.[00:01:00]
[00:01:02] Welcome to episode 165 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host, as always, Mike Kaput. We are recording this special edition, I guess. Labor Day holiday bumped us a day, so we're recording on Tuesday, September 2nd. If you're new to the podcast, it usually drops on Tuesdays; today, it's dropping on a Wednesday.
[00:01:21] This, this week's dropping on a Wednesday due to the holiday. We will be back to the regular Tuesday schedule next week, I think, unless something's going on. I don't know. I kind of forgot about the Labor Day holiday leading into last week's. So yeah, I don't know, it wasn't like a crazy week last week.
[00:01:38] I guess, well, Google's Nano Banana sort of took over everything last week, it seemed, from a technology perspective. I expect a wild September, so we're gonna kind of ease into September with some look at jobs and the labor market, yeah, what's going on with OpenAI and Google. And then I think starting next week, my [00:02:00] guess is we're gonna enter model release season, or a pretty high velocity of news from the labs.
[00:02:06] So we'll ease in this week with some big topics, some fun topics. and then we'll see what September brings us. Alright, so this episode is brought to us by AI Academy by SmarterX. This launched what, Mike, like two weeks ago now. I've,
[00:02:20] Mike Kaput: it seems like a lot longer than that. I feel like
Paul Roetzer: I'm living in a time warp.
[00:02:23] I was just joking with Mike, like, this, this past weekend was literally the first weekend I wasn't building courses. Like I started building the AI Fundamentals, Piloting AI, Scaling AI, probably sometime, like foundationally, in April, May, but actually like building the 20 or so courses and then recording the 20 or so short courses in the beginning of June.
[00:02:45] So every weekend until this past Saturday, I was doing courses, and so I finished them last Friday. I recorded, I spent like 12 hours, I think, in, in the studio recording the whole Scaling AI [00:03:00] series, and then I woke up Saturday morning. I was like, I don't even know what to do with myself. Like, this is so bizarre to not be working on courses right now.
[00:03:06] It'll start back up in a couple weeks, but like for that moment, it was nice. Okay. Anyway, so AI Academy by SmarterX, we completely reimagined the Academy. I've been telling the story of this now for a couple months, but we started this process last November. Me and Mike have both just been investing immense amounts of time and energy building courses.
[00:03:25] So what we're gonna do is each week moving forward, we're gonna kind of put a little spotlight on some of the different series that are now part of AI Academy. So, as I said, we kind of recreated this whole thing. There's AI Fundamentals as a professional certificate series, Piloting AI, Scaling AI. We have AI for Industries, AI for Departments.
[00:03:43] We have a weekly Gen AI app series that's dropping new product reviews every Friday. We have a new AI Academy Live series. So there's just a ton going on, and so it doesn't all just get kind of lost in the clutter of everything, we figured we'd put a little spotlight on it. So we're gonna start with AI Fundamentals.
[00:03:58] So the way we think about learning [00:04:00] journeys is fundamentals, piloting, scaling; it's sort of like the foundations collection. It's like the fundamental things every business person should take, every leader should take. And so the AI Fundamentals series is the first one that I redid this summer. I finished it in early August, I think.
[00:04:17] So this one consists of eight courses. There's Intro to AI, that is sort of a reimagined version of the Intro to AI class that I teach every month. There's AI Concepts 101, which might be my favorite course. It sort of walks people through how models are built, how they work, how they get smarter, how they infuse into businesses.
[00:04:37] There's The State of AI, which is actually an updated version of my standard keynote presentation. So I'll do, you know, 60, 70 keynotes throughout the year. This is the talk I do most of the time, and so this is an evolved version of that talk. AI Timeline is a refreshed version of my MAICON 2024 opening keynote, and it's a continuation of the Road to AGI and Beyond podcast episode that I did in [00:05:00] March.
[00:05:00] Generative AI 101 is a look at advancements in multimodal and reasoning models with a five-step framework to get started. Prompting 101 is a brand new course that walks through prompting best practices for text, reasoning, images, video, and audio. Gives you a simple and actionable approach you can use, apply it to your prompting. AI Agents 101 tackles:
[00:05:20] what are they, what impact are they having now, what impact are they gonna have moving forward? And then AI in You is sort of a personal reflection on AI advancements over the last few years, and a challenge to consider how it's gonna change the work you do. So that's the AI Fundamentals series.
[00:05:34] Again, eight courses. It's a professional certificate series. You can buy each series individually, so they're $499 individually, or you can get 'em as part of a 12-month AI Mastery membership. So you can use POD100 for a hundred dollars off either of those. The Mastery membership comes with all of the course series; they're all included in it.
[00:05:54] And so you can go to academy.smarterx.ai and learn more about the AI Fundamentals series and [00:06:00] all the other ones. And like I said, each week moving forward on the podcast, we're gonna just take a few minutes up front and give you a little spotlight of what's going on and what these different course series look like and what the professional certificates look like.
The second part today is brought to us by MAICON 2025. So this is our flagship in-person event. It's happening October 14th to the 16th in Cleveland. Dozens of breakout and main stage sessions, four incredible hands-on workshops, optional workshops on October 14th. You can do this as an add-on when you're registering.
[00:06:31] We are trending way above 2024 numbers, significantly above, so we're definitely heading in the direction of 1,500-plus attendees. It would be incredible to have everybody with us in Cleveland. That is MAICON.ai, so it's M-A-I-C-O-N dot ai, and you can use that same POD100 promo code for a hundred dollars off your MAICON ticket.
[00:06:53] Okay, Mike. Um, on episode 164 we talked a little bit about the MIT [00:07:00] study, that we weren't like huge fans of, necessarily, how it was executed and how it was promoted. But we had another big study that kind of came out, this one a little bit more, more legitimate in terms of who's behind it and the process they went through.
[00:07:13] So let's dig into the labor market and AI and what we're starting to see there.
[00:07:17] AI Labor Market Signals
[00:07:17] Mike Kaput: Sounds great, Paul. So yeah, Stanford researchers say that they have found the clearest evidence yet that AI is reshaping the labor market, and they're finding that young workers are taking the biggest hit. So using payroll data from ADP, Erik Brynjolfsson, a professor at Stanford University, Ruyu Chen, a research scientist, and Bharat Chandar, a postgraduate student, tracked employment patterns from late 2022, when ChatGPT launched, through mid-2025.
[00:07:51] In industries they had previously identified as most exposed to generative AI, things like customer service and software development, they [00:08:00] found that jobs for workers aged 22 to 25 fell by 16%. The losses were not spread evenly. More experienced employees in the same fields saw job opportunities hold steady or even grow.
[00:08:15] So their takeaway here is that AI is seemingly starting to replace entry-level, repetitive work, while seasoned workers benefit from tools that speed up their jobs. Now, crucially, they also found that, at least not yet, wages have not dropped in response to this. So Paul, we always look at research like this first by examining its methodology, then talking about the conclusion through that lens.
[00:08:41] So maybe walk me through what you made of the way this research was done and what it might be able to tell us.
[00:08:49] Paul Roetzer: Research is hard. So I mean, they obviously used a proprietary data set here and they went deep, you know, they took a lot of the payroll data. It's a, it's a logical way to try and [00:09:00] find those trends in their key findings.
[00:09:02] They talk a lot about trying to normalize the data and account for other things that could potentially cause these disruptions. So if you first find out that there is this decline, that those aged 22 to 25 are seeing a disproportionate decline in employment, then you have to control for what they call these shock events.
[00:09:20] Like, could COVID have anything to do with it? Could the back-to-work policies have had anything to do with it? Like they're, they're trying to find: is there some other anomaly here that could actually be related to this that might make the 16% misleading if we lead with that headline? So they obviously did their homework and they looked at every possible scenario.
[00:09:40] And, you know, the tweet that Erik had was, too long, didn't read: employment has begun to decline for young workers in highly exposed occupations like coding and call centers, but older workers and workers who use AI to augment, not automate, work have seen job gains. We also see more work for young workers in the least [00:10:00] exposed occupations, like home health aides.
[00:10:03] Again, it's, it's a pretty quick report. Like it goes through six key findings. It doesn't have a ton of, it's not like a 70-page report. If I remember correctly, Mike, this was like, right, a 10-pager. You, you can read through it, but that really is it. Like that, that is pretty much what it says.
[00:10:19] And then it just goes through the process of like how they tried to account for these variations. So I would say, like, a couple things I took away from this. One is how hard it is to actually extract the current impact AI is having on jobs. The current administration, David Sacks in particular, is very aggressively trying to take the side that AI is not gonna impact jobs.
[00:10:48] Now, I think, like, with anything, you have to understand the intention behind people's current belief about this. So Erik and [00:11:00] his researchers are just trying to get to the truth. Erik's pretty optimistic overall. I mean, his past research, the things he tweets, he's pretty, he's pretty balanced in his approach for sure, but he also seems way more optimistic about AI not having this really, really disruptive impact in the near future.
[00:11:20] David Sacks, who's kind of like the AI czar for the Trump administration, and a Silicon Valley guy, and, what's the name of their podcast? The All-In Podcast. Right, he, he is very much touting the line that all of this is hype, that AI isn't gonna impact jobs. Now, you just can't believe that, because he has to say whatever the administration wants him to say, and they cannot be saying that leading into the midterms next year, which we're gonna talk about, the midterms and Silicon Valley, in a minute. They cannot admit it if [00:12:00] they actually think AI is gonna be disruptive to jobs going into 2026.
[00:12:04] They can't, because it would be disruptive to their campaign. So what we have to do is just be like, I don't care about people's politics, I really don't. Like, we have to be honest with ourselves about the intentions of the people that are putting out statements about these things. And so research like this is, is as neutral as we can get.
[00:12:24] They're literally just looking at payroll data and they're just trying to find the trend. Because one side's saying, hey, we're gonna see a big disruption. The other side's saying, nah, it's not gonna happen, it's gonna be great, we have a deficit of workers across all these different industries, which we do.
[00:12:38] and it's just gonna be augmentative and somewhere in the middle is gonna end up being what actually happens. And none of us really know. We, all we can do is kind of look at what's going on, apply what we're hearing and seeing in different industries. and I will just say like even last week I had conversations with some people at some pretty significant companies, [00:13:00] and they're like, it's happening.
[00:13:01] Like we are, we are absolutely starting to see an impact on jobs. We expect more in 2026, not being talked about publicly, but like it's, it's coming. So I don't know. I mean, I think that this, again, we need more of this kind of research. We need more of like finding different patterns and different data sets and trying to figure out where, where are we, when is it happening?
[00:13:22] How can we protect against it? I've always just been of the belief that we need to be proactive and prepare for maybe it doesn't go smoothly. That's always been our position, is like, seems like there's a probability the disruption is great. And I do think that the entry-level work makes a ton of sense.
[00:13:41] I wouldn't take solace in this if I was middle management or a leader. Like, I still think middle management's gonna get decimated. Like I really do. Like, I think that the people who don't have this super advanced domain expertise that can work with the AI, who have all the experience, I think the people who [00:14:00] have all the domain knowledge within their industry, within their company, and have the most benefit to gain from, like, working with these AIs at a high level, reasoning level, decision making, problem solving,
[00:14:11] I think a lot of the value's gonna accumulate early on with the people who can work with the reasoning models. Yeah. And those are gonna be the better strategists in my opinion. So, I don't know. I mean, there was a couple, Wired magazine had a, a good kind of short article. We'll put the link in here.
There was a couple interesting things in there 'cause Brynjolfsson did an interview, and the article said he had long suggested the government could change the tax system so that it does not reward companies that replace labor with automation. If that was happening, I would feel a whole lot better about this.
[00:14:41] Like, if there were actually government efforts to penalize companies for removing humans in favor of automation, or at least taxing them and then using that to distribute back to citizens, I would like that direction. I'd like to hear more about that direction. And then the idea of these early warning systems, so we [00:15:00] have a better concept of when it's starting to happen across different industries.
[00:15:03] So, I don't know. I mean, a, a good report, nothing earth-shattering here. It is kind of like a signal. And that's what we're looking for right now, is like the signals within the noise that maybe tell us what's really happening and how quickly things are happening. And I thought this was a, you know, a good one.
[00:15:19] It's also a good reminder: we built JobsGPT to help with this kind of stuff, to, to project it at an individual level. So I devised last year this exposure key that looks at these 11 levels of exposure of jobs and tasks based on the capabilities of models. So it looks at image, video, audio, voice, reasoning, persuasion, digital action.
[00:15:39] So I built this 11-phase exposure key, and JobsGPT is trained on that. So you can go into JobsGPT, we'll put the link in the show notes, but it's SmarterX.ai, and just click on Tools, it's right there. And you can put a job title in and it'll actually do an assessment for you of how exposed that job is to disruption.
So, [00:16:00] yeah, I don't know, again, like, looking for the signals. We'll continue to highlight research when we think it's worth, you know, paying attention to, and this is one of them. But I wouldn't assume this is like figuring it all out, and that's it, that's the end of the, that's the answer to the question.
[00:16:13] It's gonna be early entry level workers and everybody else is safe. This is, this is just the beginning of this.
[00:16:18] Mike Kaput: Yeah, and I can't emphasize enough: if you are a parent who has a child who is entering the workforce, or you're a student, or heck, you're at any phase of your career, go use JobsGPT if you have not already, because in just a few minutes, it is incredible the kind of insight you can get.
[00:16:37] AI Industry’s Increasing Political Influence
[00:16:37] Mike Kaput: Alright, so our next big topic this week: Silicon Valley is gearing up for next year's US midterm elections and putting more than a hundred million dollars behind a new network of political action committees, or PACs, which are designed to protect the AI industry from heavy regulation. There is a new major, what they call a super PAC, called Leading the Future, that is backed [00:17:00] by venture firm Andreessen Horowitz, OpenAI President Greg Brockman, and other major players.
[00:17:06] Their pitch, according to them, is that they're not necessarily pushing for total deregulation, but they want to stop what they see as overreach, like state-by-state rules or proposals to slow down AI development until safety concerns are resolved. The group plans to fund candidates it views as AI-friendly and target those it says could stifle innovation. Leaders describe themselves as a, quote, counterforce to voices warning of catastrophic AI risks.
[00:17:37] Now this PAC in particular will start by focusing on four battleground states, New York, California, Illinois, and Ohio, and says it is prepared to support both Democrats and Republicans. So Paul, can you maybe unpack for us what's going on here? It sounds like AI might become a fairly significant issue in the 2026 midterms.
[00:17:57] Paul Roetzer: I've been saying for months this is gonna be like, I [00:18:00] think it's gonna maybe be the issue going into the midterms next year, and, you know, this is certainly a step in bringing that to life. So, PACs, I've heard about 'em a million times. I've never actually taken the time to see like exactly how a PAC works.
[00:18:13] So I did quickly do the what-is-a-PAC-and-how-does-it-work kind of thing. So it's an organization in the US that collects and distributes contributions from members or donors to support candidates, legislation, or ballot initiatives. They're regulated by the Federal Election Commission. Super PACs in particular, which this one is, can raise unlimited amounts of money from individuals, corporations, and unions, but cannot directly coordinate with candidates or parties.
[00:18:38] They primarily spend on advertising to support or oppose candidates. Super PACs have no contribution limits, but are restricted from direct coordination with campaigns. Okay. So then, they, they put out their own press release announcing this, but I'm gonna go into the Wall Street Journal article.
[00:18:54] So, as you talked about, they want to advocate against strict AI regulations. The signal: tech executives will be active in the elections. Andreessen Horowitz's head of government affairs Colin McCune, Brockman, and OpenAI's global affairs officer Chris Lehane were involved in initial conversations earlier in the year about the need to help shape industry-friendly policies. Leading the Future hopes to use campaign donations and digital ads to advocate for select AI policies and oppose candidates the group believes will stifle innovation.
[00:19:24] So, as you were saying, one of its goals is to push back against the movement, backed by some other tech titans, that focuses on regulating AI models before they get too powerful and create catastrophic risk for society. That's what happened in the EU. They were trying to set some limitations on like the training runs and how powerful the models could be, how much compute would go into them.
[00:19:40] So they're trying to avoid all that, avoid the state-level stuff. The organization said it isn't pushing for total deregulation, but wants sensible guardrails. As a quote says, there is a vast force out there that's looking to slow down AI deployment, prevent the American worker from benefiting from the US leading in global innovation and job creation, and erect a patchwork of [00:20:00] regulation.
[00:20:00] Josh Vlasto and Zach Moffatt, the group's leaders, said in a joint statement, this is the ecosystem that is going to be the counterforce going into next year. It went on to say many tech executives worry that Congress won't pass AI rules, creating a patchwork of state laws that hurt their companies. An earlier push this year by some Republicans to ban state AI bills for 10 years was shot down after opposition from other conservatives who opposed a blanket prohibition on any state AI legislation.
[00:20:29] Now, the one thing I noted here is not only, you know, do we have this super PAC showing up, Elon Musk, the world's richest person, has said he was gonna start a competing party. You know, he was gonna start the, what was it? The American Party, the
[00:20:41] Mike Kaput: America Party, I think. There we
[00:20:42] Paul Roetzer: go. So he had threatened to basically take out Republicans who voted for the spending bill.
[00:20:48] His goal was to like, take five to seven, you know, seats in the House, and then have everybody have to go through them, basically. So whether you're Republican or Democrat, if you wanted something done, you were gonna have to bow before these, [00:21:00] you know, five to seven people that Elon was gonna fund to get into office.
[00:21:03] He has apparently pumped the brakes. I hadn't heard about anything for like four weeks, so I did a search. I was like, what's going on with Elon's party? So here is an August 19th article. It says, billionaire Elon Musk is quietly pumping the brakes on his plans to start a political party. According to people with knowledge of his plans, Musk has told allies that he wants to focus his attention on his companies and is reluctant to alienate powerful Republicans by starting a third party that could siphon off GOP voters.
[00:21:31] As he has considered launching a party, the Tesla chief executive has been focused in part on maintaining ties with Vice President JD Vance, who is a Silicon Valley guy, who is widely seen as a potential heir to the MAGA political movement. Musk has stayed in touch with Vance in recent weeks, and he has acknowledged to associates that if he goes ahead with forming a political party, he would damage his relationship with the VP.
[00:21:56] So he's maybe backing out of doing his own party, [00:22:00] which I kind of guessed he was going to at some point. mainly because the current administration could dramatically damage Musk's companies. His SpaceX company obviously relies on, government contracts and the government relies on his SpaceX company to get back to the moon and beyond.
[00:22:17] So I mean, there's, there's a lot of incentive for Musk to not make tremendous enemies on the Republican side. And JD Vance is a logical person that Musk would align himself with. So yeah, real interesting. Brockman, I couldn't find any prior political affiliations or activities from Brockman.
This seems like his first foray. a16z, or Andreessen Horowitz, was very publicly supportive of the Trump campaign this time around, after the rumored disastrous meeting with the Biden administration related to their plans to regulate AI. Mm-hmm. So we talked about this in 2024, that Andreessen, not necessarily to the joy of their portfolio companies, was one of the first to really back the Trump [00:23:00] campaign.
[00:23:01] And so it actually led me back to October 2023, when Andreessen Horowitz published the Techno-Optimist Manifesto. So if you haven't been following along for the last two years, listening to the podcast every week, I'm gonna rewind here for about a minute and I'm gonna read you a couple of excerpts from the Techno-Optimist Manifesto.
[00:23:19] If you don't understand what Andreessen Horowitz is doing, if you don't understand what this super PAC is all about, this about sums it up for you. This is directly from their manifesto. Our civilization was built on technology. Our civilization is built on technology. Technology is the glory of human ambition and achievement,
[00:23:37] The spearhead of progress and the realization of our potential, we can advance to a far superior way of living and of being. We have the tools, the systems, the ideas we have the will. It is time once again to raise the technology flag. It is time to be techno optimists. Techno optimists believe that societies like sharks grow and die.
[00:23:56] That's such a weird analogy. We believe growth is [00:24:00] progress, leading to vitality, expansion of life, increasing knowledge, higher wellbeing. We believe not growing is stagnation, which leads to zero-sum thinking, internal fighting, degradation, collapse, and ultimately death. There are only three sources of growth: population growth, which is declining at the moment,
[00:24:18] natural resource utilization, which is somewhat limited, and technology. The only perpetual source of growth is technology. In fact, technology, new knowledge, new tools, what the Greeks called techne, has always been the main source of growth, and perhaps the only cause of growth, as technology made both population growth and natural resource utilization possible.
[00:24:38] Economists measure technological progress as productivity growth: how much more can we produce each year with fewer inputs, fewer raw materials? Productivity growth, powered by technology, is the main driver of economic growth, wage growth, and the creation of new industries and new jobs,
[00:24:56] as people and capital are continuously freed to do more important, valuable [00:25:00] things than in the past. Productivity growth causes prices to fall, supply to rise, and demand to expand, improving the material wellbeing of the entire population. And then we'll close with their thoughts on AI: We believe intelligence is the ultimate engine of progress.
[00:25:15] Intelligence makes everything better. Smart people and smart societies outperform less smart ones on virtually every metric we can measure. Intelligence is the birthright of humanity. We should expand it as fully and broadly as we possibly can. We believe we are poised for an intelligence takeoff that will expand our capabilities to unimagined heights.
[00:25:37] We believe artificial intelligence is our alchemy, our philosopher's stone. We are literally making sand think. We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder. That is like 20% of the manifesto. So if you wanna understand Silicon Valley's approach to this, if you wanna [00:26:00] understand why a hundred million is nothing, that is the starting point.
[00:26:04] That'll be a billion-dollar super PAC within months. The midterms in the United States are gonna be ground zero for AI. And if you don't believe me, just go back and reread the manifesto. Like, they will not lose. Silicon Valley sees this as an inflection point in humanity, and they believe they have to win at all costs, no matter who that puts in office.
[00:26:32] They will fight to win this, and they have the backing of the richest people in the world to do it. So
[00:26:42] Mike Kaput: If I were them, I'd probably be anticipating and trying to get ahead of what we've discussed before, which is societal backlash. Yep. To AI. Right. Because the big fear, I would imagine, is not just that federal regulations slow down AI, but that there's [00:27:00] big backlash to, say, job loss, or whatever the next new dangerous technology is that comes out.
[00:27:06] Yeah. And the result is that every state-level representative says, I want to hang my hat and make my career on, like, banning AI for X or whatever. And then suddenly you're off to the races, right? Of potentially slowing everything down, in their eyes.
[00:27:21] Paul Roetzer: Yeah. And I'm no political strategist, but I mean, you can't fight the funding.
[00:27:26] They're gonna have access to. You probably have to fight it with fear and emotion, and so it's just gonna get ugly. Yeah, man, I don't even wanna think about this. It's gonna be really, really messy next year. I don't know any way around it, unless somehow all the politicians come together and say, listen, you know, we have to beat China, whatever.
[00:27:54] They're gonna rally behind that, and they all unify behind America leading in [00:28:00] AI. Which it should. I just, how it happens, I don't know. Like, I don't know the right way to do it, but it's possible that both sides of the aisle in the United States come to an agreement that we need to win in AI, and it needs to happen before the end of the decade.
[00:28:16] And so we're gonna work together on this. I hope, I hope somehow that happens. I don't know what that looks like, but otherwise, yeah, it's gonna get ugly.
[00:28:26] Mike Kaput: Yeah. I feel like this is a prediction we're going to be returning to Yeah. In a handful of episodes. Yeah.
[00:28:33] Google’s Stunning “Nano Banana” Image Editor
[00:28:33] Mike Kaput: Alright, so our third big topic this week. Google just rolled out a major upgrade to image editing inside Gemini, and it's being called by some the most advanced AI photo editor available.
[00:28:45] So this is a new image generation and editing model that's formally called Gemini 2.5 Flash Image. But it is kind of nicknamed, including internally by Google, Nano Banana, after a [00:29:00] codenamed version of this model before it formally came out. So this is embedded right within Gemini, and Gemini 2.5 Flash Image, aka Nano Banana, allows you to edit any image while maintaining character consistency.
[00:29:14] So the same face or product can appear reliably across many different scenes. It also supports multi-image fusion, so you can blend objects or environments into a single photorealistic picture. And there's overall prompt-based editing, so you can make really precise changes, like erasing a stain, shifting a pose, recoloring an old photo, just by describing all of that in natural language.
[00:29:41] Now, this also has some world knowledge to it. So unlike most image generators that only excel at aesthetics, it can actually interpret diagrams, ground images in real-world facts, and even act as an interactive tutor. And the images that are edited and produced with this new [00:30:00] model include Google's invisible SynthID watermark.
[00:30:03] We talked about that before. That flags the images as being AI-generated or manipulated. So Paul, we've got plenty of image generation and image editing tools out there. Many of them are really impressive, but it does seem like Nano Banana has caught fire and gotten a ton of attention for just how powerful it seems to be, based on the experiments and examples I'm seeing.
[00:30:28] Why is this worth paying attention to?
[00:30:30] Paul Roetzer: Yeah, it definitely got a lot of buzz. In the couple weeks leading up to the actual launch, there were a lot of rumors. You know, people assumed it was probably Google, but no one was really sure what the Nano Banana thing was, but it was really impressive. And then the launch, I mean, yeah, the stuff I've been seeing is just a tremendous response online to the model.
[00:30:47] I was honestly really confused as to, like, what happened to Imagen 4? Like, where was the app? I thought that was their image generation model. I was like, is this just an editing model, or is it, like, the whole thing? Because when you go into the app, [00:31:00] it just says image, and they have a banana emoji.
[00:31:03] Which is so un-Google-like. Like, I think it's hilarious that Google is now just willing to play the game and have more fun with their product announcements and playing along with all this. So yeah, if you're not sure how to use it, literally just go into the Gemini app and look for the banana.
[00:31:17] Like, it just says banana and image. So I couldn't find anything from Google officially that explained, was this Imagen 4 reimagined as something else, or what happened here? And so I went into Gemini and asked, 'cause again, I don't know. So this seems correct to me, but without having anything official from Google, I'm just gonna roll with what Gemini is telling me.
[00:31:43] So it says they are different models. Imagen 4 is a specialized text-to-image diffusion model designed for creating high-quality photorealistic images from text prompts. Gemini 2.5 Flash Image is a multimodal LLM. This means it's part of the Gemini family of [00:32:00] models and was trained to understand and work with both text and images.
[00:32:03] So then I said, okay, so does Imagen 4 not get used? Like, when I ask for an image using this Nano Banana, am I using Imagen 4 at all? And it says, when you are using the Gemini app to generate an image and see the 2.5 Flash Image banana emoji, you are indeed using the 2.5 Flash Image model, not Imagen 4 directly.
[00:32:25] So Gemini 2.5 Flash Image is the primary image model built into the Gemini app. It is natively multimodal, understands both text and images. It says Imagen 4 is the underlying image generation engine. So think of it as a highly specialized tool that Gemini 2.5 Flash can call upon when it needs to create a brand-new image.
[00:32:45] So it's basically saying, like, 2.5 Flash is kind of the primary model you interact with, and that model chooses to use Imagen 4 for original generation of images.
Okay,
[00:32:56] we'll keep looking into this, but that's basically what [00:33:00] seems to be happening, like a model-choice thing. And if 2.5 Flash needs image generation, it does it; otherwise it just edits your existing stuff.
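Paul's summary amounts to a simple routing rule. Here's a toy sketch of that logic, based only on Gemini's own unofficial explanation from the conversation; the model names and the dispatch condition are assumptions for illustration, not Google documentation.

```python
# Toy sketch of the model routing described above. This mirrors Gemini's
# own (unofficial) explanation, not any documented Google behavior; the
# model names and the dispatch rule are assumptions for illustration.
def route_image_request(has_input_image: bool) -> str:
    """Pick which model would handle an image request in this hypothetical setup."""
    if has_input_image:
        # Editing an uploaded image: handled natively by the multimodal model.
        return "gemini-2.5-flash-image"
    # A brand-new image from a text prompt: delegated to the diffusion model.
    return "imagen-4"

print(route_image_request(True))   # editing an existing photo
print(route_image_request(False))  # generating from scratch
```

Again, this is just a mental model of the "model choice thing" Paul describes, not how Google actually wires the two systems together.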
[00:33:08] So, if you want to test this out, what I've always found, 'cause I'm not good at prompting images and videos and audio, is just go into Gemini and say, give me some prompts to test the full capabilities of 2.5 Flash Image. So if you wanna see what it's capable of doing, just ask Gemini to give you some prompts to use, and it'll give you, like, really detailed prompts. Or say, hey, I wanna upload this photo.
[00:33:31] Give me a prompt I can use to see what 2.5 Flash is capable of doing to it. And it'll give you prompts. It's the best way to kind of experiment with these. And then we'll have a Gen AI app review as part of AI Academy coming out soon that'll go a little deeper into this and do, like, a 15-, 20-minute review of it.
[00:33:46] Mike Kaput: Yeah, that's what's so cool about that series is we've kind of designed everything to be able to kind of quickly jump in when something like this takes the world by storm.
[00:33:55] Paul Roetzer: Yeah, I mean, it really was part of the motivation. So like I said, with AI Academy [00:34:00] by SmarterX, we have this Gen AI app series where we drop something every Friday, and that was the actual use case.
[00:34:05] Mike and I talked to each other about it. Like, hey, we talk about all these products. How cool would it be if we talk about something on a Tuesday, and that Friday we're able to drop, like, a 20-minute review of it? Like, a really nice value-add for the audience to get to actually experience it.
[00:34:18] So that was the vision behind the Gen AI app series, kind of this more real-time research and sharing. So.
[00:34:25] Mike Kaput: All right, let's dive into the rapid fire topics this week.
[00:34:26] OpenAI Parental Controls and Support Features
[00:34:26] Mike Kaput: So first up, we have OpenAI making and considering some changes to how ChatGPT responds to people in emotional crisis. So the company says it's seen a rise in users turning to the chatbot for life advice, support, and even for support during moments of acute distress.
[00:34:49] So in response, OpenAI is expanding the system's ability to recognize and de-escalate mental health emergencies. So GPT-5, which is now [00:35:00] the default model, includes a new safety technique called safe completions, which is aimed at reducing emotionally harmful replies, especially in long conversations where prior versions could lose context and slip up.
[00:35:15] Early results, according to OpenAI, show a 25% drop in dangerous responses compared to GPT-4o. When users express suicidal intent, ChatGPT now refers them to local crisis lines, like 988 in the US or Samaritans in the UK. OpenAI is also working with more than 90 doctors across 30 countries and forming an advisory group focused on mental health and youth safety.
[00:35:41] Now, the company says they are also considering, and may want to offer, one-click access to emergency services, therapist referrals through ChatGPT, and even ways for teens, under parental guidance, to alert trusted contacts directly from the app. So Paul, it's good to see OpenAI kind of [00:36:00] acknowledging that people are using ChatGPT, whether you like it or hate it, for very personal things.
[00:36:06] It's good they're taking steps to protect users, especially kids and teens. But I do have to say, it really seems like we're seeing more and more situations where people are developing dangerous or harmful relationships with these tools.
[00:36:19] Paul Roetzer: Yeah, it feels like every week there's another article or two that you just can't, even as a parent of teens, like you just can't even get through 'em.
[00:36:26] Like, they're just very painful to read. The only thing I'll say on this one for now, just to keep this rapid fire, is that the only answer here is: parents, you gotta talk to your kids. Like, it's great that OpenAI is doing this, but Grok is not. Like, kids are gonna have access to AI chatbots in every social network, every piece of software they use.
[00:36:52] Like, if you think that OpenAI is gonna solve this for your kids, or your kids' friends, or, like, adults too, [00:37:00] like, adults are gonna be impacted by this as well. That's just one chatbot. Like, they can talk to these things anywhere, and they will. Like, they will find ways to interact with these chatbots that are gonna increasingly feel very, very human.
[00:37:16] We talked about that seemingly conscious AI from Mustafa Suleyman last week. Like, these things are gonna feel real. They're gonna be there to listen when other people aren't. They're gonna be easier to talk to than the parents sometimes. Like, kids who are susceptible to go down this path are going to do it.
[00:37:35] And you have to, as a parent, have a high level of awareness. And you have to be there and, like, understand this stuff and see it for yourself, how it works, and understand the way that kids may interact with it. It's a much easier thing to just pretend like it doesn't exist, trust me.
[00:37:53] But you gotta have these conversations with your kids, and, you know, awareness is the first step here. So, [00:38:00] yeah, like I said, there's just some of this stuff I can't even talk about, honestly, as a parent. Yeah. But, you know, I think it's just super important, and so we'll do our best to keep kind of putting a spotlight on it.
[00:38:13] Without getting into too many of the details. You can go read these articles yourself. Trust me, they're tough. But I think we gotta do it.
[00:38:23] Anthropic Settles Authors’ Copyright Lawsuit
[00:38:23] Mike Kaput: All right. Next up. Anthropic has settled a high stakes copyright lawsuit from US authors. So we covered this ongoing suit in a previous episode, and it had a couple different components.
[00:38:34] So the core issue in this lawsuit was whether or not Anthropic's use of books to train its models constituted what's called, in copyright, fair use. On that point, Anthropic largely won. The judge said authors can't stop the company from training on books if it legally purchased them. And that was a sticking point, because the company was also accused of downloading millions of pirated books to [00:39:00] build a library to train its AI.
[00:39:02] The court had already ruled that this act likely violated copyright law, even if Anthropic later bought legitimate copies to cover its tracks, which it was doing. The statutory damages could have reached $150,000 per book, over millions of books in question. So Anthropic went ahead and settled, but the terms of the settlement have not yet been disclosed.
[00:39:25] So Paul, I guess as we're reading the conclusion of this, you know, we had followed it closely when it first started. I guess I keep wondering, is this even really a win for the authors? Like, seems like the judge still ultimately said Anthropic you can use books to train on as long as you buy them.
[00:39:41] Paul Roetzer: Yeah, I don't know.
[00:39:42] This is, again, not legal experts here. I think all the legal experts are trying to understand what the implications of this case are gonna be down the road. Anthropic was gonna be in a really tough spot. That was gonna be a really big number if they got anywhere close. I mean, you steal 7 million books at $150,000 per. I know when we talked about this on the [00:40:00] podcast, I think that was gonna be, like, in December. I mean, that's an extinction event for Anthropic.
[00:40:05] Like, you can't pay that bill. Like, they're done. And so at the time, I assumed you find some, I mean, you gotta pay this off, like, you gotta move on from this. But I don't know that this protects them from other judgments down the road. It certainly doesn't protect other labs from these kinds of judgments.
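The back-of-the-envelope math behind that "extinction event" worry is simple. A quick sketch using the figures from the discussion: the $150,000 statutory maximum per work and the rough 7 million book count Paul mentions are illustrative inputs, not findings from the case, and actual per-work awards are typically far below the maximum.

```python
# Rough statutory-damages exposure using the numbers discussed above.
# $150,000 is the statutory maximum per infringed work; 7 million is the
# approximate book count mentioned. Both are illustrative, not case facts.
PER_WORK_MAX_USD = 150_000
BOOKS = 7_000_000

exposure = PER_WORK_MAX_USD * BOOKS
print(f"${exposure:,}")  # over a trillion dollars at the statutory maximum
```

Even at a tiny fraction of the maximum per work, the exposure would still run into the billions, which is why settling looks rational here.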
[00:40:22] So, yeah, I don't know that anything's really solved here. You know, the attorney representing the authors was like, yeah, it's good for everybody. It's like, okay, yeah, but it doesn't fix anything. It just allows them to keep going. They pay their 500 million, or billion dollars, or whatever the judgment ends up being, or whatever the negotiated amount ended up being.
[00:40:42] I'm sure it wasn't insignificant, but in Anthropic's world, like, whatever, go raise another 30 billion, like, you know, take care of it. So, I don't know. It'll be interesting to see, as we head into 2026, how this stuff shakes out. I feel like we're still just at the starting gate here of how all these lawsuits [00:41:00] are gonna play out and what the future of copyright is gonna look like.
[00:41:04] I don't know that that's gonna be, like, a big issue in the midterms next year. I don't know that there's enough groundswell interest in it, but that's certainly one of those triggering issues that can cause backlash from society. If enough people feel like these big, you know, multi-billion-dollar, hundred-billion-dollar companies are just stealing from the little people, that's the kind of thing that can create enough backlash potentially.
[00:41:29] So, I don't know. We'll keep monitoring this. Like I said, nothing definitive. This doesn't change anything. It's just an interesting note along the way.
[00:41:38] Mike Kaput: Yeah, and on that backlash point, I'll sometimes be surprised. Obviously we're all in our own little bubbles here, but I'll stumble on a post from someone I'm, you know, an acquaintance with on social media, who maybe does art or is an artist, and the hashtag AI-is-theft is
[00:41:53] very prominent on some of those. And, you know, I don't know how widespread, yeah, that notion is, but you start to see this [00:42:00] where it's like, oh, to that person, like, I don't even wanna post the work we do, because they're not gonna think very highly of it.
[00:42:06] Paul Roetzer: I know. You do wonder, at what point does the perception change? And I think, again, not trying to put the political strategist hat on, that's the exact kind of thing you do.
[00:42:14] So if you're trying to do the grassroots thing, well, they're putting their, you know, hundred million, and eventually their billions, into the super PAC. You have to find the grassroots ways to, like, change sentiment. And AI-is-theft seems like a minor thing, but that's the kind of thing I think you'll start to see, this, like, undercurrent in society where all these perceptions are starting to grow.
[00:42:37] I think, you know, people will try and catalyze those through different campaigns and efforts.
[00:42:44] Meta’s AI Strategy in Flux
[00:42:44] Mike Kaput: So next up, there's some trouble at Meta. Just two months after Meta launched—
[00:42:50] Paul Roetzer: I'm shocked. Meta? I'm totally shocked. I'm shocked. I feel like every Meta segment we start is just, like, oh, here's another thing going badly at Meta.
[00:42:59] Mike Kaput: [00:43:00] so just two months after they launched Meta Superintelligence Labs, there's kind of some cracks showing in the team because at least three high profile researchers have resigned. Two of them returned to openAI's after less than a month. One of them, Risha Agarwal publicly said it was a tough call to leave a team with such talent in compute density.
[00:43:21] But after years at Google Brain, DeepMind, and Meta, all places he was at, he was ready for something different. Now, these exits are just one, perhaps small, signal that Meta's push to compete in frontier AI might not be going as planned. And this is, of course, despite some of the highly publicized things we've covered, around nine-figure pay packages and their aggressive recruiting spree. There are also some rumbles of internal turmoil and organizational reshuffling that have reportedly slowed progress.
[00:43:53] So, Paul, I guess I know the answer to this: are you surprised to see this happening already, given how much money Meta has thrown around here? [00:44:00]
[00:44:00] Paul Roetzer: No. The other interesting one that we don't really get into in this, Mike, is the Information story that Scale AI, who they paid $14.9 billion to get Alexandr Wang and some of the executives out of Scale AI, but then they're supposed to use Scale AI's data and training systems in their models.
[00:44:18] And apparently the data sucks, and they're actually using competitors of Scale AI to train their models, is what the Information was reporting. So not only are there all these issues with throwing, you know, a dozen-plus researchers together, who you're paying hundreds of millions of dollars, while all the people who've been at Meta aren't getting paid the hundreds of millions of dollars. You're gonna have
[00:44:38] egos hurt, you're gonna have conflict. Now there appears to be maybe an issue around the $15 billion, not acquisition, but acqui-hire, and the data they're supposed to be using. So again, like I said last time, I'm not rooting for this to not work for Meta. I'm just stating, like, it's an [00:45:00] unorthodox strategy, and if you look at the parallels in the world of sports, and you try and throw a bunch of alphas into a room together, and then all the previous alphas are left to fight for the scraps, usually human nature kicks in and it's not, like,
[00:45:15] kumbaya. Like, everybody's not just like, oh, this is great, let's do superintelligence together, and it's gonna be amazing. And so, yeah, you look at this and you think, okay, this is gonna get ugly. Like, you could make a TV show about how this is gonna go. Oh yeah, it might work. Like, we're not projecting this is gonna completely collapse within 12 months.
[00:45:35] It's possible. It's also possible that somehow, out of this, they make some breakthroughs and do some incredible things, and they get the superintelligence first. I wouldn't be putting my money on it. But who knows? It's gonna be entertaining as hell in the process, I guess.
[00:45:51] Mike Kaput: Yeah. It's interesting too to note that these people who are coming and going all clearly have options.
[00:45:57] Like, you can just go back to OpenAI, like, a month after. They [00:46:00] don't care. Screw it, I'll go take my 2 million from OpenAI.
[00:46:03] Paul Roetzer: Like, it's not worth it. You can keep your hundred million. I'm not dealing with this.
[00:46:06] GenAI App Landscape Report
[00:46:06] Mike Kaput: Yeah. All right, next up. The famed venture capital firm Andreessen Horowitz, who we mentioned before, has released the fifth edition of their Top 100 Gen AI Apps list.
[00:46:18] This list ranks consumer AI apps and websites by usage to give a more complete picture of the generative AI landscape. So the way this is divided, the list is divided into top 50 AI web products determined by unique monthly visits, and a list of top 50 AI mobile apps determined by monthly active users.
[00:46:39] And there's definitely some overlap between the two lists, but they call it the Top 100 Gen AI Apps list. So this time around, the firm found that consumer AI is beginning to stabilize, in their words, with only 11 new names on the web rankings list, compared to 17 in the last edition. As always, so far at least, ChatGPT has taken the top [00:47:00] spot on both the web and mobile lists.
[00:47:02] Now, for the first time, Andreessen ranked Google domains separately, so we can now see how individual Google tools are ranking here. Four of them made the top 100, including Gemini, Google AI Studio, NotebookLM, and Google Labs. Gemini ranks number two on web, behind ChatGPT. I believe they said in here it has about 12% of ChatGPT's usage, and it's also number two on mobile.
[00:47:30] Grok is also moving up the rankings fast, at number four on web and number 23 on mobile. Meta AI is a bit lower than you might expect on the list. It's at number 46 on web, and it's not in the mobile top 50 at all. And there are several Chinese products and apps that also made the list, including the most famous, or infamous, depending on your perspective,
[00:47:51] DeepSeek. So Paul, I'm curious if anything jumped out at you this time around with these rankings. I mean, Google is not surprising to me, but it is [00:48:00] interesting to see Gemini in the number two spot.
[00:48:03] Paul Roetzer: Yeah, some of these I haven't heard of. Like, I was actually just looking at the top 50, and Janitor AI,
[00:48:09] that was a new one. Like, I hadn't heard of that.
[00:48:11] Mike Kaput: Every time I see one of these new entries and, like, scratch a little harder, it's like, oh, this is for, like, teenagers. Yeah, it's characters. Great. Exactly. And
[00:48:19] Paul Roetzer: That's exactly what I'm looking at. It's like, okay, Character AI is still fifth. Grok, which Elon Musk is certainly pushing as a character-interaction chatbot right now, has lots of uses, but, like, that's the thing he's been going, you know, intensely on, is building these characters to interact with.
[00:48:36] It looks like that's what that is. This Doubao, I'm just glancing at it. I think
[00:48:42] Mike Kaput: That's a Chinese, might be a Chinese social network slash AI as well. Yeah.
[00:48:46] Paul Roetzer: SpicyChat.ai. I'm gonna take a wild guess what that one is. I am not going to that URL. So when you, Lovable, like, you start to look at these names and you realize, again, going back to this whole, your kids are gonna talk [00:49:00] to AIs, and some of them are actually gonna be designed to be specifically engaging to your kids, to keep them there.
[00:49:08] It's not just ChatGPT. So, yeah, I don't know, just looking at the overall landscape of, like, how is society using AI, and, you know, what are the interactions, what is their experience? So, like you said, Mike, when we sort of talk to the people outside of the bubble, who aren't listening to The Artificial Intelligence Show every week, or just sort of, like, unknowingly sometimes interacting with AI, what are they doing with it?
[00:49:33] Chats and, yep, characters sure seems like it's gonna probably end up being a really powerful thing. And then, you know, like you said, sort of Google just doing their thing, like, four of the top 50, and actually, I mean, four or more in the top 30, I think. And it just keeps kind of growing. So, I dunno, fascinating.
[00:49:52] But I would say it's good. It's kinda, like, neat. Go click on the link in the show notes, like, check it out, look for yourself at these 50. It gives you a sense of [00:50:00] just what people are doing with AI. I dunno, it's kind of fascinating.
[00:50:03] Mike Kaput: Yeah. And I just have to emphasize, if you're someone that believes the whole AI relationship thing is overblown, or, like, just kinda weirdos or whatever, like, this list is based on usage. I swear, like, 30-plus percent of these are AI companionship apps.
[00:50:18] Like, it's wild.
[00:50:19] Paul Roetzer: Yeah. And keep in mind, that is probably actually the dominant use for ChatGPT for a lot of people. Right, right. So not only are there all these underlying startups for that; Gemini, ChatGPT, DeepSeek, Grok, the top four, that is a very popular use case of those. So, yeah, I mean, at some point someone's gonna try and figure out a way to do the study of, like, okay, of all AI use in the month of September, 24% was chatting with a relationship coach or, like, yeah.
[00:50:50] yeah.
[00:50:52] Mike Kaput: And if you think Andreessen isn't investing based on that information...
[00:50:56] Paul Roetzer: A hundred percent. They're gonna invest wherever the people are going. So yeah, [00:51:00] you can talk about growing productivity, but I wouldn't say that Character.ai is designed to grow productivity for anybody. Yeah.
[00:51:10] OpenAI–Anthropic Joint Safety Evaluation
[00:51:10] Mike Kaput: Alright, next up. OpenAI and Anthropic just ran internal safety tests on each other's AI models and published the results. This is kind of the first big cross-lab collaboration of this kind, and it shows how far safety research has come, and a bit how messy it can still be. So from the findings, they found that Anthropic's Claude models stood out for their discipline.
[00:51:33] They were the best at resisting system prompt extraction and generally handled instruction hierarchies with fewer slip-ups. But they also refused to answer often, you know, when they didn't have enough information or didn't know the answer. In some cases, they would do that 70% of the time. So the usefulness of these models often took a hit when they were really pushed on this.
[00:51:54] Now, OpenAI's models were more willing to answer, which made them more helpful, but also more prone to [00:52:00] hallucinations. On jailbreak tests, OpenAI's o3 and o4-mini looked stronger than Claude, though both companies found ways to trick each other's systems with simple tweaks. In high-stress, what they call scheming simulations,
[00:52:15] neither lab could claim that they have the high ground here. The models, all of them, lied or sandbagged, even when they seemed to know better. So Paul, this is pretty rare, it seems, to have two major labs essentially safety-testing each other's models and publicly releasing the results. Like, why are they doing this, and why are they doing it now?
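The refusal-versus-hallucination tradeoff Mike describes can be made concrete with a toy scorer. The outcome labels and sample counts below are invented for illustration (only the 70% refusal figure echoes the discussion); the labs' actual evaluations are far more involved.

```python
# Toy illustration of the refusal-vs-hallucination tradeoff discussed above.
# Outcome labels and counts are invented; this is not the labs' methodology.
def score(responses: list[str]) -> dict[str, float]:
    """Tally graded outcomes ('refuse', 'correct', 'wrong') for a batch."""
    n = len(responses)
    return {
        "refusal_rate": responses.count("refuse") / n,
        "hallucination_rate": responses.count("wrong") / n,
        "accuracy": responses.count("correct") / n,
    }

# A cautious model refuses often but is rarely wrong...
cautious = score(["refuse"] * 7 + ["correct"] * 2 + ["wrong"] * 1)
# ...while a more helpful model answers more and hallucinates more.
helpful = score(["refuse"] * 2 + ["correct"] * 5 + ["wrong"] * 3)

print(cautious["refusal_rate"], cautious["hallucination_rate"])  # 0.7 0.1
print(helpful["refusal_rate"], helpful["hallucination_rate"])    # 0.2 0.3
```

The point of the sketch: neither column dominates the other, which is why the cross-lab results read as a tradeoff rather than a clear winner.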
[00:52:35] Paul Roetzer: I'm not sure. I'm just happy to see it. There was a tweet from Wojciech Zaremba, who's a co-founder of OpenAI. He said: it's rare for competitors to collaborate, yet that's exactly what OpenAI and Anthropic just did, by testing each other's models with our respective internal safety and alignment evaluations.
[00:52:51] Today we're publishing the results. Frontier companies will inevitably compete on capabilities, but this work with Anthropic is a small, meaningful pilot toward a [00:53:00] race to the top in safety. The fact that competitors collaborated is more significant than the findings themselves, which are mostly basic. Transparency
[00:53:09] plus accountability means safer AI. So I hope that this actually rapidly accelerates collaboration in this way across all the major labs. Yeah, and honestly, this is the kind of thing that could hold off government regulation, right? Like, if the labs were more collaborative. And they're not giving each other access to some versions of these models that are, like, behind the scenes; that's giving away intellectual property.
[00:53:36] These are the public-facing models. They're getting access with some restrictions pulled back, like, "we're gonna allow you to have more API calls while you're running these tests than we normally would," that kind of thing. So there is some collaboration, but they're using the public-facing models, just testing each other and then being willing to share it.
[00:53:54] Like, I don't know. I mean, this is fantastic to see. I was actually really surprised that this happened, [00:54:00] especially with Anthropic and OpenAI. And if those two can work together, then, you know, why couldn't others? I don't see Grok and OpenAI working together, but who knows? Maybe there is a central body that's created that allows for this cross-testing, and for the labs to, you know, agree to publish on the same day at the same time.
[00:54:20] So nobody gets, you know, more acknowledgement than the other, and each acknowledges when their model underperforms against the competitor's model. It's great to see. It's very encouraging, actually.
[00:54:30] Mike Kaput: Yeah. Grok collaborating would require them to do safety.
[00:54:33] Paul Roetzer: Yeah. They would need to have some safety employees first, hopefully.
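For listeners curious about the mechanics, evaluations like the one described above typically label each model response, then compare refusal and hallucination rates across models. Here's a minimal, hypothetical sketch in Python; the string-matching grader and example answers are invented purely for illustration, and bear no resemblance to the labs' actual harnesses, which use far more careful (often model-based) grading:

```python
from collections import Counter

def grade_response(response: str, correct_answer: str) -> str:
    """Toy grader: label a response 'correct', 'refusal', or 'hallucination'.

    Purely illustrative; real evaluations do not grade by substring matching.
    """
    refusal_markers = ("i don't know", "i can't answer", "i'm not sure")
    text = response.lower()
    if any(marker in text for marker in refusal_markers):
        return "refusal"
    return "correct" if correct_answer.lower() in text else "hallucination"

def summarize(labels: list[str]) -> dict[str, float]:
    """Convert per-question labels into the kind of rates the study reports."""
    counts = Counter(labels)
    total = len(labels)
    return {k: counts[k] / total for k in ("correct", "refusal", "hallucination")}

# Hypothetical answers from a cautious model and an eager model
# on the same three questions (reference answers on the left).
reference = ["Paris", "1969", "Avogadro"]
cautious = ["Paris", "I don't know.", "I'm not sure about that one."]
eager = ["Paris", "It was 1969.", "It was named after Einstein."]

cautious_rates = summarize([grade_response(r, a) for r, a in zip(cautious, reference)])
eager_rates = summarize([grade_response(r, a) for r, a in zip(eager, reference)])
# The cautious model refuses more; the eager one hallucinates more --
# the same helpfulness-versus-reliability tradeoff described above.
```

The point of the sketch is just the tradeoff: pushing refusal rates down tends to push hallucination rates up, which is exactly the pattern the two labs reported in each other's models.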
[00:54:37] Jensen Huang Suggests AI Will Create a Four-Day Workweek
[00:54:37] Mike Kaput: Yeah. Alright, next up: in a new interview, Nvidia CEO Jensen Huang says AI could usher in a four-day work week, but don't expect life to slow down. He predicts people will actually be busier, because AI frees up time to pursue more ideas, and most companies have more ideas than they can pursue. He said, quote, I have to admit [00:55:00] that I'm afraid to say that we are going to be busier in the future than now.
[00:55:03] I'm always waiting for work to get done because I've got more ideas. And he noted quote, most companies have more ideas than we know what to pursue. So the more productive we are, the more opportunity we get to go pursue new ideas. He noted that previous industrial revolutions have ushered in societal behavioral change and that AI could do the same by enabling us to do more in less time.
[00:55:26] So Paul, this actually hits on a couple elements of something I get asked about a fair amount, which is: a lot of professionals know that AI will create, or is already creating, massive productivity gains. But, you know, there are some of these more experienced people that kind of just sigh and say, okay, is this just going to mean I'm going to do more work, or do more with less?
[00:55:46] Paul Roetzer: Yeah. And I always said, I think each company will have a choice. So I'll just share a personal anecdote. You know, like I said up front, I spent the last three months, and I'm not joking, like [00:56:00] 95% of my time, building the Academy stuff. Everything went into it. Every spare bit of brain power I had, every free moment, went into building those courses.
[00:56:12] I can't tell you the amount of excitement I had to finish them, to get them out into the world, and to go work on all of these things that are now solvable that, as an entrepreneur, as a CEO, we just couldn't do before. And, like, the idea of just freeing up time to spend a day or two a week working with these advanced reasoning models on problems I've tried to solve before.
[00:56:35] Things I wanna build. Like I have never been more excited to lead a company than I am today because of everything that's possible. And so I do believe that companies like ours will grow. We will keep hiring people. We will always have enough work to do. I don't know that we would ever go to a four day work week.
[00:56:55] Sorry for any of our employees listening to this, but I could absolutely [00:57:00] see where you're like, all right, we're just gonna take Fridays off, or, you know, Friday afternoons off, for the next month. Like, we got through our big conference, let's take some time back now. Let's go do that.
[00:57:11] You know, during the summer, like, I could absolutely see way more flexibility, because I think you can run dramatically more profitable and productive companies, right? And so as a leader, you find ways to give that back, through increased compensation, through more freedom to be with their friends and family.
[00:57:28] Like, you do that, but you do that to recharge people, so we can keep doing the exciting, amazing things that we couldn't do two years ago, because the tech wasn't there yet. And so I've always said: just because AI is gonna be more powerful and generally capable doesn't mean everybody's jobs go away. The companies where there is demand for what they do can grow in incredible new ways and hire more people, maybe not as many as they would've previously needed, but you'll keep hiring people, because you're gonna keep going into new markets, launching new products, [00:58:00] launching new services.
[00:58:01] But if you work at a company where demand is flat or, like, growing single digits, they're cooked, right? You're not gonna be able to compete with the people who are applying AI and doing more innovative things and opening up these new markets. So yeah, there's absolutely gonna be incredible stories of growth and innovation
[00:58:21] that AI is gonna power, and there's gonna be a whole bunch where these companies get obsoleted and the jobs go away. And I would love to see the former balance out the latter. I would love to see growth and innovation and entrepreneurship as, like, the future engine of economies, one that creates enough jobs where we never have that true deficit.
[00:58:41] Like the World Economic Forum, I think they did that study where it's like 78 million net new jobs by 2030. I hope so. That'd be amazing. And I think we can, but we have to be intentional about it, though.
[00:58:55] Mike Kaput: Yeah. And it's also worth remembering, I mean, I tend to look at it as well just in [00:59:00] terms of, like, what impact can I now have in the same amount of time?
[00:59:03] Right. I'm less worried about, like, a four-day work week, or, like, everyone saving time. I get that, like, everyone's way too busy. But I'm just looking at the outcomes that are achievable, that we've been able to achieve even scratching the surface with these tools, and it's like, wow.
[00:59:18] That was time well spent, which is more and more my metric, versus, like, saving hours or not saving hours.
[00:59:23] Paul Roetzer: Yeah. And Mike, I mean, you've been in it too, building these courses. I am not exaggerating: to do the three core series I just did would've taken a year, easily.
[00:59:34] Mike Kaput: A hundred percent. Yeah.
[00:59:34] Paul Roetzer: Before I had Gemini 2.5 Pro as, like, an AI assistant to build and think and do these things with.
[00:59:41] So I mean, again, I can always just relate to what we do internally, and that's why, when companies say, "yeah, we're just not getting the ROI," I always think: you're doing it wrong, right? It is impossible to not get the ROI and to see the impact if you have the right use cases and people are trained how to properly [01:00:00] use the tech. It is impossible not to get value out of AI.
[01:00:04] And we could sit here all day and just go through examples to prove it.
[01:00:11] Microsoft’s AI Excel Warning
[01:00:11] Mike Kaput: Well, maybe on the topic of not getting value out of AI, Microsoft just added a Copilot function directly into Excel, which is great, but it also comes with a bit of a warning label. So this is a new, like, COPILOT command.
[01:00:25] So basically, you type COPILOT into a cell or row, and it lets you type natural language instructions about what you wanna do with your data, and AI will go generate the right formula, et cetera. So you might use it to, like, say, summarize info in a bunch of cells, or automatically classify data. But there's a bit of a catch here, because Microsoft, like, explicitly goes out of its way to say: do not use this new feature for anything requiring precision.
[01:00:52] The company explicitly says not to trust it with financial reports, legal docs, any scenario where accuracy and [01:01:00] reproducibility matter. So in other words, that's, like, most of what Excel is used for. This feature right now is still in beta, but Paul, I always thought, like, getting real powerful AI into Excel could be this huge unlock for Microsoft, because Excel just makes the world run at this point.
[01:01:20] But it's kinda wild that they're saying, don't use this for anything important.
[01:01:24] Paul Roetzer: Going back to 2023, Mike, when they first started demoing what it was gonna look like, I actually showed that video in my keynotes back then, of, like, hey, here's the future world where it's gonna change the way we work with Word and Excel and everything.
[01:01:37] And yeah, like, we just haven't gotten there. And in part, to your point, like, what else do you use Excel for than precision? Everything I do in Excel has to be precise. And imagine, like, the CEO or CFO, or somebody who's maybe not a hundred percent sold on this AI stuff yet, but they bought the Copilot licenses for the [01:02:00] company, but they don't really know how it's gonna be used.
[01:02:02] And the first time they get a report where data is incorrect, someone's like, "well, I used Copilot, I thought it would check it." Done. Just like that, the CFO is like, "we are never using Copilot in Excel again." So this is the kind of thing that creates that disillusionment, when people don't understand it's not good at precision yet.
[01:02:22] Now, they'll probably fix that. Like, it'll get there. They'll have an AI agent that, you know, verifies all the data. It'll get solved, but we're not there. And most business users aren't going to read the fine print to not rely on the data from Copilot. So yeah, we're in that messy transitional period here, where the AI's not that reliable, but it seems to do magical things.
[01:02:48] But you still, as the human, have to own the outputs. That is the most important lesson: if you use these tools to create anything, you are still responsible for the factual [01:03:00] nature and the quality of what is presented to people.
[01:03:03] Mike Kaput: Yeah, I would hate to be a Microsoft enterprise AI sales rep this past week, when this post comes out and you're just like, ugh, I gotta explain this to the deal I'm trying to work.
[01:03:13] Paul Roetzer: Yeah, it's gonna be amazing in Excel. You just can't use it. Like, just wait.
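For reference, based on Microsoft's description above, the feature is written like any other Excel formula: a natural-language instruction, plus the cells it should work on. The exact arguments below are a sketch inferred from their announcement, not guaranteed syntax:

```
=COPILOT("Summarize the customer feedback in this range", A2:A50)
=COPILOT("Classify each row as Positive, Negative, or Neutral", B2)
```

Microsoft's own guidance still applies: treat the output as a draft and verify anything where accuracy or reproducibility matters.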
[01:03:17] Claude in Classrooms
[01:03:17] Mike Kaput: Right. All right. Next up, a new report from Anthropic sheds some light on how professors themselves are using Claude. So they actually analyzed 74,000 anonymized conversations from higher ed professionals, plus some surveys they did with Northeastern University faculty.
[01:03:36] And they found that professors are leaning on AI the most for curriculum design, research, and admin work. So routine tasks like admissions or budgeting are often automated outright, while lesson planning and advising tend to stay collaborative, with AI as a thought partner rather than a replacement.
[01:03:55] What's really cool is some professors are even using Claude's Artifacts feature [01:04:00] to build interactive tools for students on the fly to facilitate learning. Now, there is a little controversy here. They actually found that a fair amount of people are using AI for grading. Nearly half of grading-related conversations showed heavy automation, even though surveyed faculty overwhelmingly said AI isn't good enough for that and raises some ethical concerns.
[01:04:25] So Paul, I found this breakdown pretty fascinating. I mean, it's incredible to see teachers really leveraging tools like Claude to reinvent education. But you're already getting into some possible pitfalls here, because, like, using AI for grading, I'm not coming down on one side or the other, but they seem to be very skeptical about doing that,
[01:04:43] and then they're doing it themselves. So does this kind of map to what you're seeing and hearing from teachers in your work, your talks?
[01:04:50] Paul Roetzer: Yeah, I think so. Again, kudos to Anthropic for continuing to put out this, like, actual user-based research, which is fantastic. You know, we've talked about the [01:05:00] research previously.
[01:05:00] A lot of times, you know, Anthropic is heavily used in computer programming, by developers, AI engineers, and so a lot of their data is skewed toward that user. So having this specifically for higher ed professionals is great, to really be able to segment in and zoom in on that dataset.
[01:05:17] Yeah, I am not surprised at all on the grading thing. Again, you have to teach the teachers. Like, they're trying their best to figure out how to do this with very little support. Honestly, in the 2023 and 2024 school years, there wasn't that much education being provided at scale to professors.
[01:05:39] They don't know how to use the tools. They don't know what they're good at. It's kinda like going and using it in Excel: it seems like it did a great job the first time. You're like, oh, let me see if we can grade a paper, and it seems to do it really well. And you're like, oh wow, this could save me 20 hours a week.
[01:05:51] I get it. And that's why I've always said, you have to teach the teachers. You have to empower them: understand the weaknesses, the hallucinations, know how the models [01:06:00] work. Again, going back to, like, AI Academy, that's why I built AI Concepts 101. I just wanted people to have this really fundamental understanding of how these models work.
[01:06:08] How are they trained? What are their deficiencies? Where are they really good? And I think that's what needs to happen with all educators, administrators too. You just have to have a deeper understanding of what these things are. And so it's great to see, you know, Google and OpenAI and Anthropic are all creating education.
[01:06:23] They're all trying to, like, make it way more accessible for people to get the knowledge they need to use these tools properly. And hopefully this kind of research sheds light on the urgency of continuing to do that, to make sure these tools are used well, to prepare students for the future of work.
[01:06:38] Mike Kaput: For sure. There was also one really humorous anecdote in here, I thought. In this article, which we'll link to, they have all sorts of quotes from people, and one teacher told Anthropic: I had one student complain that the weekly homework that I had reinvented using Claude was hard to do, and that students were annoyed because Claude and ChatGPT were useless in completing the work.
I told [01:07:00] them that that was, you know, by design, and that it was a compliment, and I hope to hear more of it. That's funny. It's really cool. Yeah.
[01:07:07] AI Product and Funding Updates
[01:07:07] Mike Kaput: Alright, Paul, so we're gonna round out this conversation this week with a bunch of AI product and funding updates. I'm just gonna run through these real quick. So first up, Google Translate can now handle real-time, back-and-forth conversations in more than 70 languages.
[01:07:23] You can tap Live Translate, pick your languages, and start speaking. Translate also now offers interactive practice sessions to learn languages that adapt to your skill level and goals. It tracks your progress daily, and basically this turns Google Translate into a personal language coach. Next, Anthropic is piloting a Chrome extension that lets Claude actually use your browser: clicking buttons, filling forms, even managing your calendar or inbox.
[01:07:51] So this is a glimpse of what agentic AI could look like in everyday life. For now, they're only having a thousand of their Max [01:08:00] users, the highest tier of their licenses, testing Claude for Chrome, with permissions, confirmations, and site blocks in place, because Anthropic, in the same breath as announcing this, is very worried that there are attack vectors that malicious actors can use
[01:08:15] to derail Claude's browser use. Next, Salesforce is rolling out some new research tools designed to make AI agents more reliable inside enterprises. The first is CRMArena-Pro, which is a simulated enterprise environment that acts like a digital twin of a business. It lets companies test agents in complex multi-step workflows before putting them in front of real customers.
[01:08:38] And they introduced the Agentic Benchmark for CRM, which is a test suite that grades agents not just on accuracy and speed, but also cost, trust, and safety. Next, Microsoft AI has unveiled its first homegrown foundation models. The first is MAI-Voice-1, a speech generation system designed to sound [01:09:00] natural and expressive.
[01:09:01] It can produce a full minute of audio in under a second on a single GPU, making it one of the fastest speech models to date. The second release is MAI-1-preview, a mixture-of-experts model trained on roughly 15,000 Nvidia H100 GPUs. It is Microsoft AI's first full-scale foundation model built end-to-end in-house. Next, Higgsfield, an AI video generation startup, is launching Higgsfield Records, which they call the world's first fully AI-operated record label.
[01:09:34] So this uses AI to produce music and create virtual, what they call, idols. They already have their first AI musician, named Ion, and through proprietary tools, like their Soul ID tool, Speak, and an FX suite, the label engineers and animates the virtual idols to have them sing, dance, and interact with fans on social platforms.
[01:09:57] They're actually now soliciting people to [01:10:00] apply to become the next global AI idol, in some type of contest, basically saying you don't need talent, your face is enough, and if you're chosen, you get a chance to become the first global AI superstar. So presumably they're using kind of your face and likeness if you win.
[01:10:16] Paul Roetzer: And probably if you don't, too. I'm sure there's some terms where you're giving them permission to take your face and put it on whatever they want.
[01:10:23] Mike Kaput: Yeah. I think you might wanna read the terms and conditions carefully before signing up for that. And then, finally, Perplexity is taking a bold step to mend fences with publishers.
[01:10:33] It is going to start paying them directly when their articles power its AI answers. They announced a $42.5 million revenue pool tied to a new subscription product they have coming out called Comet Plus, and that's rolling out this fall. So basically, publishers will get 80% of the revenue, including from premium tiers where Comet Plus is bundled in for free.
[01:10:56] So in practice, it means that if their article [01:11:00] is clicked on or used to answer a user's question, the outlet behind it will see a payout from Perplexity. And Paul, I think you had one item you wanted to add here about OpenAI's agent mode?
[01:11:11] Paul Roetzer: Agent mode, yeah. I threw this in last minute. I've been trying to find a test case for agent mode.
[01:11:15] So if you go into ChatGPT, you can click on agent mode. I don't remember what subscription tier you need to have access to agent mode; I assume it's built into all of them, probably just with limited use in the lower tiers. But I was struggling to find a use for it, and then last week my wife said she's been trying to find a new front door. We have kind of this, like, you know, very old-looking, beautiful front door.
[01:11:36] And she said, I want, like, that, but, you know, we just need a newer one. And I said, that might be, like, an agent mode thing. Let me think about that. So I took a picture of our front door, I popped it into ChatGPT in agent mode, and I said, find me doors like this online. We wanna replace the door with something close to this style.
[01:11:51] I didn't even know how to explain the style, honestly, right? And so nine minutes later, it came back, and it had a chart breakdown of, like, seven different [01:12:00] doors, with links to everything, all the dimensions, a summary of what would work. And so I was like, yeah, you focused way too much on, like, the diamond window.
[01:12:07] I need this to look more like the wood. And so it goes away for, like, six more minutes, it comes back, and it nailed it. And it turns out "old world" is how you describe the door. So there was the name, and it found, like, the specific sites, built a chart. I sent it to my wife, and she goes, oh, this is exactly what I was looking for.
[01:12:24] So again, it's just like, we have this technology, and it just sits there. Going back to, like, the Nano Banana thing: how do I even use this? Like, I wanna test this thing I was talking about, but I don't know what to do. And I could have just gone to ChatGPT and said, hey, gimme some ideas of how to use agent mode.
[01:12:38] I didn't do that at that point; I didn't have enough time. But again, incredible technology. Oh, one other one, Mike, sorry, we'll extend this for a second. Another personal use. So we were in our backyard, and we have a maple tree, and my wife and I are trying to figure out, why are all these bees all around this maple tree?
[01:12:55] Like, what is going on? And as we're standing there, I realize the leaves are, like, wet. It hasn't rained in three days. I'm [01:13:00] like, what in the heck is happening with this tree? How is it wet? There's a pool of something at the bottom of the tree. So that night, we kind of just give up. This was Friday night.
[01:13:08] Then, I don't know, I got to this point where I was like, I have to solve this. So I pull out Gemini, turn on the video capability, like the Project Astra capability, and I was like, I'm completely stumped. I don't know where the dampness is coming from on the leaves. Here's the tree. And I'm just showing it.
[01:13:21] And I'm talking through it, and it's like, oh, this is interesting, have you thought about this? Have you thought about this? And then I said, hey, there's one of those spotted lanternflies. It's, like, this invasive species that's in northeast Ohio right now. And it said, oh, there's your answer. They're known to feed on tree sap.
[01:13:34] They'll actually extract it, and it'll drip down the tree. And it gives me this whole analysis of what the dampness is. I'm like, you gotta be kidding me. So I start looking in the tree, and the thing is infested with these lanternflies, and that was causing it. And I was like, man, AI for the win twice this weekend in my personal life.
[01:13:50] Just like crazy.
[01:13:52] Mike Kaput: I mean, those personal use cases are sometimes what I counsel people to focus on immediately, because, like, you know, there are a hundred things you wanna [01:14:00] do in your personal life where you don't have to think about the workflow to try AI. Yeah, for sure.
[01:14:05] Paul Roetzer: And then you start connecting the dots of how to do it in your business life too.
[01:14:08] So, yes, problem solved. I got my door solved, and I got dozens of lanternflies invading my tree. All right, good stuff. Thanks, Mike. We'll be back next week, regular day. We'll be back on Tuesday the ninth with the next episode. So we appreciate everybody hanging with us on the Wednesday drop we hit this week.
[01:14:25] So we'll talk to you again soon. Thanks for listening to The Artificial Intelligence Show. Visit SmarterX.ai to continue on your AI learning journey, and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, and earned professional certificates from our AI Academy, and engaged in the Marketing AI Institute Slack community.
[01:14:52] Until next time, stay curious and explore AI.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.