Would you trust a synthetic version of yourself to teach your audience? One CEO just did, and it’s raising questions about authenticity, attention, and the future of thought leadership.
In this week’s episode, Paul and Mike examine OpenAI’s new tentative deal with Microsoft, the deeper implications of its “People-First AI Fund,” and why Microsoft, Oracle, and OpenAI might be creating value out of thin air. They also analyze Replit’s Agent 3, a next-gen AI dev tool claiming 10X more autonomy, and why it may hint at what’s coming across industries. Plus, stay tuned for commentary on AI’s impact on jobs, the economy, and a controversial AI podcast startup.
Listen or watch below, and scroll down for show notes and the transcript.
00:00:00 — Intro
00:04:51 — OpenAI and Microsoft Partnership
00:18:31 — Replit’s Agent 3 and What It Means for the Future of Agents
00:30:15 — AI Avatars for Executives
00:42:36 — OpenAI and Oracle Compute Deal
00:47:00 — Anthropic’s $1.5B Authors Settlement Under Scrutiny
00:51:17 — Internal Tensions at Meta
00:54:52 — AI and Jobs: Labor Market Signals
01:02:11 — Will AI Crash the Economy?
01:07:55 — AI Podcast Startup
01:14:13 — FTC and AI Companions
01:17:25 — Retail AI Case Studies
01:20:09 — AI Product and Funding Updates
OpenAI and Microsoft Partnership
OpenAI is further along its path to becoming a for-profit company. This week, OpenAI and Microsoft signed a deal that clears one of the biggest hurdles to the transition: Microsoft’s approval.
After a summer of fraught negotiations, the two AI giants have agreed to extend their partnership. According to a statement from OpenAI:
“OpenAI and Microsoft have signed a non-binding memorandum of understanding (MOU) for the next phase of our partnership. We are actively working to finalize contractual terms in a definitive agreement. Together, we remain focused on delivering the best AI tools for everyone, grounded in our shared commitment to safety.”
Microsoft’s tentative blessing seems to give OpenAI the green light it needs to present its for-profit restructuring plan to state regulators.
That plan would shift OpenAI from a nonprofit-controlled subsidiary into a for-profit entity, one where Microsoft and the nonprofit would each hold roughly 30%, and the rest would go to employees and investors.
But the overall plan to transition to a for-profit company is facing fierce pushback. California and Delaware attorneys general are investigating whether this shift violates nonprofit law. Critics, including Elon Musk and Meta, claim OpenAI is abandoning its mission and enriching insiders. Some have even cited tragic incidents involving ChatGPT to question the company’s priorities.
The pressure has been so intense that OpenAI’s execs at one point reportedly discussed leaving California altogether.
Replit’s Agent 3 and What It Means for the Future of Agents
AI coding platform Replit just raised $250 million and tripled its valuation to $3 billion.
But the real headline? It launched something called Agent 3, a next-gen AI developer that can build apps almost entirely on its own.
Agent 3 doesn’t just suggest code. It tests it, fixes bugs, and even clicks through your app like a real user to make sure everything works, all without needing constant human input.
Said CEO Amjad Masad in a post on X: “Agent 3 is 10X more autonomous—it keeps going where others get stuck. The 'Full Self-Driving' moment of software.”
He also says it achieves 10X longer fully autonomous runs than its predecessor, Agent 2, running for over three hours straight.
Behind the scenes, Replit’s growth has been staggering: annual revenue jumped from $2.8 million to $150 million in under a year, now with 40 million users and enterprise clients like Zillow and Duolingo.
AI Avatars for Executives
Should your company’s next thought leader be…an AI avatar?
That’s a question we’ve been thinking about after seeing how Databox CEO Peter Caputa debuted a new video course taught entirely by his AI double.
The avatar looks and sounds like him, but it’s powered under the hood by popular AI video creation tool HeyGen. Caputa wrote the script, trained the model, and after some trial and error with lighting and camera angles, the AI now delivers hours of content on his behalf.
He says it saves time while preserving the unique value he has to share, because it's been trained entirely on his own content and expertise.
And plenty of people seem to agree. HeyGen just raised $60 million at a $500 million valuation. More than 40,000 businesses are using it.
This episode is brought to you by AI Academy by SmarterX.
AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. You can get $100 off either an individual purchase or a membership by using code POD100 when you go to academy.smarterx.ai.
This week’s episode is also brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types.
For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: You can either be the hero or the villain here. This technology is going to disrupt society; that is inevitable. You have to get out ahead of this. Your tech is going to disrupt people's jobs. It's going to do a lot of negative things, but it's also gonna do a whole bunch of, like, incredible things. And you have to be leading in that way.
[00:00:17] You have to be proactive in that way to be viewed as someone doing good for humanity, while your technology might be doing the opposite sometimes. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer.
[00:00:35] I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.
[00:00:55] Join us as we accelerate AI literacy for all. [00:01:00]
[00:01:02] Welcome to episode 167 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording September 15th, 9:15 AM. We are, Mike, only like four weeks out. I was thinking about this over the weekend, like, oh my gosh, we gotta start building some stuff for MAICON.
[00:01:24] So we are coming up fast on our big conference, so I'll actually, I guess, lead with MAICON, since I'm talking about it already. If you don't know what I'm talking about, if you're new to the podcast and haven't heard us mention this before, our flagship in-person event MAICON is happening October 14th to the 16th in Cleveland.
[00:01:41] We started this conference in 2019, and this year is trending in an incredible direction. We are looking at probably 1,500-plus attendees. The majority of the agenda is live at MAICON.ai. That's M-A-I-C-O-N dot ai. We actually [00:02:00] made some really exciting progress on some new keynotes last week, so I can't announce anything yet, but stay tuned.
[00:02:07] Hopefully by this time next week we might actually be able to announce some of the, maybe a final agenda. There's a couple moving pieces still on that main stage, but we're getting real close. So go check it out. Again, it's MAICON.ai. Dozens of sessions, you know, I think 40-plus speakers.
[00:02:27] Just an incredible lineup. I can't wait. I just need to create my opening keynote. So I'm doing the Move 37 moment for knowledge workers, basically. It's probably the most excited I've ever been to create a keynote. Every year I love doing my MAICON keynote; it's always an original talk.
[00:02:44] This is probably the one I'm most excited to create. It's something I've been working on for years, and I just sort of forced myself to say, okay, I'm creating this talk for MAICON this year. So it is completely original, because it doesn't exist yet, and I have four [00:03:00] weeks to create it. Mike has a session.
[00:03:03] We both got workshops; it's just gonna be awesome. So we would love to see our community there in person. Again, it's MAICON.ai. And then this episode is also brought to us by AI Academy by SmarterX. We've been talking a lot about that lately. It is a big focus of my time, of Mike's time; we've built out our staff to build out AI Academy.
[00:03:23] So that's where a lot of our energy and resources are going. We've talked about some of the new series. So today I was just gonna mention the Scaling AI course series. This is the third one that I created for the launch. It's a seven-course series with a professional certificate. It's built for, I don't know, I'd have to say like director level and above, but kind of existing leaders or emerging leaders, people who want to understand and take a leadership role in the adoption and scaling of AI within their organization.
[00:03:49] So it's got: The AI-Forward Organization is course one, AI Academies is course two, course three is The AI Council, course four is Generative AI Policies, five is Responsible AI [00:04:00] Principles, six is AI Impact Assessments, and seven is The AI Roadmap. So it really takes you on a journey of our five-step framework for scaling AI within an organization of any size.
[00:04:09] You can learn more about that at academy.smarterx.ai, and for both MAICON and the AI Academy Mastery membership, you can use POD100 as your promo code. That'll save you a hundred dollars off of either of those. So again, POD100 for Academy and MAICON. Alright, let's get into it. We had, well, I guess we ended up with a new model-ish last week.
[00:04:37] New Agent 3 from Replit, which we're gonna talk about, and some progress on the OpenAI-Microsoft partnership that might set the stage for some pretty wild stuff this fall. But, all right, Mike, let's get into it. The OpenAI-Microsoft thing to start.
[00:04:51] Mike Kaput: All right, Paul. So OpenAI is a bit further along on its path to becoming a for-profit company, because this week OpenAI [00:05:00] and Microsoft
[00:05:01] struck a deal that clears one of the bigger hurdles to this transition, which is Microsoft's approval. So after a summer of fraught negotiations, the two AI giants have agreed to extend their partnership. According to a statement from OpenAI, quote, OpenAI and Microsoft have signed a non-binding memorandum of understanding
[00:05:22] for the next phase of our partnership. We are actively working to finalize contractual terms in a definitive agreement. Together, we remain focused on delivering the best AI tools for everyone, grounded in our shared commitment to safety, end quote. So if this goes through, Microsoft's tentative blessing seems to give OpenAI the green light it needs to start presenting its for-profit restructuring plan to state regulators.
[00:05:47] Now, that plan would shift OpenAI from a nonprofit-controlled subsidiary into a for-profit entity, one where Microsoft and the nonprofit itself would each hold roughly 30%. The rest would go to employees and [00:06:00] investors. Now, this overall plan to transition to a for-profit company is facing some fierce pushback.
[00:06:07] California and Delaware attorneys general are investigating whether the shift violates nonprofit law. Critics, including Elon Musk and Meta, claim OpenAI is abandoning its mission and enriching insiders. And some have even started citing these tragic incidents we're hearing more about, involving ChatGPT and relationships with people, to question the company's priorities.
[00:06:30] Now, the pressure has been so intense that the reporting showed that at one point OpenAI execs even reportedly discussed leaving California altogether. So Paul, there's a lot going on here and a lot of implications, even of a short memorandum of understanding. What does this actually mean for OpenAI going forward?
[00:06:53] Paul Roetzer: At a really high level, if you're new to all of this, you know, again, if you've been a long-time listener to the podcast, this is a recurring [00:07:00] topic; every few episodes, it seems, there's something else related to this, the OpenAI-Microsoft relationship and OpenAI's efforts to evolve the company structure.
[00:07:08] But again, if you're new to all of this, the basic premise here is Microsoft controls a lot of interest in OpenAI; they've invested over $13 billion into them. There is a contract that was created originally where Microsoft got access to the most advanced OpenAI technology, but if OpenAI determined they had reached AGI, then Microsoft would no longer get access to it.
[00:07:33] It was a big sticking point, but Microsoft has a bunch of leverage as well. So the challenge with this relationship is OpenAI needs to raise insane amounts of money, amounts of money unparalleled in human history. They think trillions of dollars, never before done. But the only way they're going to do that is to go public.
[00:07:52] They're gonna eventually not be able to do this as a private company, especially under the control of the nonprofit as it was previously [00:08:00] structured. So all of this is about moving OpenAI to a place where they can raise the amount of money needed to pursue their vision for omnipresent intelligence throughout society, basically.
[00:08:11] So that's, like, the synopsis of what's going on. There's all kinds of legal give and take that needs to happen behind the scenes. This is not easy. And then, like you called out, it's not even a given that they're gonna get approval from California and Delaware, or that some lawsuit, like from Elon Musk, isn't gonna muddy all this up and just make this go on for a while.
[00:08:34] So I think everyone kind of assumes this'll just work out, that, like, this'll all get solved. Somehow the lawyers will do what they do, and we will find a way to, like, move on, and OpenAI will eventually IPO and, you know, become one of the most valuable companies in the world. It is not a given, though. There's all this stuff happening behind the scenes.
[00:08:53] So the joint statement, Mike, that you read was, I mean, literally, we'll put the link in the show notes, that's it. It was a paragraph, that's the whole thing. Yeah. That is the [00:09:00] post. It's, like, the, I don't know, 50 words that Mike read. Now, it does link to a post from Bret Taylor, who we talked about, I think, last week on the podcast, if I'm not mistaken.
[00:09:10] Who is the chairman of OpenAI's board, CEO of Sierra (we were talking about his AI agent startup, Sierra), and former board chair of Twitter. So Bret Taylor put out a little bit more extensive post about OpenAI's nonprofit and public benefit corporation vision. So I'll read a couple of excerpts from his post, and we'll again include this in the show notes.
[00:09:33] OpenAI's planned evolution will see the existing OpenAI nonprofit both control a public benefit corporation, or PBC, which is also how Anthropic is structured, by the way, and share directly in its success. OpenAI started as a nonprofit, remains one today, and will continue to be one, with the nonprofit holding the authority that guides our future.
[00:09:54] This new equity stake would exceed $100 billion, making it one of the most [00:10:00] well-resourced philanthropic organizations in the world. This recapitalization would also enable us to raise the capital required to accomplish our mission and ensure that, as the OpenAI PBC, the public benefit corporation, grows,
[00:10:14] so will the nonprofit's resources, allowing us to bring it to historic levels of community impact. We'll come back to that in a minute; that's an important part of this. The structure reaffirms that our core mission remains ensuring AGI benefits all of humanity. We continue to work with the California and Delaware Attorneys General as an important part of strengthening our approach, and we remain committed to learning and acting with urgency to ensure our tools are helpful and safe for everyone,
[00:10:41] while advancing safety as an industry-wide priority. So that sentence there is very intentional, based on what Mike alluded to of the growing concerns, including, I think it's the FTC, we'll talk about it in a minute, that's exploring the use of ChatGPT as a companion and the impact it's had on some recent cases of suicide.
[00:10:58] So [00:11:00] there's a lot of very, very strategic language in this post from Bret Taylor. And then the final excerpt I'll read, which leads into the other point I wanna make here. So Bret continued to write: as part of this next phase, the OpenAI nonprofit has launched a call for applications for the first wave of a $50 million grant initiative to support nonprofit and community organizations in three areas.
[00:11:24] And now we're about to get our preview of what this hundred-billion-dollar stake in OpenAI is going to be for. The three areas Bret calls out: AI literacy and public understanding, community innovation, and economic opportunity. He writes, this is just the beginning. Our recapitalization would unlock the ability to do much more.
[00:11:43] Now, the $50 million grant initiative they allude to links to another post from September 8th on the OpenAI website. I don't remember if we talked about this, Mike. This is the first time I recall seeing the name of this fund. But this fund that's gonna provide the $50 million in grants [00:12:00] is the People-First AI Fund.
[00:12:01] That is the name of it, the People-First AI Fund. So what is the People-First AI Fund going to do? Which, by the way, this is part of the reason, on our messaging, we are very intentional about not always saying AI-first, because it implies people aren't first. So it's interesting to me that OpenAI is sort of, like, leaning in this people-first direction with their messaging.
[00:12:21] Okay, so this is now an excerpt from this post, which we also will link to; literally, the URL is People First AI Fund. We believe AI should help solve humanity's hardest problems, and that we should listen to and learn from organizations already leading that work on the front lines.
[00:12:39] Today, we are excited to share that applications for the first wave of grants are open. Grants will be unrestricted, reflecting our commitment to support the expertise of nonprofit and community-based organizations. The application window will close on October 8th, 2025. So if this is you, like, if what we're reading here fits you, you've got three weeks to get your application in for the [00:13:00] first slug of $50 million, but there's a hundred billion more coming, so don't worry about it.
[00:13:03] Grants will be distributed by year's end. So what they're basically doing here is they're racing to distribute money to show their positive impact on society and people. So as they're going to California and Delaware begging for the permission to do what they need to do, they're already taking action to say, look, if we have access to this money, this is the kind of thing our nonprofit will be able to do.
[00:13:25] We've already done it. Now, mind you, there's probably, like, actual human good intended behind this, but this is all very intentionally being accelerated to show a positive impact on society, in my opinion. So the People-First AI Fund will support organizations directly working in the three areas that we called out from Bret Taylor's post. AI literacy and public understanding:
[00:13:48] Now, this is from their post: We seek to support organizations that help communities build the knowledge, skills, and confidence to navigate the age of artificial intelligence. This includes education programs, [00:14:00] media initiatives, and opportunities for people to engage with and better understand the technology.
[00:14:05] They're specifically interested in equipping people with practical skills. That may involve training local leaders such as educators, faith leaders, youth mentors, and artists. On the community innovation side, they say the priority is to back efforts to ensure AI strengthens civic life and helps people stay healthy, connected, and thriving.
[00:14:25] And then economic opportunity. This could include programs that prepare people, especially young people, for the jobs of the future, tools that support caregivers and local businesses, and initiatives that help workers build economic security. They do say the fund is an early step in a larger vision to ensure the intelligence age is shaped by listening, learning, and building with, not for, communities.
[00:14:46] We look forward to working with our grant partners and learning from them. And then they do call out again at the end that the People-First AI Fund is intended for US-based nonprofits with valid 501(c)(3) status. Organizations may only apply [00:15:00] once to be considered for the fund. So, a lot going on here. Like I said, the biggest issue is they have to change structure to IPO to raise the amount of money they envision needing.
[00:15:15] To do that, there's a whole lot of stuff that has to happen behind the scenes, including a lot of politics. And they need to be seen, like, I've used this line, I don't know if I've said this publicly, but I've used this line with some technology companies.
[00:15:33] You can either be the hero or the villain here. Like, this technology is going to disrupt society; that is inevitable. It will be viewed as a negative by large portions of society as they are impacted by it. So as a technology company, and again, part of this is putting my communications and marketing hat on,
[00:15:51] you have to get out ahead of this. Like, your tech is going to disrupt people's jobs. It's going to do a lot of negative things in [00:16:00] society, but it's also gonna do a whole bunch of, like, incredible things in society. And you have to be leading in that way. You have to be proactive in that way to be viewed as someone doing good for humanity.
[00:16:11] While your technology might be doing the opposite sometimes. So this is where lobbying efforts are gonna be massive. It's why they're building this kind of program where you're giving tons of money away. The economic opportunity piece, I can almost guarantee you, has some element of universal basic income envisioned into it, where they're gonna pay people to not have jobs.
[00:16:29] We're leading off the episode with this because this is a very, very far-reaching topic. If you understand what's going on with OpenAI, you will have a greater grasp of what's gonna be happening in society for the next, like, 10 years, basically.
[00:16:44] Mike Kaput: Not to mention, and this might be going a bit out on a limb here, but it is not simply, in my opinion, a story of a massive high-growth company trying to get ahead of regulation or government overreach.
[00:16:58] At some point, the [00:17:00] scale of this gets so large that this is intimately entwined with the government. Yeah. We've talked about that a bit with Situational Awareness from Leopold Aschenbrenner. I'm not saying that OpenAI gets nationalized, but what you're talking about when it comes to universal basic income, that inherently becomes, whether it's OpenAI doing it or a consortium of companies, very intimately linked with society and civic life, not just markets.
[00:17:28] Paul Roetzer: Yes. And we've talked a lot about jobs and the economy recently. We've got another topic today related to this. All of this is intertwined. Yes, you're a hundred percent right, Mike. And the executive order to prioritize and fund AI literacy through the government: you're going to see, you know, an avalanche of state and federal initiatives around AI literacy and re-skilling professionals, because again, they all now know what's happening.
[00:17:57] Jobs numbers are starting to [00:18:00] indicate it, and that's what we've been waiting for. Well, not we, it's what they have been waiting for: actual data to prove this is all happening. So yeah, I mean, I think we're kind of entering this point of no return, where all the people in power now realize the disruptive nature of AI and how quickly it could happen.
[00:18:17] And so there's gonna be this massive effort, both private and government, to try and solve for this before it becomes, like, a runaway train, basically.
[00:18:31] Mike Kaput: Alright, our second big topic this week: AI coding platform Replit just raised $250 million and tripled its valuation to $3 billion.
[00:18:42] But the real headline is that it also launched something called Agent 3. This is a next-gen AI developer agent that can build apps almost entirely on its own. So Agent 3 doesn't just suggest code. It actually tests it, fixes bugs, and clicks through your app like a [00:19:00] real user to make sure everything works, all without needing constant human input.
[00:19:05] So CEO Amjad Masad said in a post on X, quote, Agent 3 is 10X more autonomous. It keeps going where others get stuck. The full self-driving moment of software. He also says it has 10X longer fully autonomous runs than its predecessor, Agent 2. So Agent 3 can run fully autonomously for over three hours straight, which is about 10X more than Agent 2 could do.
[00:19:34] Now, also behind the scenes, Replit's growth has been honestly insane. Annual revenue, this is not a typo, jumped from $2.8 million to $150 million in under a year. There are now 40 million users, and they have enterprise clients like Zillow and Duolingo. So Paul, first maybe talk to me about Agent 3.
[00:19:56] About Replit's claims here about it being 10X more [00:20:00] autonomous. Like, is that legit? And what does that actually mean? Has there been some breakthrough in agents here?
[00:20:07] Paul Roetzer: So this is a really important concept to understand, and also to delineate between some of the ways we've talked about previous measurements of autonomy and how they're kind of positioning it.
[00:20:19] So at a really high level, the main takeaway here is: the way these labs are thinking about the future of AI development, and the impact it'll have on society and the economy, is how long can these things work reliably without human intervention. And so, if you remember, again, if you've been listening to the podcast for a while, episode 152, it was June 10th of this year, and then earlier in episode 140, which was in March of 2025.
[00:20:49] We talked about this seven-month rule, which is from METR, Model Evaluation and Threat Research. It's an organization whose [00:21:00] CEO is Beth Barnes, and what they have is the seven-month rule, which is: they look at the model's ability, like a 50% chance of successfully completing a task, measured by how long it would take a human to do it.
[00:21:12] And what they're saying is every seven months it's doubling. So right now, what the latest METR research found, in March of this year, was that AI models had a 50% chance of successfully completing a task that would take an expert human one hour. Now, this was specific to coding, and that's a really important criterion.
[00:21:33] Anything we're talking about right now, whether it's related to Replit Agent 3 or this METR rule, is all related to, like, computer programming. In Replit's case, it's, like, building apps, you know, writing software, basically. Not doing legal work or doing marketing work or things like that. So let's separate out that we're talking specifically about software and coding at this point.
[00:21:52] So in the METR research, it was an hour in March. Seven months prior to that, it was 30 minutes. [00:22:00] Seven months prior to that, it was 15 minutes. So their finding, and it's kind of like a potential scaling law, is that every seven months it's doubling. So, you know, the theory then would be by August you should be at basically two hours, 50% chance.
[00:22:15] Two hours. So now, interestingly, when GPT-5 came out, METR was given early access to it a couple weeks in advance, and they did find that it was basically continuing to double. In their specific testing, they found that GPT-5 was not capable of executing any of their three main threat models.
[00:22:35] They used their time horizon evaluation, and this 50% chance of success had jumped to two hours and 17 minutes. So it went from one hour to two hours and 17 minutes. So the scaling law was now continuing. We had, like, four different milestones within it, but they did this going back six years, and so it has held. That is different than what we're seeing with Agent 3.
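To make the doubling arithmetic concrete, here's a minimal Python sketch, assuming simple exponential doubling from the data points cited in this conversation (15, 30, and 60 minutes, then GPT-5's measured 2 hours and 17 minutes). It's an illustration of the trend, not METR's actual methodology:

from datetime import date

DOUBLING_MONTHS = 7  # METR's observed doubling period for coding-task horizons

def projected_horizon_minutes(known_minutes, known, target):
    # Project the 50%-success task horizon forward (or backward) from a known data point.
    months = (target.year - known.year) * 12 + (target.month - known.month)
    return known_minutes * 2 ** (months / DOUBLING_MONTHS)

march_2025 = date(2025, 3, 1)  # ~60 minutes of expert-human work, per the episode
print(projected_horizon_minutes(60, march_2025, date(2024, 8, 1)))   # ~30.0, matches "seven months prior"
print(projected_horizon_minutes(60, march_2025, date(2025, 10, 1)))  # ~120.0, i.e. the two-hour mark
# GPT-5's measured 137 minutes (2 hours 17) in pre-release testing runs slightly ahead of this curve.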
[00:22:56] So again, METR is saying a 50% [00:23:00] chance of it completing something that would take a human one hour. What Replit isn't telling us is how long it would take a human to do what they're saying it's doing. All Replit is giving us is runtime, meaning the agent does this thing. I didn't see anywhere that they said reliability, like a 50% chance.
[00:23:20] It was just: it can do something for 200 minutes straight without kind of losing its thought process, losing its planning structure, things like that. Now, is that 200 minutes of agent work equivalent to 20 hours of human work? Like, what would it have taken a human to do it? That's what they did not share.
[00:23:36] And it was interesting, because Amjad actually tweeted about the METR research. He retweeted a post from, like, March about the METR research and said, yeah, we're actually seeing something different. So his tweet was: the METR paper that says the length of tasks AI can do is doubling every seven months radically undersells the scaling that we're seeing at Replit.
[00:23:59] It might [00:24:00] be true if you're measuring one long trajectory for a single model class, but this is where an agent research lab's alpha is at. We build multi-agent architectures and use different models from various providers to tap into their latent abilities across various tasks. So I'm gonna zoom out and explain what that means.
[00:24:17] What he's saying is, METR's research, which is the best we've seen publicly so far, projects that every seven months it doubles. So if you take a human and it takes them one hour to do something, in seven months the AI will be able to do two hours of that human work. What he's saying is, we actually rapidly scale beyond this by using multiple agents.
[00:24:40] So we're gonna have an agent that verifies things, we're gonna have an agent that creates the plan, we're gonna have... So they're using multiple model providers, 'cause Replit doesn't build their own models. They're basically using, let's say, imagine they're using, like, an Anthropic model for something, an OpenAI model for something.
[00:24:52] Maybe they're using Gemini for something. I don't know what the architecture is, but they're able to use different agents to do all of these things [00:25:00] simultaneously. So their Agent 1, in September 2024, did two minutes of runtime. Agent 2, in February 2025, was 20 minutes of runtime.
[00:25:13] Agent 3, in September 2025, is 200 minutes. So they're kind of following more, what is this, every six months it's 10X-ing? 10X. Am I doing my math right, Mike? Roughly, yeah. Yeah. So they're on a whole other level of scale of runtime. Now, the missing piece of this equation is what that 200 minutes equals in human time.
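As a rough check on the math Paul and Mike are doing here, a minimal sketch using the release dates and runtimes cited above. The comparison with METR's trend is illustrative only, since METR measures human-equivalent task length at 50% reliability while Replit reports raw agent runtime:

# Replit's reported fully autonomous runtimes, per the episode.
releases = [
    ("Agent 1", (2024, 9), 2),    # runtime in minutes
    ("Agent 2", (2025, 2), 20),
    ("Agent 3", (2025, 9), 200),
]

for (name_a, (ya, ma), mins_a), (name_b, (yb, mb), mins_b) in zip(releases, releases[1:]):
    months = (yb - ya) * 12 + (mb - ma)
    print(f"{name_a} -> {name_b}: {mins_b / mins_a:.0f}x runtime in {months} months")
# Agent 1 -> Agent 2: 10x runtime in 5 months
# Agent 2 -> Agent 3: 10x runtime in 7 months
# versus METR's roughly 2x every 7 months, so "10X-ing every ~6 months" checks out.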
[00:25:35] All of that extracted, if you go to their website, it is 10X autonomy. Like, they are all in on: we are automating human work. In this specific instance, automating computer programming is what they're actually doing. The key for the economy is when do we see these kinds of scaling laws in healthcare, in finance, in marketing [00:26:00] and sales and customer success and operations.
[00:26:02] We don't have that research right now. There aren't evals that are looking at that. That is the thing, and Mike and I have had, I don't think I've publicly said this before, Mike and I have had these conversations internally about the need for those kinds of evals at an industry level or at a job-specific level.
[00:26:20] Because everything we're able to share with you today is coming from people who are doing this for AI research and computer programming. It's when you translate it over to your job and you say, wow, Mike spends two hours a week on this task.
[00:26:34] Paul Roetzer: This AI agent is able to do that in two minutes with 90% accuracy.
[00:26:40] Things change when those are the kinds of statements we can make. And so that's the significance, and why, again, if you're newer to all this, like, Replit is not a household brand in enterprises. It's probably not on the OpenAI level, the Anthropic level, where you've probably heard of them.
[00:26:57] They matter, though, and, like, [00:27:00] what they're doing matters. And Amjad is a very intelligent CEO, and they're doing incredible research. And this is the kind of stuff you will see. Google's working on the same kind of thing, OpenAI too. Everybody's working on this. They just kind of seem to be the one that's, like, publicly leading the way talking about it.
[00:27:16] And they've found ways to deal with the inefficiencies of computer-use agents from Anthropic and OpenAI and Google that aren't working very well. They claim they've found ways to fix that. So a very, very interesting research direction, and potentially important breakthroughs when we look back on this a year from now.
[00:27:34] Mike Kaput: And really important to note: I was digging into where all that growth came from, because $2.8 million to $150 million in less than a year is crazy. It's largely driven by agents. So that can tell you, A, I think they know what they're talking about, but B, there's a vested interest here, right, in promoting what agents can do.
[00:27:52] Absolutely. Because that is driving their business. I think at one point their business was really on the rocks before all this, so it's [00:28:00] amazing to see. I have nothing against Replit, but it's just also important to note, like, before we go tweeting that chart where they show 10X autonomy, keep in mind that everyone has vested interests in certain things, and theirs is agents being the future.
[00:28:14] Paul Roetzer: Yeah. And it doesn't mean it's that for your job, right? Like, right. And again, the whole problem with autonomy, when it's talked about with people, and even agents, you know, more specifically, is they assume it's transferable. Mm. That, like, oh man, if they're 10X-ing autonomy, that means in my job as, you know, an HR professional or a lawyer or a CEO, I can 10X my performance, my productivity, if I go get Replit.
[00:28:41] No, that is not what we're saying at all. But it's a prelude. The breakthroughs happen where the greatest value will be created, right, which right now is in AI research and engineering and programming. And that's why all the labs are starting there. But once they solve how to do it there in [00:29:00] reliable ways, then it trickles out into all the other industries.
[00:29:04] Mike Kaput: And I'm super bullish on agents long-term, like we've discussed. I would just say, if you are in a non-coding role, go try out OpenAI's agent mode on three very different tasks, and then come back, and you'll have a much more sober look at what's possible and what's not. And it's amazing.
[00:29:21] I like it. But you'll see the limitations pretty quick, and
[00:29:24] Paul Roetzer: Deep research is another great example. Like, it's incredibly impressive, but it still just has its flaws. And that's a great way to experiment with an agent: go build a deep research project in Gemini or in ChatGPT, and you'll see an agent at work. You'll see it do its planning, you'll see it go through a chain of thought. But it's not doing those, like, self-correcting and output-verifying steps. That's all gonna come.
[00:29:45] And that's the example here. That's basically what they've solved in programming: it can do the planning, it creates the, you know, the plan to follow, it can go through a chain of thought. But now they have agents within this architecture that then look at the work and find flaws and then self-correct the work.
[00:29:59] [00:30:00] And then they, like, keep going, and when they hit a blocker, they fix it themselves. You don't have that in, like, deep research right now in Gemini and ChatGPT, but it's coming. Like, all of this is a prelude to what will happen to the general tools the rest of us use.
[00:30:15] Mike Kaput: Alright. Our third big topic this week is answering the following question: should your company's next thought leader be an AI avatar?
[00:30:24] We've been kicking around that question internally because someone in our network who we know well, Databox CEO Peter Caputa, has debuted a new video course taught entirely by his AI double. So he posted about this on LinkedIn pretty extensively, both his experiments with AI avatars and now the fact that one is going to be teaching a course for him.
[00:30:45] And this avatar looks and sounds like him. It's powered under the hood by the popular AI video and avatar tool HeyGen. Caputa wrote the script, and it's trained on, you know, not only his visuals, but all of his knowledge and [00:31:00] expertise that he's teaching in the course. So he said, after some trial and error with lighting and camera angles, this AI avatar now delivers hours of content on his behalf.
[00:31:10] And he says it saves time while preserving that unique value he has to share, because it's been trained on his content, his expertise; it's not just coming up with it on its own. Now, what's interesting is plenty of people seem to agree that using HeyGen for this kind of thing is useful, because they just raised $60 million at a $500 million valuation, and they've got more than 40,000 businesses using the technology.
[00:31:34] So Paul, like, the question here is: just because you can do this, does that mean you should?
[00:31:40] Paul Roetzer: This is an interesting one. So for context, Peter and I go back a really long time. It's actually hard to imagine, but it's, like, 18 years we've known each other. Oh, wow. So Pete was the architect of the HubSpot partner program.
[00:31:53] For anyone who doesn't know, my agency, which I sold in 2021, was HubSpot's first partner back in 2007. So we [00:32:00] were the origin of their HubSpot partner ecosystem. Pete was the guy internally within HubSpot who pushed heavily to build around outside partners. He believed that agencies could, you know, be a value-added reseller network.
[00:32:13] There were other people within HubSpot who believed this also. This is, I mean, this is like year two or three of HubSpot. This is the very, very early days. And so I would sit in meetings and we would talk about these things, and, like, they would look at what we were doing at my agency and think, wow, could we scale that to, like, thousands of agency partners that could resell and then add value-added services to the software?
[00:32:34] So Pete was the guy who had an agency, then he came to HubSpot, and so he saw the potential for this. So in the very early days of HubSpot, like 2007, 2008, Pete and I worked very closely on what we were doing at my agency and how that could scale. And then my first book, The Marketing Agency Blueprint, in 2011 became kind of a catalyst for the growth of that HubSpot program, in part due to Pete's efforts to, like, push me to share what we were doing. [00:33:00]
[00:33:00] So, backstory, I know Pete very well. So Pete has been the CEO of Databox now since, I don't know, I mean, I feel like it's, like, seven, eight years. I kind of lose track of time these days, but he's been there for a while. So I see this post from Pete on LinkedIn and I'm like, I don't know.
[00:33:18] I don't know. I'm not sure how I feel about this. But to Pete's credit, like, Pete loves to stir things up. Like, he's very comfortable with, like, creating conflict and letting people kind of argue out a point. And so knowing that about Pete, I'm like, ah, right, I'm just kind of, I'm either gonna get drawn into, like, posting a comment on here or whatever.
[00:33:37] So I didn't comment. But then I was thinking about it, and so when I went to write the editorial for the Exec AI newsletter this weekend, the SmarterX Exec AI newsletter, I was like, you know what? I'm gonna address this. Like, I think this is a really important topic, and my premise here is, I don't agree with him.
[00:33:55] Like, I actually feel the opposite. I think that the [00:34:00] human component of the course is maybe the most important part. And so I can't even fathom using an AI avatar in my place to teach a course. And this is as someone who just spent hundreds of hours of my life, as a CEO who doesn't have the time to spend hundreds of hours on this, creating 20 courses and recording them myself.
[00:34:22] Like, no way would I have an avatar involved. I can't even imagine having used an avatar to do it. Right. That doesn't mean Pete's wrong. And so this is where I kind of land on this, and this is why we wanted to address this as, like, a main topic. All of us have to make this choice. The technological limitations of avatars are going away.
[00:34:44] So the nuances of, like, the hands, the, you know, the blinking of the eyes, that just kind of, like, uncanny valley feel, where it's like, we're kind of there. Like, it feels like it might actually be Pete, I'm not sure right now if it's Pete or not. [00:35:00] That's going away. Like, we're going to get to the point where you just don't know, and you can't know, like, unless you have access to the metadata and you know where the thing came from.
[00:35:09] Videos in the very near future are going to just be indiscernible from reality; you could argue in some cases they already are. And actually, part of the reason I decided to do this in the newsletter last week was there was a point last week with the president, yeah, where there was a video in the Oval Office, and it went crazy on X because people thought it was an AI avatar of him.
[00:35:33] And I'm not convinced it wasn't. Like, there were some nuances to it where you're like, yeah, that actually might be. And so now we're in this realm where even in politics, we're not sure. Like, we have to question it, and you have to really analyze it to know whether it was or not. So my whole point was in my newsletter editorial; I'll just read kind of the end, 'cause it kind of makes the point here.
[00:35:56] So for me, personal connection and authenticity are essential in [00:36:00] communicating with my audiences. That doesn't mean I disagree with Caputa's choice and strategy for him and his brand. It is a subjective decision. There isn't necessarily a right or wrong. If you go to the LinkedIn post, you'll see the comments are, like, 50% love it,
[00:36:13] 50% are like, ew, this feels wrong. But the point is, as you alluded to, Mike, like, HeyGen is blowing up. Yeah. 40,000 business customers worldwide. So more brands and business leaders are choosing the AI avatar path. I, well, I think we'll talk about, do we have the podcast topic later, Mike, where, like, AI is being used to generate
[00:36:35] podcasts? Yes. Podcasts.
[00:36:37] Mike Kaput: Yeah.
[00:36:37] Paul Roetzer: So the tech is getting there. We are all going to have to choose how we use AI to create our content, our thought leadership, our expertise. You can't fake me standing on a stage; like, you know, that's me. But you're going to, as a brand or as an individual creator or leader, have the choice to fake the other thing. And I'm not saying [00:37:00] fake in a bad way, like deepfake the thing.
[00:37:02] And I know of brands, education brands, that are choosing to do this, that are scaling up content with AI avatars. I actually think, well, I don't, it wasn't part of Coursera's announcements, but Coursera recently made a bunch of, like, AI-powered announcements about how they're infusing it into their platform.
[00:37:20] And as someone who's the CEO of an AI education company, this is, like, we have to think about this. We have to address this with our own team. Like, will we use AI avatars? The answer is no, in case our team is wondering. But, like, this is going to become part of what you're doing, and then you, as the consumer of that content, have to decide if you're okay with it.
[00:37:41] Yeah. Like, knowing that Pete, like he said, literally, I spent nine months writing these scripts, this is from 25-plus years of, you know, scaling businesses, it's all my experience. That is all true. Like, no one's gonna take that away from Pete, that he spent a bunch of time on the scripts and put 25 years into this content.
[00:37:57] But at the end of the day, it's the AI avatar that's presenting it to me. [00:38:00] Right. And so if you go into the comments on LinkedIn, some are like, yeah, the fact that you couldn't take the extra, like, five hours just makes me not wanna take the five hours to watch it, kind of thing. And so again, you're gonna have people who are just: I want the information.
[00:38:13] I don't care if it's an AI avatar, whatever. Like, just gimme the information. And then you're gonna have people who feel like, no, this doesn't feel real. Like, I don't have that same connection with you. No right or wrong. Like, again, the whole point of this conversation is to bring it up as a conversation topic that people might not be aware they have to solve for.
[00:38:33] Right? Or might not even know that they could create an AI avatar of their CEO and save a bunch of time. So, well, how do you feel about it, Mike? You and I actually haven't had this conversation of, should we use AI? I've just said we aren't, but, like, you know, should we be using AI avatars in our Academy at
[00:38:48] Mike Kaput: all? Based on how much time and energy it takes to record courses,
[00:38:52] I would love to, but I'm, like, really against it personally because of what you hit on. This is not a knock on [00:39:00] Pete. I'm sure his course is amazing. But if you couldn't bother to, like, show up to the studio, it's like, what am I paying for? Now, again, the context might matter. Like, what if it's a quick team onboarding training?
[00:39:11] Okay, maybe, totally. Like, you could sell me on that. That's like, oh, okay, Mike's out for two weeks or something. Yeah. And even that would feel weird to me, though. I could certainly grow to accept it, because I think that is where we're going. I think it gets a little murky just because I'm so focused on the course thing recently and the value we're creating.
[00:39:30] I would feel the same way as if someone sent their AI avatar to a meeting. Yeah. If you're not taking the time to bother to engage with me, even if it's asynchronously via an on-demand course, I don't have time for it. That's just my personal perspective. And I know I feel confident in that perspective, because I'm looking for a reason to want to be able to do this, given how much time it takes to record courses.
[00:39:53] Paul Roetzer: Yeah. Yeah. And I think, I was going back to the human-to-machine scale for writers. Like, we [00:40:00] talked, I don't remember what episode that was on. Yeah. But I shared, like, was that in March or April? This year we did our AI for Writers Summit. Yeah. And I did a keynote on, like, when should you use AI to write?
[00:40:09] And my whole premise was, sometimes it's fine. Like, product descriptions, things like that, like, who cares? People just want the information. Landing pages. But when it's, like, a keynote presentation or an editorial piece, you wanna know that that's coming from the person. And so there is a scale of, like, when it's cool to use it. But again, it's not the same for everybody.
[00:40:34] It's not prescriptive. Like, it is up to you to decide where that comfort level is. And a lot of it comes down to, what does your audience expect? And if your audience expects you to show up and be authentically there, and to have put those extra two hours in to record the thing, you gotta show up and do it.
[00:40:51] Mike Kaput: Yeah.
[00:40:52] Paul Roetzer: If it's something that, like, your point about, like, I don't know, I don't even know what use case we would do internally, but if someone came to me and said, hey, we've got [00:41:00] this onboarding thing, we've got this other thing, we want to, like, you know, teach how you build game plans or use Asana.
[00:41:04] Is it cool if we create an AI avatar of you that tells the story? It's like, I don't know. Like, I would step back and think about, well, do the employees feel like it's actually supposed to be me? Like, is that an instance where I could see it being valuable? But again, people are making the case for this every day. With HeyGen seeing that kind of growth, it's becoming a real thing.
[00:41:24] So, yeah. I shouldn't even talk about this before now, so, I haven't talked to Mike about this. My team's gonna want to kill me. So I want to actually start doing, like, an AI Pulse is what I was thinking. We'll do, like, surveys of our listeners on stuff.
[00:41:43] And it's interesting, 'cause Sunday morning I was sitting on my front patio, like, thinking about how to do this and what it would look like. And I was trying to think, like, what would be, like, the ideal things to ask in these pulses. Basically, it's like real-time research that Mike and I would then turn around and share the next week.
[00:41:57] This might actually be the perfect thing to kick it off [00:42:00] with, Mike: how do you feel about AI avatars? Like, would you, so I'm not saying we're gonna do that survey today, but we may in the next week or two, 'cause I was actually gonna schedule a meeting with Mike and our team this week to talk about this idea.
[00:42:13] We may do kind of these quick one-to-three-question surveys of our audience and then turn around and, like, share the results the next week. Because I would actually be fascinated to know, yep, how do people feel about AI avatars? And, like, would you create one of yourself? Would you allow it to teach a course?
[00:42:29] So maybe that'll be our first AI Pulse survey.
[00:42:32] Mike Kaput: That'd be awesome. Yeah. All right. Let's dive into some rapid fire this week.
[00:42:36] Mike Kaput: So first up, OpenAI just signed what might end up being one of the bigger cloud computing deals in history. They have a $300 billion commitment with Oracle to buy computing power over five years, starting in 2027.
[00:42:51] Now, to put that number in perspective, OpenAI currently brings in about $10 billion a year. So it's committing to spending six times that every year on compute [00:43:00] alone. The Oracle deal would require 4.5 gigawatts of power, which is roughly equal to what two Hoover Dams generate. Oracle's stock soared by as much as 43% on the news, which briefly pushed Chairman Larry Ellison into the top spot as the world's richest person.
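For reference, a minimal sketch of the back-of-the-envelope math behind those comparisons, using the figures Mike cites; the ~2 GW figure for one Hoover Dam's generating capacity is an assumption made for the comparison:

commitment_usd = 300e9      # Oracle deal, total over five years
years = 5
annual_revenue_usd = 10e9   # OpenAI's current yearly revenue, per the episode
print(commitment_usd / years / annual_revenue_usd)  # 6.0 -> six times revenue, every year

hoover_dam_gw = 2.0         # assumed capacity of one Hoover Dam
print(4.5 / hoover_dam_gw)  # 2.25 -> "roughly two Hoover Dams" of power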
[00:43:19] But this is also a massive gamble. OpenAI is losing money. They don't expect to turn a profit until at least 2029, and Oracle is likely to have to take on significant debt to build out this infrastructure. So Paul, these numbers are pretty staggering. Like, how big a win is this for OpenAI and for Oracle?
[00:43:39] Paul Roetzer: I mean, stock-wise, it's great for Oracle. So I have to laugh about this one. So, ironically, this morning I'm driving my kids to school, as I do every day, and somehow the national debt comes up. My kids are seventh and eighth grade, so they're 12 and 13. I have no idea how we got on the national debt topic.
[00:43:58] It's only a 10-minute ride, so this [00:44:00] is, like, a pretty new topic I have to cover and explain to them. So my son asked the question about, you know, how does that work? And then my daughter, simultaneously, is, like, looking up what the national debt is on her phone and everything. And so I'm explaining this idea that, like, we're, you know, basically spending more than we make, and, like, at some point you gotta kind of pay that off.
[00:44:21] And then I actually started explaining the whole DOGE initiative with Elon Musk and cutting, you know, spending, and then the Big Beautiful Bill shows up and throws it all into chaos, and then he fights with Trump. So, like, all of this was, like, a seven-minute conversation on the way to school.
[00:44:34] I'm laughing, though, because there was this great tweet that I thought encapsulated this OpenAI-Oracle deal so well. It's from Yuchen Jin, who's the co-founder and CTO at Hyperbolic Labs. So we'll put the link in the show notes. This is his tweet: How money works. One, OpenAI signs a $300 billion GPU deal with Oracle.
[00:44:55] Now, keep in mind OpenAI doesn't have $300 billion; they just signed the $300 billion deal with Oracle. Two, [00:45:00] Larry Ellison gains $100 billion in net worth, no GPUs shipped. Three, Larry invests in OpenAI's $1 trillion round. Now, this is hypothetical, they haven't had a trillion-dollar round yet. But four, Sam uses the $300 billion to pay Oracle.
[00:45:13] Five, Oracle stock pumps again. Six, Larry makes another $100 billion. Seven, Larry invests in OpenAI. So basically, we're creating $300 billion out of thin air. And so I had to then explain to my kids, like, the idea of the stock market and how stocks, like, move. And I was giving these examples, like, say a share is, you know, $100, and then you announce this deal, and now you have $130 per share.
[00:45:35] And then that $30, it actually kind of shocked me, this conversation, now that I'm thinking about it, that they seemed to actually understand it, which is the incredible part. But that's kind of what this feels like: this is literally manufacturing $300 billion out of nothing by signing a deal that pumps the Oracle stock, which then makes Larry Ellison richer, which allows him to then invest in the next round of funding for OpenAI, [00:46:00] which then raises the valuation of OpenAI, which then gives them the money, which they then pay to Oracle, which raises Oracle's stock again, which Larry then invests.
[00:46:08] It's hilarious. Like, this is how this stuff works. So how you create the money out of nowhere, it's just part of capitalism, I guess. Yeah.
[00:46:18] Mike Kaput: And interestingly, if they follow through on this, all of this made-up, imaginary money could result in huge amounts of real-world, physical, yeah,
[00:46:26] Infrastructure,
[00:46:28] Paul Roetzer: actual infrastructure. Like, oh man, how the world works is amazing when you understand this is, like, how it all works. Like, you just look at the world differently.
[00:46:37] Mike Kaput: Yeah. I mean, it's easy to understand all this and then be like, eh, everything's kind of made up, isn't it? None of
[00:46:43] Paul Roetzer: it makes sense.
[00:46:44] Mike Kaput: Right.
[00:46:44] Paul Roetzer: And that's how we ended the conversation. So I drop 'em off, we're, like, walking to the front of the school, and my son's like, so the money doesn't really exist, but it comes... I'm like, all right, buddy, just have a good day. Yeah, right. We'll talk later.
[00:46:58] Mike Kaput: I love that. [00:47:00] All right, next up.
[00:47:00] Mike Kaput: Anthropic has agreed to pay at least $1.5 billion to settle a landmark copyright lawsuit, the first major AI class action case in US history.
[00:47:10] So we've talked about this a couple times on the podcast. The company allegedly downloaded over 7 million pirated books to train its Claude chatbot. Authors argued this amounted to industrial-scale copyright theft. Anthropic denies wrongdoing, but is going to end up paying around $3,000 per infringed work.
[00:47:28] Roughly 500,000 titles have been implicated so far. So a federal judge, as we talked about last time, had ruled that the training on copyrighted books was probably fair use, by his interpretation. But the issue was that they acquired all these books from piracy websites and libraries. So basically, that was theft in the judge's estimation.
[00:47:49] However, now what's happening is, though they have settled, this same federal judge, US District Judge William Alsup, has also criticized the settlement, [00:48:00] saying he might actually reject it, because he is questioning whether half a million pirated books is truly the final number and whether the claims process will fairly reach all the eligible authors.
[00:48:12] So he demanded, basically, a final list of works by today, when we're recording, September 15th, and a reviewable claims process by the 22nd. If these issues aren't resolved, this could collapse and go to trial, or the bill could be even larger. So Paul, when we last talked about this a few episodes ago, the latest update we had was that the settlement had been reached, but we didn't know for how much. Now we see a massive $1.5 billion, but that number sounds like it could go up, or this case could even go to trial.
[00:48:43] I don't know how likely that is, but the fact they're questioning this already seems tough.
[00:48:48] Paul Roetzer: Yeah, the $3,000 per work seemed really low, because wasn't it, like, $150,000 per work they were potentially on the hook for? I think so, yeah. Yeah. So we had talked about it at the time as, like, a potential [00:49:00] extinction-risk event for Anthropic if they had to pay $150,000 per stolen book.
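For scale, here's the quick arithmetic behind the two figures being compared; the per-work and worst-case lines simply multiply out the numbers reported above:

```python
# Back-of-the-envelope math on the reported figures (illustrative only).
settlement = 1.5e9   # reported settlement floor
works = 500_000      # titles implicated so far

print(settlement / works)  # 3000.0 -> the ~$3,000 per infringed work

# Statutory damages for willful copyright infringement can run up to
# $150,000 per work, which is where the worst-case exposure comes from:
print(f"${works * 150_000 / 1e9:.1f}B")  # $75.0B -> the "extinction risk" scenario
```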
[00:49:06] So the $3,000 just seemed like a slap on the wrist, especially given the numbers that they're currently raising at. So yeah, I don't know, it'll be interesting to see where this goes. There were, like, two other cases related to this stuff that we didn't even put into the show last week and this week.
[00:49:22] And I noted for Mike, maybe next week we do a summary of some of these high-profile cases. There are a bunch of these things going on. Yeah. Like, this one's pretty far along, but this whole space is crazy right now with what's going on, and now it seems like we're getting to the point where, in the next six to 12 months, we may see some landmark cases come through that start to play out where this goes.
[00:49:45] Now, the other variable we've talked about before is that the federal government, currently, is not on the side of the copyright holders when it comes to this argument. And they want the AI labs to just take whatever they want, basically, to train and build these [00:50:00] models. So there's always that risk that the government steps in and asserts some leverage here.
[00:50:06] So, I don't know, it is interesting to follow. I was kind of laughing to myself when I was thinking about this one. It's like, all right, so let's say the fine goes up to 10 billion, you know, 10X it, okay, every author gets 30,000, whatever. All right, just go do a deal with Oracle, and then, right.
[00:50:22] Their stock will go up, and then they'll invest in your next round, and they'll pay the fine for you. And, like, that's how this all works. I can't see Anthropic going under as a result of whatever this number is at the moment. They appear to be able to raise whatever they want. And so if it looks like they're gonna get hit with a $10 billion fine, you just raise the extra 10 billion in the next round and you take care of it.
[00:50:43] Like, I don't know, it is a pessimistic view, I think, of how the legal system works. But that's kind of how I've always felt about this. What I've always said is, they're going to pay fines. They stole the stuff. Let's be honest, they all did it. Meta, Google, OpenAI, [00:51:00] and Anthropic. Every single one of them stole copyrighted material to train their models.
[00:51:04] We all know it. They all know it. The courts know it. It eventually just gets solved somehow, is what I truly believe happens, and probably through a bunch of really large payments to creators.
[00:51:17] Mike Kaput: Alright, next up. After a high-profile recruiting spree, Meta has, as of right now, poached more than 50 AI researchers from rivals like OpenAI, Google, Apple, and xAI.
[00:51:28] Many of them were lured with huge pay packages and promises of abundant compute. But now some of those hires are already leaving, frustrated by status battles and internal politics. So the company's secretive TBD Lab, which they've created as part of their Meta Superintelligence Labs initiative, is now working just steps from Mark Zuckerberg's desk, and it's become a flashpoint because it requires special badge access.
[00:51:55] It's not listed on any internal org charts, and it's seen as a really [00:52:00] important part of their superintelligence ambitions. So this is sparking resentment, according to some reports we're seeing, especially in the Wall Street Journal. Like, legacy employees are demanding raises or threatening to leave. Meta says that their moves here were already planned.
[00:52:14] It's not about, you know, poaching talent or anything, or kind of comparing to legacy talent. But we are starting to see these cracks here, Paul, where all these all-stars that they've recruited and paid a ton for are having some ripple effects. Not only are they not getting along with everybody, but now your legacy employees are starting to demand more.
[00:52:35] Paul Roetzer: Yeah, I mean, maybe they didn't see this coming, I don't know. Like, do this in any organization. Take any department and say, like, Mike, say we did this to you. All right, we're gonna go get some new podcast hosts and some people who can create courses and create content. And Mike, you've been amazing, but we're gonna pay them seven times more than you, and it's gonna be in the [00:53:00] media that they're getting seven times more than you, and they're gonna get offices closest to mine.
[00:53:04] We're gonna move you just a little bit over there, maybe about a hundred yards away, so I don't have to see you. And, like, Mike's gonna be like, yeah, that sounds great, let's do that, I feel really good about this situation? Of course you're gonna have resentment and frustration, and the people who are there, who have been building the thing, are gonna want more money, and they're gonna wanna be paid like those people.
[00:53:24] I don't comprehend how this wouldn't have been foreseen, but, I don't know, they didn't prepare for it properly. But yeah, the fact people are already leaving... and I don't know, we've talked about this openly on the podcast for the last couple months. This just seems like a train wreck waiting to happen.
[00:53:46] Like, you don't put 10, 12, 50 people in a room who are all now paid ridiculous amounts of money, who are all the best of the best, and think they're all gonna get along and work really well [00:54:00] together. And all the A-players who are already there, and Meta is stacked with talent, they're just gonna be like, all right, cool.
[00:54:05] Like, we're now not important. And, I don't know, it's pretty predictable. It's, I guess, entertaining to watch to a degree. But yeah, I don't know. We'll see what comes of it.
[00:54:20] Mike Kaput: Yeah, I guess I can't say I'm shocked. As amazingly competent as Mark Zuckerberg is, the fact he didn't account for personalities and feelings and interpersonal dynamics does not really surprise me. Maybe they don't care.
[00:54:34] Paul Roetzer: I don't know. I don't know. Yeah. I mean, that was the Wall Street Journal article. It was like, Meta is facing a quintessential management problem: how to recruit and retain top talent while keeping remaining employees satisfied and maintaining harmony across the organization.
[00:54:48] Like, I don't know how you do that in that environment.
[00:54:52] Mike Kaput: All right, next up. The job market may currently look fine on paper, but for millions of workers, it's becoming a real problem. So according to The Atlantic, there's low unemployment and rising wages, which is great, but behind the scenes, [00:55:00] hiring seems to have nearly frozen.
[00:55:07] They document how, for job seekers, especially young ones, the experience of trying to get a job is kind of hellish. Like, applicants are flooding the market with AI-generated resumes, employers are overwhelmed and using AI to read them, and as a result, nobody's getting hired. For instance, one graduate applied to 200 jobs and got zero responses.
[00:55:27] They have a bunch of anecdotes like that in their reporting, and it's kind of this vicious, AI-driven loop, it sounds like, where job seekers are using things like ChatGPT to sound professional, so companies deploy AI filters and chatbots to weed out that noise. Now, the reason we mention this, and we've talked about that phenomenon a little bit in past episodes, is because on top of this, at the same time, we're getting a potentially new preliminary revision from the Bureau of Labor Statistics saying that job growth through March 2025 has [00:56:00] been overestimated by about 911,000 positions.
[00:56:04] So if this goes through, this would mark the largest annual downward revision in US history. They're confirming whether this will happen next February. Now, on top of all this, the researcher Erik Brynjolfsson, who we've talked about the past few weeks and who studies AI's impact on labor, had this to say about this phenomenon.
[00:56:23] He said, quote: the big revisions in the jobs numbers have two important implications. One, the economy is in the midst of a bigger disruption than most people realize. Two, productivity is growing faster than most people thought. Because productivity equals GDP divided by hours worked, if there's a smaller denominator, it means productivity growth may be 0.5% faster than previously estimated.
[00:56:45] So Paul, we don't know why the job numbers were actually so much lower in reality, but Brynjolfsson's proposition here, that we're in the midst of a bigger disruption than many people realize and productivity is [00:57:00] growing faster than most people thought, is kind of almost exactly what we've been waiting or expecting to see show up in economic numbers as a result of AI's impact, isn't it?
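To make the denominator effect concrete, here's a quick back-of-the-envelope version of that formula; the GDP, hours, and jobs-base inputs are placeholder assumptions, and only the roughly 911,000-job revision comes from the story:

```python
# Productivity = GDP / hours worked, so revising jobs (and therefore hours)
# down revises measured productivity up. All inputs below are assumptions
# except the ~911,000-job revision discussed above.
gdp = 27e12            # assumed annual GDP
jobs_base = 160e6      # assumed total payroll jobs
hours_before = 260e9   # assumed total annual hours worked

hours_after = hours_before * (1 - 911_000 / jobs_base)  # fewer jobs -> fewer hours

prod_before = gdp / hours_before
prod_after = gdp / hours_after
print(f"Productivity revised up ~{(prod_after / prod_before - 1) * 100:.2f}%")
# -> about +0.57%, the same order as the ~0.5% faster growth he cites
```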
[00:57:09] Paul Roetzer: Yeah, it's definitely what you would expect if AI was starting to have that impact. And to remove the politics from this, 'cause I know this has become a hot-button issue in politics: it is a broken system. Like, these revisions are weird. If you don't follow along with how this works, you can all of a sudden think that one side is maybe doing this to the other side.
[00:57:27] They're making, you know, jobs political as we go into midterms next year. It creates some friction. That being said, it's how it's always worked. It wasn't like this system was invented three years ago and is now, you know, hurting the current administration or anything. This is the flawed system America has been using for a long time.
[00:57:49] It's a hard thing to get accurate numbers, so it is a process. They go through these regular revisions. So, CNN, there's an article we can link to that talks about these [00:58:00] revisions coming. It says, prior to Tuesday's release, economists predicted that a large downward revision was likely due to three primary factors.
[00:58:07] Weaker-than-inferred job creation at new firms, which could be AI-related. They're not saying it is, but it could be. Sampling errors resulting from declining survey responses. So they depend on people who run companies to report this stuff, and if they're not getting the same reporting rates, then there's higher error rates, basically, in the data.
[00:58:28] And, to some extent, adjustments for asylum seekers and other undocumented workers. So that could all play a role. So nowhere is at least this article saying specifically AI, but that is certainly the undertone of all of this: that maybe it is starting to have its impact. And then there's an Atlantic article that we'll link to whose title is "The Job Market Is Hell."
[00:58:51] Yeah. And I'll just read a couple of quick excerpts from this one, and then we'll kind of move on to the next topic. It says: right now, millions of [00:59:00] would-be workers find themselves in a similar position. Corporate profits are strong, the jobless rate is 4.3%, and wages are climbing. In turn, payrolls have been essentially frozen for the past four months.
[00:59:10] The hiring rate has declined to its lowest point since the jobless recovery following the Great Recession. In a recent survey, chief HR officers told the Boston Consulting Group that they're using AI to write job descriptions, assess candidates, schedule introductory meetings, and evaluate applications.
[00:59:27] In some cases, firms are using chatbots to interview candidates, too. Prospective hires log into a Zoom-like system and field questions from an avatar. Going back to our AI avatar topic. Their performance is taped, and an algorithm searches for keywords and evaluates their tone. I've got two other parts I'm gonna read, but I'm gonna zoom out for a second.
[00:59:45] If you haven't been in the job market, or if you're not involved in the hiring process, you may have no idea that this is how HR is working right now. And that's why we thought it was an important thing to sort of bring to the forefront here: this is what's happening in the world. I have talked with [01:00:00] people recently who've been looking for jobs, and this is the kind of experience that they're seeing.
[01:00:03] So the Atlantic article continues: still, a lot of job applicants never end up in a human-to-human process. The impossibility of getting to the interview stage spurs jobless workers to submit more applications, which pushes them to rely on ChatGPT to build their resumes and respond to screening prompts.
[01:00:23] And the cycle continues: the surge in same-y, AI-authored applications prompts employers to use robot filters to manage the workflow. Everyone ends up in a tenderized job-search hell, which I thought was a pretty funny way to describe it. And then the final one: for months, the economy has been in a low-hire, low-fire equilibrium.
[01:00:42] Virtually every sector of the labor market, except for healthcare, has been frozen. The amount of time a worker has spent looking for a job has climbed to an average of 10 weeks, meaning that Americans are spending two weeks longer on the job market than they were a few years ago. The share of American workers quitting a job [01:01:00] has fallen to its lowest level in a decade.
[01:01:02] Because of concerns about rising prices and jitters about slowing growth. So again, everything may be gravy to you. Maybe you're working in the AI space and you're in high demand, and you're seeing your salary going up, and, like, things are good, and maybe you live in a bubble where things are really good. But when you get out of that bubble, things aren't so great.
[01:01:22] And it is really hard to find work, especially at that younger-worker level. We talked a couple weeks ago about ages 22 to 25, with unemployment rates at, like, what, 13%? Was it, Mike? Yeah. And then underemployment, who knows? It's probably in the twenties. It's a very, very delicate market right now that could sway.
[01:01:43] And again, the politicians are very, very well aware of that. So anything you hear related to jobs, the economy, you have to understand where that information is coming from and what the intention of the distribution of that information is. [01:02:00] Because we are entering midterms in the United States in like three months.
[01:02:04] Basically, we're heading into that, that cycle. This is going to be a major, major topic,
[01:02:11] Mike Kaput: Somewhat related, we had a listener of the pod ask us a question on LinkedIn that we wanted to address. And this was...
[01:02:20] Paul Roetzer: Public, by the way. This was a public post. Yeah.
[01:02:23] Mike Kaput: So, Matt Brooks, basically after listening to us talk through some of this stuff in episode 1 66, kind of was asking, look, if companies replace too many human workers with ai, which some people are projecting who is left to buy what they're selling?
[01:02:39] He said, what's interesting to me is that we're predicting tremendous value for companies as they adopt AI and reduce human workers. But if we replace humans with ai. Who is going to buy the products they're trying to sell? I mean, if they're losing income or being, let's say underemployed, even making less due to ai.
[01:02:57] How did Paul, how are you looking at that [01:03:00] question?
[01:03:00] Paul Roetzer: Yeah, so I thought this was a really smart question, and I actually did comment on this one. So, like, sometimes I see these questions and I don't have the brainpower available to think deeply and provide a good response, but Matt's, I thought, was a great question and something that we're seeing a lot of. And I get this question in private talks; I'll go and do talks for executives, and I will get these kinds of questions quite often.
[01:03:23] So I thought it was great that he put it out there. So again, for our listeners: if you post stuff on LinkedIn, there's actually a pretty good chance Mike and I do see it. Sometimes I don't have the time to engage with it, but we appreciate these kinds of commentary, and the whole idea of this podcast is to help drive these kinds of questions throughout society, like, get people thinking about these topics and then asking the hard questions.
[01:03:48] So people in their network start to think about these things. So, you know, kudos to Matt, it is a great question. I'm just gonna read my response, because it's probably the best I can do to respond to something I already said. So: you're asking one of the most important [01:04:00] questions. The first challenge was that economists were largely in denial about the impending impact on jobs because the data didn't support it yet. As we've discussed in this episode, I feel like that mindset is shifting.
[01:04:11] We are now seeing economists asking the hard questions. The second challenge is that the AI labs and leaders building the tech that will cause the disruption didn't see it as their job to address the impact and help solve for it. That was, in their view, the job of philosophers, sociologists, and economists.
[01:04:29] They preferred to talk about abstract ideas like universal basic income, but with no actionable details or roadmaps. As we've talked about today, Mike, the People-First AI Fund from OpenAI is a shift in the mindset here.
[01:04:42] Paul Roetzer: So two years ago, you know, OpenAI was doing a study on universal basic income.
[01:04:47] That was gonna be the answer. Then they realized, oh yeah, that's probably not gonna be the answer. We can't just assume that that's gonna show up and solve everything. We need to invest heavily in AI literacy in the economy. Let's build a People-First AI Fund and let's start [01:05:00] doing something. So again, we're now seeing the shifts.
[01:05:03] The third challenge was that government leaders didn't care, because AI wasn't moving the needle on votes. As the impact on jobs accelerates, politicians are realizing AI may become a central topic in upcoming election cycles. So in short: we have no idea the answer to your question, Matt, but the people who need to care, to create a sense of urgency, seem to be coming around to the idea that it could be a near-term reality they all have to solve for.
[01:05:28] So, I mean, again, we didn't set up the podcast this way to build up to this topic, but as you will see, all three of those things are things we've already talked about on the podcast. All three of those stakeholders are now realizing what's coming, and they're all now trying to do something about it.
[01:05:45] Whether or not they're successful, I don't know. And I don't know what this looks like when people aren't making the money to buy the products and services, buy the goods, so we can increase the GDP. Like, who's buying it if they don't have jobs? That is the great [01:06:00] question. If unemployment overall were to reach 13%, like what we're seeing at the entry level, if it were 13% across the economy, we've got major, major problems.
[01:06:12] And that's the stuff that I think economists are starting to realize. I've talked about this before, but I've had leading economists laugh at me, like, three years ago, when I was talking to them about this and saying, hey, maybe you guys should be doing more to model this and prepare for this. I literally had one leading economist tell me this wasn't in his top 10 concerns, like, 18 months ago.
[01:06:31] And I was like, I don't think you should be on a stage right now. Yeah. So...
[01:06:38] Mike Kaput: Yeah, it feels like, at the very least, we need a more robust conversation around it, because I'm just so tired of seeing people say, well, AI's gonna create jobs, or, like, UBI or whatever. Like, okay, great. What's the next sentence?
[01:06:53] What's the next question?
[01:06:55] Paul Roetzer: That was it, and we talked about this with the AI-forward CEO memo. Yeah. Was it last week on the podcast? Where [01:07:00] it's like, that's fine, you can state that. But how? And so what the labs have done, I mean, I've literally listened to interviews with Demis, who I respect more than any of them.
[01:07:11] Yeah. And Sam, who are like: that's not our job. That's what philosophers do, that's what sociologists do, that's what economists do. That's not us. We build the tech. And, like, they could sit there and think about it, but that's not what they're going to work to think about. They're going to work to think about how to build the next frontier model.
[01:07:30] And so it's true, like, it's really not Demis's job to think about this. But yeah, hopefully the labs are gonna continue to do more around this.
[01:07:39] Mike Kaput: Well, like you alluded to, they now have the incentive to, because it may not be your job to think about it, but when there are real-world societal impacts, your job is to figure out how not to get regulated out of existence.
[01:07:50] Yes. Because of the backlash around this, right? Correct. Yeah. All right.
[01:07:55] Mike Kaput: So, another interesting topic this week. A startup called Inception Point AI is betting that its business model, which is flooding the internet with AI-hosted podcasts, more than 5,000 shows and over 3,000 new episodes a week, will pay off.
[01:08:09] So they're using AI to produce podcasts at an extremely cheap rate. Each episode they do costs about a dollar to make, and "the dollars make sense if 20 people listen," they said. According to an interview in The Hollywood Reporter, the episode then turns a profit, thanks to the programmatic advertising attached to the episode.
[01:08:28] And that's basically their entire pitch: ultra-cheap, endlessly scalable, fully automated audio content. So, like I mentioned, they've done more than 5,000 different shows, and they claim to be churning out more than 3,000 new episodes per week. Each of those episodes takes like an hour to create. The company's digital hosts are AI personalities, and they're assigned to shows ranging from really mundane weather updates and niche biographies to more in-depth, longer-form shows.
[01:08:58] They actually use AI to [01:09:00] generate topics based on trending searches, build out the scripts, and customize the voices. They also say they've racked up 10 million downloads across this network since 2023, and they're experimenting with short-form video and influencer-style social media as well. And the CEO of this company said to The Hollywood Reporter, quote: I think that people who are still referring to all AI-generated content as AI slop are probably lazy Luddites, because there's a lot of really good stuff out there.
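As a rough sanity check on that "a dollar an episode, profitable at 20 listens" claim, the break-even arithmetic works out if you assume programmatic ad rates along these lines; the CPM and ad-load figures here are our assumptions, not numbers the company has shared:

```python
# Break-even listens for a $1 episode under assumed programmatic ad economics.
cost_per_episode = 1.00
cpm = 25.0           # assumed $ per 1,000 ad impressions (programmatic audio)
ads_per_listen = 2   # assumed ad slots actually heard per download

revenue_per_listen = (cpm / 1000) * ads_per_listen  # $0.05 per listen
print(cost_per_episode / revenue_per_listen)        # 20.0 listens to break even
```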
[01:09:27] So Paul, I'd love to get your take on this. I'll be honest, this sounds like a terrible idea to me. I'm not offended at all; like, I know that's where this is going, as a podcast host. I'm a realist, so none of this surprises or offends me. I'm just more like: I already think there are crappy podcasts hosted by humans out there, and I don't even have enough time to listen to the good stuff.
[01:09:48] So like, I don't need more noise.
[01:09:50] Paul Roetzer: Yeah. I hate this so much. So, contextually, you know, I was talking about my days with HubSpot. So back in, you know, I started my agency in [01:10:00] 2005, and we were a content agency. Like, that's where Mike and I started working together, at that agency. Mike joined us in... when was it, 2002?
[01:10:05] Mike Kaput: 2012.
[01:10:06] Yeah.
[01:10:06] Paul Roetzer: Twelve, yeah, yeah. So you'll remember this era very well then, Mike. So Mike came in as a content specialist. Like, he was a writer by trade, and then developed into a leading strategist, and eventually played a key role in building everything we're doing with AI, coauthored books, all these things.
[01:10:23] But in 2011, 2012, when we were building what we were building with HubSpot, when we were scaling a content agency, content farms were the thing. Where, like, let's say you hired a freelance writer, it could be, let's say, a dollar a word, two dollars a word. You're hiring them to, like, create a research report or write an article or a video script, and you would pay roughly on a per-word basis. Like, that was kind of a common way to do it back then.
[01:10:44] So then these content farms show up in, like, late 2009, 2010, '11, and you could pay two pennies a word. [01:11:00] And people did it. And it was because there was SEO value; like, Google hadn't caught up to the slop that was being created yet.
[01:11:06] And so you were rewarded in organic search results for the crappy content you were putting out. And so we were talking with companies at that time, and they were like: hey, listen, I know paying you guys 5,000 a month would help, but I mean, I can get, like, 10X more crappy content from these people, and they tell me it's gonna help my SEO.
[01:11:28] So instead of the four posts with you guys that are, like, really high quality, that humans create, we're gonna have these other humans, who are willing to do it for 2 cents a word, do it, and we're gonna create 40 crappy posts. And you're like, okay, well, that's gonna crumble eventually. Like, that's not a model that's sustainable.
[01:11:44] That is, you know, cutting corners. It might work for six months, 12 months, I don't know. But usually, if there's, like, an ickiness to the strategy, it probably eventually falls apart. And I think this is one of those. You look at it, and it's like, God, [01:12:00] this just feels gross. Like, yes, you could do this.
[01:12:03] You could create a podcast, you can put it on YouTube and Spotify, you can juice it with, like, you know, a thousand dollars a month in YouTube ads, which you can then turn into $3,000 a month in ad dollars. Totally a viable model. Doesn't mean it should happen. And then, I can almost guarantee you, there's no AI verification process in this.
[01:12:23] You have these models creating this crap, like a script that the podcast AI hosts read, and nobody's ever verifying if any of it's actually true. It all sounds good, and maybe 80% of it's viable. Like, I hate this. It's inevitable, but it's just a crap business model. And I understand that people do things to make money, and that's cool.
[01:12:47] Like, we live in a capitalistic society that allows that to happen. Yeah. Doesn't mean we have to like the fact that they do it.
[01:12:53] Mike Kaput: Yeah. And you could interest me far more in a larger conversation about how AI-generated content can be good. It [01:13:00] can resonate, it can be interesting. There's nothing saying that that's not what they're doing.
[01:13:03] But this business is going to zero. It's an arbitrage play. Like, the moment the platforms wake up to it, it's gonna be like Google's Penguin update; those content farms went to zero.
[01:13:13] Paul Roetzer: Yeah. You're at number two, and then all of a sudden you can't even be found in the search results. Yeah. But again, it takes the fortitude of the distribution channels to do something about it.
[01:13:23] So it takes YouTube being able to verify and say, okay, yeah, AI-generated podcasts just are not gonna get, you know, shown in the search results. Or Google, or, you know, Apple Podcasts, Spotify. It's gonna take them saying, oh wait, this is ruining our platforms by having all this AI-generated content. And again, I don't know this company; maybe they do have some verification process, and this is actually, like, really valuable stuff that people want, and maybe this company is legitimate. But it's just a blueprint for somebody else who isn't, who just wants to make money, right, to show up and do it.
[01:13:54] So it's one of those where you look and say, okay, if that's where this is gonna go, it's [01:14:00] gonna ruin the fun for everybody. It's what marketers and entrepreneurs sometimes do. Like, you get a good thing, and then people show up and go, oh, I can make a bunch of money on this, and then they ruin it. And I don't wanna see podcasting ruined.
[01:14:12] Right.
[01:14:13] Mike Kaput: Same. All right, next up. The FTC has launched a sweeping investigation into AI chatbots designed to act like AI companions, especially when used by kids and teens. So the Federal Trade Commission is demanding answers from seven major AI players, including Meta, OpenAI, and Character.ai, about how AI is being used to form relationships with users.
[01:14:36] So obviously, as we've talked about before, bots from these companies can simulate emotions, intentions, and friendship, sometimes convincingly enough that users trust them like real people. We've talked about people from all walks of life, not just teens, forming deep attachments to AI chatbot companions, whether that's platonic or even romantic.
[01:14:58] So the FTC wants to know [01:15:00] what steps companies are taking to prevent harm, especially to kids. So Paul, we've talked about this on a lot of episodes, and I fear we'll be talking about it a lot more. When I saw this, even though I thought, like, this is still very early and we'll see how it actually plays out, I did have this kind of initial visceral reaction of, thank God someone with actual power is starting to take this seriously.
[01:15:23] Paul Roetzer: Yeah. And again, this might be one of those where you live in a bubble where this isn't a thing. You know, kind of like the jobs thing: maybe everything's cool, maybe you don't personally use them as a companion, and maybe you don't even know people who do. But it is, like, what, one of the top three most popular use cases for ChatGPT?
[01:15:41] Yeah. And I thought Allie K. Miller tweeted a good anecdote here that I'll just read, 'cause, you know, I think it's representative of what's going on, again, if you're not familiar with how this is working. So she said: a weird story about AI companions and spouses. A friend was chatting with a woman, let's call her Steph, at the gym, about mental [01:16:00] health.
[01:16:00] Steph has been married seven years and loves her husband, but has found a second companion in ChatGPT. She would have long, live voice chats with a male voice in ChatGPT, first about basics like her workouts, then eventually about her whole life, including mental health support and discussing her husband.
[01:16:16] Her husband finds out she's doing this. They argue about whether she should continue, and he demands Steph switch the voice to a female voice. Steph understands his point of view and switches the voice, but feels like she lost her friend. We will hear a lot about parent-child dynamics as it relates to AI.
[01:16:34] We're going to hear a lot more from life and romantic partners. So I shared that, and I said: reality is increasingly feeling surreal. Yeah. So again, just if you're not aware of what's going on: if you have teenage kids, this is a reality. Like, for the next generation, it's gonna be normal to have these kinds of relationships with an AI.
[01:16:58] And again, [01:17:00] like, there's no right or wrong here. We're not, you know, judging this person for their relationship with it. We are just presenting this as: this is where society is going, and you need to be aware of that for many reasons, depending on where you are in your life and where your family and friends are.
[01:17:16] Things that sort of seem like they're straight out of a movie are going to be part of your daily life.
[01:17:22] Mike Kaput: Yeah, no kidding. All right. Next up,
[01:17:25] Mike Kaput: AI is transforming retail and we've recently come across some fascinating case studies in this industry. So we wanted to quickly share those, just because they're great examples of kind of what's possible here.
[01:17:36] These come from a recent report in Fortune. So, three different companies using AI. First is Walmart. They've rolled out real-time AI systems across the US, Canada, Mexico, and Costa Rica. These tools now spot consumer trends as they emerge, forecast demand, and shift inventory before products run low.
[01:17:54] One standout system is called Trend-to-Product. It tracks signals from social media and search data, turns them [01:18:00] into mood boards, and feeds those ideas directly into product development. Amazon has also deployed new agentic AI systems that can forecast demand, map global logistics, and coordinate robotics.
[01:18:12] They unveiled a tool called Wellspring, which is a generative AI that maps logistics networks, and they also debuted an AI forecasting engine that helps balance global inventory. Finally, the grocery chain Albertsons, which has over 2,200 stores, built predictive models that estimate how many shipments are arriving each day and then sync staffing levels to meet them.
[01:18:34] So the result is less overstaffing and fewer delays; they stock shelves 15% faster during peak seasons. They've also started using AI to scan messy supplier emails and PDFs, extracting delivery changes or risk factors that might otherwise slip through the cracks. So Paul, these are the exact types of case studies and leaders we've been featuring in our AI for Industries course series and other academy content through AI Academy.
[01:18:59] [01:19:00] Great to see more of these being published and publicized, in my opinion. Did anything in here jump out at you in particular about how retail's using AI?
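To picture what that Albertsons-style "sync staffing levels to forecasted shipments" idea might look like in practice, here's a minimal sketch; the naive forecast rule and the throughput number are our own invented assumptions, not anything the company has published:

```python
# Minimal sketch: forecast daily inbound shipments, then schedule enough
# stockers to unload them. The forecast rule and throughput are assumptions.
from datetime import date, timedelta

PALLETS_PER_STOCKER_SHIFT = 12  # assumed pallets one worker can unload/stock

def forecast_shipments(day: date) -> int:
    """Stand-in for a real predictive model; here, a naive weekday rule."""
    return 90 if day.weekday() < 5 else 140  # assume heavier weekend deliveries

def stockers_needed(pallets: int) -> int:
    return -(-pallets // PALLETS_PER_STOCKER_SHIFT)  # ceiling division

start = date(2025, 9, 15)
for offset in range(7):
    d = start + timedelta(days=offset)
    p = forecast_shipments(d)
    print(d.isoformat(), f"{p} pallets -> schedule {stockers_needed(p)} stockers")
```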
[01:19:08] Paul Roetzer: I just love to see the tangible examples. You know, the whole premise behind the name SmarterX is you can build a smarter version of any business.
[01:19:15] So X is the variable. Like, it was funny, my son was actually asking me, what does the X stand for in SmarterX? And the X is the variable. You can build a smarter organization, you can build a smarter marketing department or sales department, you can build a smarter version of your career. Like, that was the whole premise behind the name years ago when I created it.
[01:19:30] And so I love to see these transformations occurring, and we get asked all the time, Mike, for those examples of companies that are doing this well. This is actually the kind of thing that's the inspiration for the AI transformation course series we're gonna be launching as part of AI Academy Live.
[01:19:44] And then we're actually planning on doing an AI transformation series for the podcast, where we go interview the leaders from brands that are doing this kind of transformation. So we're gonna make an effort to do a lot more of this kind of applied AI on the podcast and through our courses, so people can, you know, get [01:20:00] inspired by examples they're seeing from other organizations and realize the almost infinite number of use cases there are for AI in business.
[01:20:07] Mike Kaput: Yeah, no kidding.
[01:20:09] Mike Kaput: Alright, to wrap up here, Paul, we've got a few quick AI product and funding updates I'm gonna run through, and then we'll kind of bring it home here. Sounds good. So, first up: Cognition, the startup behind Devin, the AI software engineer, just raised $400 million at a $10.2 billion valuation. This company is barely a year old. In a year, they've grown from $1 million to $73 million in annual recurring revenue.
[01:20:31] They also acquired Windsurf, an AI-powered development platform, which doubled ARR again. Perplexity, the AI-powered search startup, has locked in another $200 million in funding, this time at a $20 billion valuation. This is just weeks after its last raise at $18 billion; they've pulled in about a billion total in funding every few months so far.
[01:20:51] ElevenLabs, which we've talked about in the past, is letting employees sell shares at a $6.6 billion [01:21:00] valuation, again, double what it was just months ago. The $100 million tender offer is led by Sequoia and gives longtime staff a chance to cash out without waiting for an IPO. In some product- and lawsuit-related news, a couple things that we'll be tracking, like we talked about, on future episodes:
[01:21:18] Perplexity is facing a lawsuit from Encyclopedia Britannica and Merriam-Webster. They claim Perplexity has scraped their websites, plagiarized their definitions, and even misused their trademarks. Midjourney, one of the leading AI image generators, is facing a serious legal challenge from Warner Bros. Discovery, which is suing the company for copyright infringement, specifically for letting users generate images of its iconic characters.
[01:21:42] It is not alone; we talked previously about how Disney and Universal filed similar suits earlier this year. This next one might be something we end up revisiting; we'll kind of see how it plays out. A new wearable called AlterEgo just dropped, and it's being described as near-telepathic. So this is born out of research at [01:22:00] MIT, and what it does is pick up the silent signals your brain sends to your speech system before you even say a word.
[01:22:07] So it's not really your thoughts, exactly, but, like, what you intend to speak, which means that using this device, you can type, search, or interact with apps using nothing but silent intent. The breakthrough that they're kind of touting is something called Silent Sense, which captures everything from mouthing words to pure, motionless intention.
[01:22:26] So it's like AI as a mind extension that could eventually let you have a full conversation without making a sound. And then finally, NotebookLM. Google has given its AI research assistant, NotebookLM, another upgrade, turning it into a full-on study partner. It now generates flashcards and quizzes from your notes, lecture slides, or research papers.
[01:22:47] You can set the difficulty, share sets with friends, and even ask follow-up questions to understand what you missed and why. It's also helping you create smarter reports: you can upload a research article, and it might suggest a blog [01:23:00] post, glossary, or even a character analysis tailored to the content, not just the format.
[01:23:05] They've also got a new learning guide feature that pushes deeper understanding, asking open-ended questions instead of just spitting out answers.
[01:23:13] Paul Roetzer: Mike, I'm gonna add a request for our academy team for the gen AI app reviews. So we do these, like, every week. I know we've got a full calendar coming up, but I would love to see app reviews of the learning guide in NotebookLM and the guided learning that's available in Gemini.
[01:23:26] Yep. And here's, here's my use case: I've been using guided learning to help me help my kids. Yep. And I want to actually take it to their school and say, listen, you all should be integrating this. But I don't have the time right now to, like, build the deck to pitch them on, hey, you should actually be very proactively integrating this, because you have students who could benefit from it.
[01:23:51] It's just different than a standard AI assistant. But the perception at schools is [01:24:00] that AI is AI. This is different. And so I would love to see more schools very aggressively exploring the learning guide within NotebookLM and guided learning within Gemini and ChatGPT, because my personal experience has been that it is a game changer for working with your kids through problem solving. Not giving them answers, but actually showing them how to solve something, so they learn in the process.
[01:24:26] So these seem like really small announcements, like, really small features. But when you understand the implications of them, they're potentially, like, massive transformations of the educational system. And I don't think it's being talked about enough.
[01:24:40] Mike Kaput: Yeah, for sure. We'll get those on the docket.
I'm eager to explore them myself. We'll put in the requests. Yeah. All right, Paul. Well, thanks for breaking down another action-packed week in AI for us.
[01:24:50] Paul Roetzer: Good stuff as always. And we will be back at the regular time next week. Everyone have a good one. Thanks for listening to The Artificial Intelligence Show. Visit [01:25:00] SmarterX.AI to continue on your AI learning journey, and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the Marketing AI Institute Slack community.
[01:25:21] Until next time, stay curious and explore ai.