OpenAI says it's aiming to build a fully automated AI researcher by 2028...
And it has completed its transition from non-profit to for-profit company, paving the way for an eventual IPO.
This week, Paul and Mike talk about those stories and more, including a warning from the Fed chair about AI's impact on hiring, a new index measuring how well agents do remote work, and Nvidia's $5 trillion valuation.
This week's episode also covers a new report on corporate AI adoption from Wharton, the concerning rise of AI "nudify" apps, and much more.
Listen or watch below, and scroll down for the show notes and transcript.
This Week's AI Pulse
Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI.
If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
00:05:48 — OpenAI Sets Automated AI Researcher Goal
- X Post: Sam Altman Teases OpenAI Livestream
- X Post: Livestream Highlights and Reactions
- X Post: Altman Summarizes Livestream
- X Post: Roundup of OpenAI Livestream
- Seizing the AI Opportunity - OpenAI
- Paul Roetzer’s LinkedIn Post
00:18:06 — OpenAI Completes Restructuring and Eyes IPO
- Built to Benefit Everyone - OpenAI
- Microsoft Gets 27% Stake in OpenAI - The Information
- What OpenAI’s Restructuring Means - The Information
- Next Chapter: Microsoft–OpenAI Partnership - Microsoft Blog
- X Post: Jason Calacanis on OpenAI Deal
- X Post: Robert Wiblin on OpenAI Concessions
- OpenAI Preps Trillion-Dollar IPO - Reuters
- X Post: Altman on IPO Speculation
- X Post: Sam Altman and Satya Together
00:31:49 — Is AI Responsible for a New Wave of Layoffs?
- Navigating the AI Jobs Apocalypse - Axios
- Jerome Powell says the AI hiring apocalypse is real: ‘Job creation is pretty close to zero’ - Fortune
- X Post: Jobs Data and AI Commentary
- Amazon Plans Major Corporate Job Cuts - Reuters
- UPS Q3: Earnings and Layoffs - The Wall Street Journal
- Amazon Layoffs and AI Impact - Business Insider
00:41:45 — Remote Labor Index Project
- Remote Labor Index Project - Remote Labor
- X Post: Dan Hendrycks on Remote Labor
- Remote Labor Index: Measuring AI Automation of Remote Work - Remote Labor
- Ep. 176 of The Artificial Intelligence Show
00:47:46 — Mercor Quintuples Valuation
00:52:40 — Nvidia Valuation
00:56:54 — Wharton AI Adoption Report
- Wharton 2025 AI Adoption Report - Wharton Knowledge
- X Post: Ethan Mollick on AI Adoption
- X Post: OpenAI COO on Strategy
01:02:01 — Nudify Apps and Public Figures Getting Deepfaked
- Rebecca Bultsma’s LinkedIn Post: Warning on Nudify Deepfake Apps
- X Post: Michio Kaku on Deepfakes
- X Post: Brian Cox on Deepfakes
- Neil deGrasse Tyson says ‘the earth is flat’ as he reveals terrifying AI deepfake video - Independent
01:06:55 — Google Labs Introduces AI Marketing Tool
This episode is brought to you by our MAICON 2025 On-Demand Bundle.
If you missed MAICON 2025, or want to relive some of your favorite sessions, now you can watch them on-demand at any time by buying our MAICON 2025 On-Demand Bundle here. Use the code AISHOW50 to take $50 off.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: We've just entered a very different phase in society where the things we've always worried about being possible are now possible and most of society still doesn't know it's a thing. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.
[00:00:20] My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and marketing AI Institute Chief Content Officer Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.
[00:00:42] Join us as we accelerate AI literacy for all.
[00:00:49] Welcome to episode 178 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We're recording on Monday, November 3rd. Oh, it's 9:50 AM for me, Mike, [00:01:00] but I'm on the West Coast, so,
[00:01:02] Mike Kaput: oh no, that's, it's 9:50 AM for you. It's six, yeah. 9:50 AM here. The morning's already gone here
[00:01:08] Paul Roetzer: it is.
[00:01:09] My, my computer apparently doesn't update. It is 6:50 AM where I am. I'm actually in San Diego, speaking at the Cisco Partner Summit this morning. So we had to kind of move this up so I could make it to my talk in time. And then I've got another one, I'm actually heading from San Diego to Orlando this week to keynote the Sitecore Symposium event.
[00:01:27] So this is the event circuit week for me. Yeah, so we were actually gonna try and do this on Friday and there's just, there was no physical way to do it. I'm glad we waited. 'cause it's like, I always prefer to do it on Monday. It, it's like, then you can catch whatever news ends up happening like late Friday.
[00:01:44] So, okay. So this episode is brought to you by MAICON 2025 On-Demand. We've been talking a little bit about this. We've had a, a ton of sales of this. So I know people have been, you know, taking advantage of catching the 20 featured sessions from the main [00:02:00] stage and the general breakouts from MAICON 2025 just a few weeks ago.
[00:02:05] As I mentioned, there are 20 of these sessions. You can get them all right now, from becoming an AI-driven leader by Geoff Woods, to Mike's talk on 30 AI tools shaping the future of marketing, Andy Crestodina on better prompting, Michelle Gansle on empowering teams in the age of AI, the human side of AI inside the leading labs.
[00:02:26] Just amazing stuff. So go check it out. We'll put the link in the show notes. You can use AISHOW50 to take $50 off of that on-demand package, and then you get immediate access, so they're ready to go. And I know people have been enjoying 'em. I've been hearing a lot from people who've been taking advantage of that, and a lot of people who were at MAICON just kind of reliving these things.
[00:02:46] So, yeah. check that out. And then, exciting news. We've, we've talked a little bit about this idea of this AI pulse survey, and it is here. So Mike over the last week has put this together. you can kind of come on this maiden [00:03:00] voyage with us. We're gonna kind of fine tune this process as we go, but the first AI pulse survey is live.
[00:03:05] So, as a recap, if you, if you weren't listening in the last couple episodes, we kind of tease this idea. The basic premise here is we're talking about stuff all the time in this podcast, and it would be really fascinating to hear what our listeners think about it. And, you know, podcasts are generally speaking, unless someone reaches out and like sends you a message, you don't, you don't really know anything about your audience.
[00:03:25] It's pretty hard to learn who they are and how they feel about these different topics. So we thought, why don't we start asking a, a couple of questions each week and learn a little bit about how people are generally feeling about this. We're gonna give this a go. they're just through Google Forms. It is.
[00:03:39] We do not collect email addresses. This is not an effort to build a database and start marketing to our audience. If you're logged into Google, it'll show your email, but we're not take, we're not getting that email. So this is not a marketing play, this is purely a research play. We will ask at times for like, title, industry, size of company, things like that only so that we can actually, [00:04:00] connect that to the responses and try and kind of segment the response a little bit.
[00:04:04] We have no idea how many people are gonna respond. So we may put this out and get, you know, 50 responses. We may get 500, we may get 5,000. We have no idea. the podcast has about, I think right now we have about 115,000 downloads a month. So it's a pretty good audience size. So we feel like probably get a pretty decent response on some of this stuff.
[00:04:22] So today's is live. We'll put the link in the show notes. It's just, what is it, SmarterX.ai slash, this will be on Pulse.
[00:04:29] Mike Kaput: This will actually be on the podcast site. So the, oh, it's on the podcast. The actual, yeah, the actual link will be podcast.smarterx.ai/ai-pulse.
[00:04:39] Paul Roetzer: Okay. So we, we'll include that link in the show notes.
[00:04:40] Yeah. Alright, so this week we're gonna ask what, which of the following statements best describes your current personal feeling about AI's impact on job security? That's gonna be a topic, that we're gonna touch on again today. And then we've got which statement best describes your personal day-to-day use of AI tools in your professional work?
[00:04:58] So we're trying to kind of get a [00:05:00] sense of how people are actually using these things. I would assume, I, this is always the fun thing, is like you try to play these guesses of like, how our audience might respond to some of these things. Yeah. Right. I'll be really fascinated on the job security one, Mike, that that'll be very intriguing.
[00:05:13] okay, so that, that is that again, we don't collect the emails. If you want to stay up to date on these responses, we will share them the following week. So next week on episode 179, we'll share the results from this one. We will also include them in the exec AI insider newsletter that I do each Sunday that comes out.
[00:05:31] So you can subscribe to that on the site and we'll put a link in the show notes for that as well. Okay. So, we will be efficient with our time today, Mike, since I gotta go do a keynote in about an hour and a half. But, let's dive in. OpenAI was very, very busy last week.
[00:05:48] OpenAI Sets Automated AI Researcher Goal
[00:05:48] Mike Kaput: Yes, they are busy as always. So our first big topic is there was a livestream this past week featuring OpenAI CEO Sam Altman, Chief Scientist Jakub Pachocki, and co-founder Wojciech Zaremba.
[00:06:02] And they unveiled what they are saying is maybe one of the most consequential updates they've done this entire year. And so during this live stream, they announced internal goals to create an AI research intern by September, 2026, and a fully autonomous AI researcher by March, 2028. In other words, they said they're aiming to develop AI that does scientific research specifically in the realm of ai.
[00:06:28] First by assisting human researchers with their work and then by doing that research on its own. So Pachocki explained that the 2026 system they're aiming for should, quote, meaningfully accelerate OpenAI's human scientists by running vast experiments across hundreds of thousands of GPUs.
[00:06:47] Two years later, the goal would be a model that can autonomously deliver complete research projects, which is a step Altman called tremendously important if it works. Now, Altman said that though they may fail in this approach, they [00:07:00] chose to reveal these internal timelines because, quote, given the extraordinary potential impacts, it's in the public interest to be transparent about this.
[00:07:10] Now, the livestream also set the stage for OpenAI's recent corporate transformation, which we will talk about more in our second topic. And this one we're going to focus on the AI researcher, but they also talked about their shift to a public benefit corporation under control of a nonprofit foundation.
[00:07:26] So I guess, Paul, my first big question here, AI researcher, like, it sounds like this is real. This is happening. They're at least attempting to get there. Why are they announcing this now? What are they actually trying to aim for here?
[00:07:40] Paul Roetzer: Yeah, this is something that we've talked about numerous times on the podcast.
[00:07:44] We've known that OpenAI and other labs have been focused on this idea of building these research agents. On Sam's tweet about this from last week, he said: in 2026, we expect that our AI systems may be able to make small new discoveries; [00:08:00] in 2028, we could be looking at big ones. This is a really big deal.
[00:08:04] We think that science and the institutions that let us widely distribute the fruits of science are the most important ways that quality of life improves over time. So, Mike, I was going back and looking through some previous episodes last night in, in preparation, and one that I landed on was our episode 145 from April 2025.
[00:08:24] And in that episode we talked about an article from The Information titled "OpenAI Forecasts Revenue Topping $125 Billion in 2029 as Agents and New Products Gain." So this actually becomes really important to one of the topics we're also gonna talk about today, the idea of the restructuring of the company and the potential IPO.
[00:08:45] So in that article, they said by the end of the decade, the company has told some potential and current investors, it expects combined sales from agents and other new products to exceed its popular chatbot, ChatGPT, [00:09:00] lifting total sales to $125 billion in 2029 and $174 billion the next year.
[00:09:07] They even project that by 2029 agents alone could bring in $29 billion a year. And this is the really important part, selling high-end AI workers ranging from $2,000 a month for what they were calling knowledge agents to $20,000 a month for research agents. So what is an AI researcher? Basically, scientists who study and invent new ways to build increasingly intelligent AI systems that can think, understand, reason, and take action.
[00:09:36] So, they invent new algorithms, theories, insights that push the field of AI research forward. Traditionally they published academic papers. That doesn't happen very much anymore. A lot of AI labs have sort of locked up all of their innovations to put into their own products.
[00:09:53] But pre-ChatGPT, AI researchers published a lot of papers. They prototype novel model architectures. They [00:10:00] sort of take shots on goal, for lack of a better way of saying it. They're, they're testing things, so they don't do full-blown training runs of models. They're trying to find new innovations.
[00:10:09] Like sometimes they're very small innovations, like we saw with the DeepSeek model outta China, trying to find small ways that might scale. And so they run these tests and they design experiments and they come up with a hypothesis, things like that. So they're basically creating these new ideas and algorithms, and then the engineers build them.
[00:10:26] They, they turn these ideas into real products. So why would OpenAI be focused here? It is a highly competitive space, so the researchers are constantly moving between labs. As we've talked about many times, top researchers can earn millions or hundreds of millions a year, as we've seen recently with Meta.
[00:10:43] Paul Roetzer: Their breakthroughs can create billions or potentially trillions of dollars. So the eight researchers listed on the "Attention Is All You Need" paper from 2017 invented the Transformer, that created all of this, that, that created the generative AI age. [00:11:00] So by automating the work, the AI intern could help with things like reading and analyzing papers, generating hypotheses, and then maybe like conceiving of simple experiments to run.
[00:11:11] And then by the time they build the full-blown automated system, it can now design, execute, and analyze research projects with minimal human oversight, if any is even needed. So, if OpenAI hits these milestones, the speed and scale of research could shift dramatically. Projects that currently take human researchers months or years might be done in hours or days.
[00:11:34] New insights could emerge in fields like medicine, material sciences, physics, which are all the areas OpenAI has said they're interested in. And I'm sure Google is doing the same thing. Yeah. Zuckerberg has said they wanted to have an AI researcher, you know, by next year. So the idea of this AI researcher means the role of human researchers could shift to where they're supervising and curating, and sort of redefining research agendas that the agents then go execute. [00:12:00]
[00:12:00] And this likely ties to plans for energy and compute power that we've been hearing a lot about with OpenAI. So once you automate research, you basically need infinite compute because there's no longer human limitations to discovery. So now you can say, Hey, pick any industry, pick any problem, anybody who's willing to pay us like enough money.
[00:12:20] Then you just say, all right, what, how, how would we solve this if we had the top AI researchers in the world? And if you've now automated what a top AI researcher does, you can just throw them at these problems. So for commercial sectors like biotech, marketing, engineering, access to these research-level AI, automated agents means you can accelerate product innovation, reduce R&D costs, explore novel business models, and even the intern model on its own could already start showing up in these enterprise tools as smart assistants that do the research and things like that.
[00:12:53] And so I actually posted, ironically, I wasn't even thinking about this when I posted this, but as I was flying on [00:13:00] Sunday to San Diego, I was in the airport and I saw a quote from Coursera's CEO. So Coursera, the e-learning platform, e-learning company, publicly traded. And the CEO's quote caught my attention because we're building an e-learning platform, like I envision building SmarterX into a company that has hundreds of thousands, if not millions, of learners in our platform.
[00:13:20] And so he was talking about AI skills becoming essential and demand has accelerated, and we're seeing 14 enrollments per minute in our catalog of more than a thousand generative AI courses. So I'm like, oh, this is really interesting. I had to find this transcript. So I literally, as I'm standing in line to get on the plane, I'm like searching for the transcript, I'm reading the transcript.
[00:13:37] And then I realized like, ah, if I had an, like a research assistant that could do this for me every quarter, and it could go pull these quarterly earnings calls and then analyze them and look at the commentary from the CFO and the CEO and like the analyst questions, and this would be great if I could summarize this and then assess it versus our business model.
[00:13:53] So I put on LinkedIn, like this whole workflow of like how this would work. I was like, I'll just share this, like this might inspire other people to like, think [00:14:00] differently about the problems they're facing. So in a matter of literally like five minutes, I created this entire what could be a system prompt for an agent to do this.
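For readers who want to picture the workflow Paul is describing, here is a minimal sketch of an earnings-call analyst agent. It is an illustration only, not the system prompt Paul shared on LinkedIn: the model name, prompt wording, and file names are assumptions, and it uses the standard OpenAI Python SDK chat completions call.

```python
# Minimal sketch of an earnings-call analyst agent (illustrative only).
# Assumes OPENAI_API_KEY is set, a plain-text transcript file exists locally,
# and the model name below is available to your account.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """You are an earnings-call research analyst.
Given a quarterly earnings call transcript and a summary of our business model,
produce: (1) a brief summary of CEO/CFO commentary and analyst questions,
(2) implications for our pricing model and course roadmap, and
(3) a short strategic brief with recommended actions."""


def analyze_call(transcript_path: str, business_context: str, model: str = "gpt-4o") -> str:
    """Run one earnings-call transcript through the analyst prompt and return the brief."""
    transcript = Path(transcript_path).read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Business model context:\n{business_context}"},
            {"role": "user", "content": f"Earnings call transcript:\n{transcript}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical file name and context, just to show the call shape.
    brief = analyze_call("coursera_q3_earnings_call.txt", "SmarterX: AI education, research, and media company.")
    print(brief)
```

Scheduling something like this to run each quarter against newly published transcripts is the automation step Paul says he would still need to build.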
[00:14:08] And so you realize like what they're doing, it's, it's not any different than what I was thinking about doing with our stuff. They're just focused on doing it for AI research 'cause that's what they do. But I think of the AI researcher as a bit of a canary in the coal mine because if you have the ability to build these agents and you don't even need technical ability to do them, then people like me as the CEO of a company can just like envision an agent for something and just go build it.
[00:14:34] And so I think the disruption that could create in, in their world is significant. For me, that one little example of an earnings call research analyst, which by the way, I tagged Dharmesh Shah, friend and co-founder of HubSpot, and I said, Hey, could I build this on agent.ai, because he's got this, you know, secondary initiative he's been building there, sort of a professional network for agents.
[00:14:56] And he said, yeah, actually, I built something like that. He has an earnings call [00:15:00] analyst agent that he had built. So he said you might be able to kind of adapt this one for what you're looking for. So yeah, this is a big deal because it unlocks so much more. If you think of all the innovation we've experienced in the last 15 years, in particular with artificial intelligence, and really the last three years when we drill in, it's driven by AI researchers who come up with novel ideas of how to improve these models and then they experiment on them.
[00:15:25] If you can run those experiments 24/7 and you have enough data centers to where there's no limitation, it's literally just coming up with an idea and then going. And if you have a symphony of these things, hundreds, thousands of these things, it changes everything.
[00:15:39] Mike Kaput: And correct me if I'm wrong, but the reason, one of the reasons they said they were doing this and making this announcement is because it's in the public interest due to the transformative changes that would happen. Is the implication there, like, what you would call a fast takeoff scenario?
[00:15:55] It's like. If AI researchers become a real thing, the rate of [00:16:00] AI improvement starts to grow exponentially. If they unlock new discoveries, new advancements, it's like we could get a very exponential curve in the growth of AI abilities, which would have huge societal implications.
[00:16:14] Paul Roetzer: Yeah, I think that is certainly part of it.
[00:16:15] It also fits with their iterative deployment approach. So as they come up with new capabilities in these models, they want society to have time to get used to them and adapt to them. So yeah, in this scenario, I think it's gonna have a dramatic impact on a lot of different areas. And then again, if they've been able to build an AI research agent, they're going to be able to build all these other agents.
[00:16:35] So that $2,000-a-month knowledge agent. So imagine, you know, the example I gave of this earnings call analyst. If they have a $2,000-a-month version of ChatGPT that is a knowledge agent that can basically do anything, I imagine I could literally just take the workflow I created, that I shared on LinkedIn, drop it into ChatGPT and say, build me an agent for this.
[00:16:58] Yep. And then it would, it would [00:17:00] build it because I actually did an MVP of that concept. I took the earnings call, I gave it to ChatGPT, my Co-CEO GPT, which is trained on our business model. And I said, analyze this and find anything relevant for me. Build me a strategic brief, and then find anything that would affect, like our own pricing model, course roadmap, things like that.
[00:17:20] And it did it while, again, I was standing in line to get on the plane. Mm. So I know it'll work. I would just have to automate some components of it. So yeah, imagine that agent mode. And would I pay $2,000 a month for it? Hell yeah. Like if I could build agents on the fly by just having ideas, that actually could go do work that would've taken me five to 10 hours per earnings call.
[00:17:41] I'm saving 30 hours a quarter. Yeah. That I just wouldn't have done otherwise. Yeah. So I think that, again, to get to the amount of revenue they want to get to, to IPO, the idea of a studio where you can build your own agents based on simple prompts [00:18:00] makes a ton of sense. And I think they think they can do that within the next two years.
[00:18:06] OpenAI Completes Restructuring and Eyes IPO
[00:18:06] Mike Kaput: Wow. Alright, so let's talk about that second piece there, that IPO. Because in our second big topic, as I kind of alluded to, OpenAI has completed its long-anticipated restructuring. They officially converted from a capped-profit model into a traditional for-profit company and have set the stage, according to some reports we're hearing, for
[00:18:28] having one of the largest IPOs in tech history. So as part of this completed transition from a nonprofit to a for-profit company, Microsoft has actually taken a 27% ownership stake in OpenAI. So they've kind of resolved the next chapter of their partnership there, and they have also extended their exclusive access to OpenAI's models through 2032.
[00:18:51] And basically this move, now that it's completed, ends OpenAI's hybrid nonprofit structure that was originally, they said, designed to balance [00:19:00] public benefit with investor returns. CEO Sam Altman said this change ensures the company can scale safely while pursuing its mission to benefit humanity. So basically, with the nonprofit owning stakes in OpenAI, they say it would be the most well-resourced nonprofit in history, but then it frees them up to actually go raise a bunch of capital. Some observers, like investor and podcast host Jason Calacanis,
[00:19:24] who we've talked about a lot on the All-In Podcast, argue that this is just a cynical business strategy to, like, start out as a nonprofit and get tax advantages and then switch to a for-profit structure. But regardless, transitioning away from the nonprofit model likely clears the way for OpenAI to go public. Reuters reports that OpenAI is already preparing for an IPO that could value the company at up to $1 trillion.
[00:19:51] So Paul, we've been talking about this on and off for months, if not a year. Now the deed is done. What do you think? Maybe give us [00:20:00] some more details on what the structure's gonna look like. Like, was this necessary given how they started off as a nonprofit, then evolved into this incredibly successful consumer company?
[00:20:11] Or is this more of a cynical play? Like some of the critics are saying?
[00:20:16] Paul Roetzer: Yeah, I mean it is essential to raise the kind of money they wanna raise. So this is like Elon Musk's big beef with Sam, and it is funny 'cause they had a Twitter interaction on Sunday. Yeah. Where, you know, Elon was again accusing him of basically stealing all this.
[00:20:31] And, and Sam was replying, like, you tried to take it all and put it into Tesla. Like, can't we just move on with our lives, basically? So there were a number of dominoes that needed to fall for this to happen. We've talked about them again over the last year, but in episode 167 in September, the memorandum of understanding between OpenAI and Microsoft had been signed.
[00:20:52] So, at that time it said OpenAI and Microsoft have signed a non-binding memorandum of understanding for the next phase of our [00:21:00] partnership. We are actively working to finalize contractual terms in a definitive agreement. Together we, we remain focused on delivering the best AI tools for everyone, grounded in our shared commitment to safety.
[00:21:12] So Microsoft's tentative blessing gave OpenAI the green light it needed to present its for-profit restructuring plan to state regulators, 'cause they also needed the attorneys general from California and Delaware to bless the plans. So again, if you're kind of new to all of this, Microsoft has invested about $13 billion into OpenAI, and they could absolutely have been a blocker in allowing this to happen.
[00:21:33] But Microsoft obviously stands to benefit from this happening as well. And then if we rewind back to May of 2025, on episode 147, that's when Sam had published a letter to employees saying, this is the direction we're going. So, in that letter, called "Evolving OpenAI's Structure," he said OpenAI was founded as a nonprofit and is today overseen and controlled by that nonprofit.
[00:21:57] Going forward, it will continue to be overseen [00:22:00] and controlled by that nonprofit. Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a public benefit corporation, which is a purpose-driven company structure that has to consider the interests of both shareholders and the mission.
[00:22:16] The nonprofit will control and also be a large shareholder in the PBC, giving the nonprofit better resources to support many benefits. Our mission remains the same and the public benefit corporation will have the same mission. So that is, that is what happened. So they obviously, as of May, were sort of on this path; they needed Microsoft's blessing, they needed the attorneys general's blessing.
[00:22:37] I did see a note last night from the Financial Times that said the Delaware Attorney General has made it very clear that she is willing to take legal action against OpenAI if they do not live up to being a public benefit corporation, if they don't put the interests of society ahead of their profits. Basically, for her to agree to this, [00:23:00] Sam had to make legally binding concessions about what they were going to do.
[00:23:04] Yeah. And so that is something to watch. I would not be shocked at all if they kind of fall short of what the Attorney General believes to be the expectations. It does pave the way for the IPO. One thing to factor in is OpenAI is losing an insane amount of money. So we got our first real look at what might be those losses based on Microsoft's earnings report from last week.
[00:23:31] So Microsoft owns 27% of OpenAI as of the restructuring. It stands to reason, under equity accounting, so this is from The Register, that it bears 27% of OpenAI's losses. Microsoft's admission in its earnings reports that it shaved $3.1 billion off its net income to account for its share of OpenAI's losses
[00:23:51] therefore suggests OpenAI lost about $11.5 billion during the quarter. So if we just played that out times, you [00:24:00] know, times four, they're, they're losing like $40 to $50 billion this year. So the losses are mounting, but it's more about what they want to do from here. And if they can get toward a hundred billion annually in, in revenue, even if we're forward-looking a hundred billion.
[00:24:14] They could, in theory, go through an IPO at a trillion-dollar valuation. And just for context, that would make them, as of this morning, the, what, 11th largest company in the world. Wow. 12th largest company in the world. So JPMorgan Chase is $846 billion. Walmart's $800 billion. Eli Lilly $773 billion, Oracle $748 billion, Visa $658 billion.
[00:24:40] So they would be right at the level of Berkshire Hathaway, which as of this morning is just over $1 trillion. Tesla's $1.5 trillion, TSMC $1.5 trillion, and then you've got Meta, Saudi Aramco, Broadcom, Amazon, Alphabet, Microsoft, Apple, Nvidia, and only Amazon, [00:25:00] Alphabet, Microsoft, Apple, and Nvidia are over $2 trillion. I mean, we're, we're talking about, at the scale they're going, you know, you're looking at that kind of escape velocity of getting to $2 trillion, probably within 18 to 24 months.
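For anyone following the equity-method inference Paul cites from The Register a moment earlier, here is the back-of-the-envelope version. The annual figure is a naive times-four extrapolation of one quarter, not a reported number.

```python
# Back-of-the-envelope equity-method math (rough extrapolation, not reported figures).
microsoft_stake = 0.27      # Microsoft's reported ownership of OpenAI
microsoft_charge_b = 3.1    # hit to Microsoft's quarterly net income, in billions of dollars

implied_quarterly_loss_b = microsoft_charge_b / microsoft_stake   # ~11.5
implied_annual_run_rate_b = implied_quarterly_loss_b * 4          # ~46

print(f"Implied OpenAI quarterly loss: ~${implied_quarterly_loss_b:.1f}B")
print(f"Naive annualized run rate:     ~${implied_annual_run_rate_b:.0f}B")
```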
[00:25:12] So this matters, like this is a very significant company. And then Mike, the one thing I'll just kind of wrap here with my thoughts. I did listen on the flight here to the BG2 podcast. Yeah. So this is Brad Gerstner, you, you had, I think, mentioned him. So he sat down. It was, honestly, it was a very awkward interview.
[00:25:30] It was Satya Nadella from Microsoft and Sam Altman. And Brad is an investor in both, I assume, but I know he's a big investor, his company's a big investor, in OpenAI. So it was talking about this restructuring, their agreement, things like that. And it got really awkward. So to Brad's credit, like he, he was, he was asking hard questions and Sam was not happy about it.
[00:25:55] Like it was very, he did not hide it well. So a, a couple quick [00:26:00] notes, and I would suggest people go listen to the whole episode. It is a fascinating listen. So they, they talk about AGI. So if anybody's been following along for the last couple years, one of the big stipulations in the Microsoft-OpenAI deal is that it's basically voided if OpenAI achieves AGI.
[00:26:16] And historically the OpenAI board dictated when that occurred. So Microsoft's access to OpenAI's tools, technology, all these things was contingent on an agreement of what AGI is, basically. So Brad says, given that both exclusivity and the rev share end early in the case AGI is verified, it seems to make AGI a pretty big deal, as I understand it.
[00:26:39] If OpenAI claimed AGI, it sounds like it goes to an expert panel and you guys basically select a jury who's got to make a relatively quick decision whether or not AGI has been reached. And this is Brad still: Satya, you said on yesterday's earnings call that nobody's even close to getting to AGI and you don't expect it to happen anytime soon.
[00:26:58] You talked about the spiky and [00:27:00] jagged intelligence. Sam, I've heard you perhaps sound a little bit more bullish on when we might get to AGI. So I guess the question is, do you both, do you worry that over the next two or three years we're going to end up having to call in the jury to effectively make a call on whether or not we've hit AGI?
[00:27:18] Sam says, I realize you've gotta try and make some drama between us here. I think putting a process in place for this is a good thing to do. I expect that the technology will take several surprising twists and turns and we'll continue to be good partners to each other and figure out what makes sense. That was a non-answer.
[00:27:35] So Satya then says, well said Sam, and that's one of the reasons why I think the process we put in place is a good one. And at the end of the day, I'm a big believer in the fact that intelligence capability, is going to continue to improve. And so, Sam now starts to sound annoyed and this is pretty early in the interview, and obviously we got a complete non-answer from both of them as to like what AGI is.
[00:27:59] And like, [00:28:00] again, it's like a really fundamental thing. So then Brad says, okay, so he kind of switched gears, OpenAI is one of the fastest growing companies in history. Satya, you said on the pod a year ago, that, that the BG2 pod, that every new phase shift creates a new Google. And the Google of this phase shift is already known and it's OpenAI.
[00:28:18] And none of this would've been possible had you guys not made those huge bets. With all that said, you know, OpenAI's revenues are still reported at $13 billion in 2025 with, we now know, a loss of probably $40 to $50 billion. And Sam, on your livestream this week, you talked about this massive commitment to compute, $1.4 trillion over the next four or five years, with big commitments, $500 billion to Nvidia.
[00:28:42] So this is OpenAI spending this kind of money with these other companies. $300 billion to AMD and Oracle, $250 billion to, did I say 300 million? Yeah, that, I think that's supposed to be billion. $250 billion to Azure. So I think the single biggest question I've heard all week hanging over the market [00:29:00] is how can the company with $13 billion in revenues make $1.4 trillion of spend commits?
[00:29:06] You know, and you've heard the criticism. Sam, Sam is now visibly pissed. Like, I watched this clip on YouTube and then I listened to it on the flight here. He, he's, he's very upset. And so he says, we're doing well more revenue than that, first of all. Second of all, Brad, if you want to sell your shares, I'll find you a buyer.
[00:29:25] I just, enough, like, you know, people are, I think there's a lot of people who would love to buy OpenAI shares. I don't think you want to sell. Okay. A lot of these people who talk with a lot of breathless concern about our compute and stuff would be thrilled to buy shares.
[00:29:43] So I think we could sell your shares or anybody else's to some of the people who are making the most noise on Twitter, whatever about this. Very quickly. We do plan the revenue to grow sharply. and then he starts talking about the revenue takeoff and how it's actually probably way more than the 13 [00:30:00] billion.
[00:30:00] And so it started getting just super uncomfortable. Sam's obviously very upset. Brad is sort of like, if you're watching his facial expressions, like, oh man, I kind of hit the hornet's nest on this one. And then he tries to backtrack, Satya steps in and tries to defuse the situation. Basically saying, Hey, I've never looked at a business plan that they haven't beaten.
[00:30:17] Like, Hey, I, you know, I would believe in Sam kind of thing. I thought Sam was gonna like, bounce from the interview. Right? Then he, he ends up dropping a little later on. He stuck around for another like 20 minutes. But then they got into the IPO. And so now Sam's still, you know, obviously not enjoying this interview.
[00:30:32] Brad says, Sam, Reuters was reporting yesterday OpenAI may be planning to go public late '26 or '27. Sam: No, no, no. We don't have anything specific. I'm a realist. I assume it'll happen someday. I don't know where people write these reports. I don't have a date in mind. Brad said, but it does seem, if you guys were doing in excess of a hundred billion in revenue in '28 or '29, that it would at least be an option.
[00:30:55] And Sam said, how about 27? So he's basically like, [00:31:00] I, I'm like, they might've moved up their timelines to get that number. So I do think that there's a chance that if by, you know, late 2026, they're showing a trajectory that would get them to that a hundred billion number in 2027. You could see an IPO in 2026.
[00:31:17] Still, it seems like 27 is maybe more likely, but it does certainly seem inevitable that they would IPO.
[00:31:23] Mike Kaput: Yeah, I saw that interview blowing up on X. Wow. For how awkward it was.
[00:31:28] Paul Roetzer: Well, and Brad even tweeted like, Hey, people are misreading the situation. He actually joked about it after. I was like, no man, he's pissed.
[00:31:35] Like you can't watch that and not think that. It is almost to the point where I could almost imagine they get on that call and Brad's like, Hey, I'm gonna ask you like a couple of questions about this. And Sam's like, I don't really wanna talk about that. And then Brad asked anyway. He did it anyway. That was the vibe you get from it.
[00:31:49] Is AI Responsible for a New Wave of Layoffs?
[00:31:49] Mike Kaput: Alright, so our third big topic this week, the Federal Reserve chair, Jerome Powell, is warning that the US labor market may be entering a new kind of [00:32:00] slowdown, and it may be powered by AI. So at a press conference this week, Powell said that once you account for statistical overcounting and strip that out of the data, quote, job creation is pretty close to zero.
[00:32:13] And he then pointed to what CEOs are now telling the Fed in conversations they're having, as well as with investors, which is that AI is allowing them to do more with fewer people. So he basically just called out how AI is creating kind of a policy dilemma. On one end, it's boosting productivity and corporate investment, but it's also weakening hiring.
[00:32:35] Now, this has some pretty big effects because despite a 4.3% unemployment rate and steady spending, Powell said the surface strength of the economy hides what he calls a quote, bifurcated economy, where higher income workers are benefiting from AI driven productivity while lower earners are struggling with rising costs.
[00:32:56] Now, some recent headlines seem to bear this out. We heard this [00:33:00] past week Amazon plans to cut roughly 30,000 corporate roles, including 14,000 middle managers as it restructures around automation and AI systems. UPS has also announced workforce reductions tied to both weak demand and some automation.
[00:33:15] The news outlet, Axios echoed concerns about these layoffs and others calling this a potential white collar quote, AI jobs apocalypse, and the outlet wrote quote, why this matters. All of this amplifies publicly what we keep hearing from CEOs privately. Almost every company is planning to slow hiring in the short term and operate with much smaller human workforces in the future.
[00:33:37] Now, Paul, there's certainly more going on than just AI with these layoffs. I've seen interesting commentary about how a lot of these companies overhired, say, during the pandemic, or they just need to get fit, and AI is a piece of that story. But I don't know. I found it pretty sobering that amidst all these layoffs you're seeing the Fed actually admit that CEOs are [00:34:00] telling them that AI allows them to do more with fewer people, and that as a result, job creation is pretty close to zero.
[00:34:06] Did the Fed's comments strike you as particularly noteworthy here, since they're finally kind of saying something about this?
[00:34:13] Paul Roetzer: Yeah. Noteworthy and encouraging that they're finally accepting that this is what's going on. So, a again, if you've been a long time listener to the podcast, this is a topic we've been talking about for a couple years.
[00:34:24] I was sort of on my soapbox, like, why aren't economists talking about this more? Yeah. because I had met with leading economists, at least three top economists who basically blew me off in 2024 and 25, when I presented this as something that we should be thinking about. They did not think AI disruption to the job market was, I literally had one say, it's not in my top 10 things I'm worried about, and that was in late 2024.
[00:34:48] So I have been waiting for economists to sort of accept that this is something we need to be much more proactive about. So when you see these, the, you know, [00:35:00] the media headlines pulling quotes from Powell, you know, I always like hesitate to run with that stuff. So I actually went and found the actual transcript from his Federal Reserve meeting.
[00:35:11] So I would encourage people, like, you know, just do a keyword search for AI or artificial intelligence, and you can go see it for yourself. But I, I'll just read a couple quick excerpts. So an analyst asked him about this and he said, these are things we're watching very, very carefully. To start with the layoffs:
[00:35:26] You're right. You see a significant number of companies either announcing that they're not going to be doing much hiring or actually doing layoffs. And much of the time they're talking about AI and what it can do. So we're watching that very carefully. And yes, it could absolutely have implications on job creation.
[00:35:43] So, you know, I think they're seeing these signs of the job losses mounting. You can put them under whatever category you want. Like they can say, we just overhired during the pandemic, or whatever. Again, I, all I can say is [00:36:00] I have sat through many meetings with executives over the last 18 months and been told point blank that we are going to reduce workforces
[00:36:09] because of AI. Like, we may not say AI in the press release or, you know, in the media stories, but it's absolutely because of that, because they envision that it can drive efficiency and productivity to where you just need fewer people. So, again, like big picture, and we're gonna talk about this more in, in the next rapid fire topic.
[00:36:31] AI can't replace people yet, like it can't do full jobs. We heard the story up front. AI researchers, like, we're not at the point where it can do the job of an AI researcher and they're spending billions to do that, to like automate an entire job of an AI researcher. We aren't there yet, but if you take a team, let's just like, let's make this as tangible as possible.
[00:36:52] Take a, a team of 10 people. It could be a small business that's only 10 people. It could be a marketing team with 10 people on it. It could be a team within a marketing [00:37:00] department with 10 people on it, whatever it is, just envision 10 people. And you go get ChatGPT licenses and you build custom GPTs for each person to help them with like three to five of the things that they spend 50 percent or more of their time on, whatever.
[00:37:13] And you increase their efficiency. So they get things done faster by 20%. Do you need as many people anymore? Like it, it's just math. If and if there isn't demand for them to do more than what they're doing. So if your company is stagnant or it's growing modestly, like single digit growth each year. So there's not more work to do, there's no more products to sell, no more services to provide.
[00:37:37] 'cause there isn't demand for what you're doing. So let's say your growth stays roughly flat and you do the work 20% faster, or you produce 20% more in the same amount of time. Do you need as many people? The answer is no. You don't. Now you can carry that salary, you can carry that payroll because you want to do right by your employees.
[00:37:58] But if you're in a publicly traded company, [00:38:00] private equity backed or VC funded, you don't get that choice. You reduce staff. This is the, this is the equation I've basically been talking about for two years. This I, this isn't, I don't even know how you debate this. So if you're in a small business that says, Hey, we're just gonna like reduce our profit margin or whatever, like, we're gonna keep these people great.
[00:38:19] But to me the only answer is you have to grow, otherwise you don't need as many people. So again, it's not that the AI can do the full job, it's that the AI can do a bunch of tasks, an increasing number of tasks that saves time or allows you to do more work in the same amount of time. So you can go to 30 hour work weeks, you can, like, there's all these things you can do as a result of that.
[00:38:40] But most companies that are publicly traded, VC backed or private equity owned will not go to 30-hour work weeks, right? They, they will just demand more of their existing people. So that is the fundamental challenge we have here. And it seems like the Fed is now acknowledging that this is maybe a much larger problem than they [00:39:00] were aware of or acknowledging.
[00:39:01] I, again, I don't understand how they didn't see this. I'm just glad that they are and hopefully that leads to some efforts to like solve for this.
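To make the "it's just math" argument above concrete, here is the capacity arithmetic for the hypothetical 10-person team with flat demand. It is a simplified sketch that assumes the 20% efficiency gain applies evenly across everyone's work.

```python
# Simplified headcount math for the 10-person, flat-demand example.
team_size = 10
efficiency_gain = 0.20   # each person gets their work done ~20% faster with custom GPTs

# Output capacity after the gain, measured in "pre-AI person-equivalents".
effective_capacity = team_size * (1 + efficiency_gain)      # 12.0

# If demand stays flat, the headcount needed to produce the old output:
people_needed = team_size / (1 + efficiency_gain)           # ~8.3

print(f"Effective capacity: {effective_capacity:.1f} person-equivalents")
print(f"People needed for the same output: {people_needed:.1f} of the original {team_size}")
```

If demand grows faster than the efficiency gain, the math flips and you keep or add people, which is why Paul keeps coming back to growth as the only real offset.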
[00:39:12] Mike Kaput: Yeah. One interesting little kind of corollary I would add here is anytime we talk about this, I feel like I get people kind of like, fair enough, like arguing the point and like, I like those debates, but you know, an argument that people kind of come to and say, look, the US economy is driven by consumption.
[00:39:30] If people don't have jobs en masse, it's gonna break the economy. How on earth are these companies going to produce products that nobody consumes? Like, it's a valid point, but it's kind of an argument for, like, you must be missing something, this isn't gonna happen, 'cause a consumer economy would break.
[00:39:46] And I'd just like to leave this stat for people. This is from a story in the Washington Post last month that was talking about data points around these trends. And they actually mentioned that the top 10% of Americans who make [00:40:00] $250,000 or more a year now account for a record 49.2% of all consumer spending, up from about 35% in the 1990s, according to Moody's Analytics.
[00:40:10] So the stark truth, unfortunately, I think is, like, we don't need everyone to have jobs to keep the consumer economy humming based on those numbers. The stock market can rip, consumer spending can reach totally new heights, and a good amount of people could be out of work due to AI. All those things can be true at the same time, which is really complicated to me.
[00:40:29] Paul Roetzer: Yeah, great points, Mike. And the other thing that we've mentioned numerous times in the last couple months is underemployment too. Underemployment. Yeah. Yeah. So like, you know, all these college grads who come out with a hundred thousand or more in, in debt from, you know, their four or five years of education, who are taking jobs in retail because they can't, they can't go get a job in the industry that they, you know, wanted to be in or that their major was tied to.
[00:40:54] And there's nothing wrong with, with the, with the retail industry and working in the retail industry, if, [00:41:00] if that's what your career path is. But if you went to school to do something where you assumed you were gonna come outta school, making 120,000 a year, whatever that number is, and you're nowhere near that and not in the career path you intended to be in, that has dramatic effects, not only on the economy, but just people's mental wellbeing as well.
[00:41:18] And there's that whole other side of like, you know, our fulfillment as people and you know, it's great if we have like universal basic income from a financial perspective, right. If that was a viable thing. But like, does that help me feel fulfilled as a person? Like I, so yeah. I mean this starts really, the dominoes start to fall here.
[00:41:37] It took the Fed, accepting that this is something they needed to be more focused on.
[00:41:45] Remote Labor Index Project
[00:41:45] Mike Kaput: Alright, so you had referenced our rapid fire topics this week. We're gonna dive into those. And this first one is related to this because we're seeing a new paper from the Center for AI Safety in partnership with scale AI that introduces something called the Remote [00:42:00] Labor Index.
[00:42:00] And this is what they say is the first benchmark to measure how well AI agents can actually perform paid remote jobs. So they take these projects drawn directly from freelance platforms that span everything from game development and architecture to data analysis to video production. And they basically found this work that represents more than 6,000 hours of human work worth over $140,000 in wages.
[00:42:25] That's what you would pay out on these freelancer platforms to do that work. And they try to see how AI agents do. Now what they found is that right now they can't do this work totally autonomously very well. So they tested a number of top AI agents, none of them scored very well. They found that Manus, which is an AI agent platform, led with being able to automate 2.5% of all of this work.
[00:42:53] It was followed closely by Grok 4 and Sonnet 4.5, which were able to automate 2.1% fully. GPT-5 [00:43:00] managed 1.7% fully automated, and Gemini 2.5 Pro came in under 1%. So the researchers found here that the AI agent failures mostly stemmed from incomplete or low-quality deliverables. They were missing different assets.
[00:43:15] There were broken files, there was work that just wouldn't meet a client's standards. And so, while absolute automation remains minimal, they do also note that there is steady, measurable progress in the AI agent space.
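A minimal sketch of how an automation rate like the ones cited above could be tallied, assuming a simple pass/fail grade per freelance project. The actual Remote Labor Index scoring and any value weighting may differ, and the numbers below are made up.

```python
# Illustrative tally of an "automation rate": the share of freelance projects an agent
# delivered to an acceptable standard. Binary pass/fail grading is an assumption here;
# the real Remote Labor Index methodology may differ. All data below is invented.
from dataclasses import dataclass


@dataclass
class ProjectResult:
    name: str
    wage_usd: float   # what a human freelancer was paid for the project
    passed: bool      # did the agent's deliverable meet the client's standard?


results = [
    ProjectResult("logo design", 150.0, False),
    ProjectResult("data cleanup", 400.0, True),
    ProjectResult("game level build", 2500.0, False),
    ProjectResult("promo video edit", 800.0, False),
]

automation_rate = sum(r.passed for r in results) / len(results)
value_share = sum(r.wage_usd for r in results if r.passed) / sum(r.wage_usd for r in results)

print(f"Projects fully automated:       {automation_rate:.1%}")
print(f"Share of wage value automated:  {value_share:.1%}")
```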
[00:43:33] We're worried AI is going to impact jobs, but this shows that AI can't really yet do jobs fully autonomously. Like what should we be paying attention to here?
[00:43:44] Paul Roetzer: Just a quick note. So this research is tied to the definition-of-AGI paper that we did talk about on episode 176. So it's the same people basically putting this out, and when we talked about that one, they had said that they were, you know, at that point they were focusing on like the 10 basic [00:44:00] skills and traits of humans.
[00:44:01] And they're trying to track like these models' improvements tied to those things. So that was like human-level AI, this is the economy-level AI. So now they're actually trying to move beyond just like the intelligence side and say, okay, can it do the work that a human can do? And so they, yeah, they look at, you know, work that is actually available on these freelance sites and say, okay, can it do the job that's posted on these sites, in essence?
[00:44:21] So the main takeaway is, one, I'm not surprised at all that it's like only 2%. We're talking about using general agents that aren't specifically trained to do these tasks or jobs, that, that's the main takeaway here. So will it, will it increase? Yes, they will continually get better. But as we just talked about with the AI researcher, they're intentionally, intensely focusing on taking a base model and training it to do a specific job.
[00:44:47] And so my guess is OpenAI is way further along than 2.5% for that specific thing. We've talked about the METR research where the runtime, or the ability for an agent to complete work with [00:45:00] 50%, you know, accuracy, is doubling every seven months. That is for coding largely. So like, again, it's a very specific job. If you go in and say, Hey, I wanna build a marketing campaign, or I wanna do a sales development campaign, or I wanna do an employee upskilling and training program, whatever it is, the models aren't trained to do that.
[00:45:20] So they're, they're just these general models that have some capabilities like embedded within them. And then, like we've talked about with Mercor, which is the company we've mentioned numerous times, what they do is they go train them to do specific things. We talked about last week with OpenAI hiring Goldman Sachs bankers to train it to be an investment banker.
[00:45:37] So what happens is you take these base models and then you do the reinforcement learning to like tune them to be very good at a specific job. So right now, outta the box, these things are good at tasks. So if we think about, like, I always think about, tasks, projects, jobs is kind of like how I broadly think about it.
[00:45:56] So if there's like, you know, every month there's [00:46:00] like 25 major things I do as a CEO within those 25 things, there's like a collection of all these little tasks, all these, you know, I, these activities that I do to complete these projects. that's where the AI's pretty good. It's good at the tasks.
[00:46:12] It's not good at doing the full thing. So it can't replace me as a CEO, but there may be like 25 things every month it can help me with at a task level. So that's really where we're at. When we talk about agents, humans are still needed to set goals, plan and design the agents, connect the data sources, integrate supporting applications and tools, oversee 'em, verify the outputs, things like that.
[00:46:34] In almost all instances, the humans are still needed for that. and so what we start to look at is the runtime. How long can they work, on longer horizon tasks without the human needing to jump in and like fix things. And that's where the concept of actions per disengagement that I've sort of floated as like kind of how Tesla looks at self-driving.
[00:46:54] You look at the same kind of thing. And at the end of the day, what you're really looking at is the economic Turing [00:47:00] test that we've talked about, Mike, where you say, okay, is it to the point where I would hire an agent or a symphony of agents instead of a human? And in every instance I can think of, the answer is still no.
[00:47:13] Yeah. So again, agents are getting better, they're getting more autonomous in some industries, some jobs. They are not replacements to people, not even close in the vast majority of industries, but they increasingly support the human to where we just don't need as many humans, which goes back to our previous topic.
[00:47:33] As the agents get more autonomous, as they get more reliable, as more companies understand how to build and integrate them into workflows, you don't need as many people doing the work that you previously did.
[00:47:46] Mercor Quintuples Valuation
[00:47:46] Mike Kaput: And in terms of making those agents better, that's exactly what the next topic is about. You mentioned this, we're talking here about Mercor the company, which is providing, domain experts to leading AI labs to help them train foundational models.
[00:47:59] [00:48:00] And Mercor has apparently quintupled its valuation to $10 billion after raising a $350 million Series C led by Felicis Ventures. Now, what's interesting, Mercor, which we've talked about a number of times, used to be an AI-driven hiring platform. But now it's kind of pivoted to connecting AI labs with people like scientists, doctors, lawyers, to help them train those models to be domain specific.
[00:48:26] They say they pay out $1.5 million a day to more than 30,000 contractors who earn an average of 85 bucks an hour. And Mercor has also started to expand into reinforcement learning infrastructure, so systems that let AI models learn from the human feedback these experts are providing. They say they're on pace to reach $500 million in annual recurring revenue faster than any rival.
[00:48:50] And so Paul, I mean, I just read that and it seems like Mercor's runaway success is a very clear indicator of where this is all headed.
[00:48:58] Paul Roetzer: Yeah. So [00:49:00] again, just to make this as tangible as possible, if you haven't listened to the recent episodes where we talked about Mercor, their CEO's like 22, if I remember correctly, Mike.
[00:49:07] Yeah. Which is wild. And there's a couple of really great podcast interviews with him recently where he kind of tells the story. So, they're automating human labor, point blank. This is what they do. So imagine law firms, accounting firms, you know, consulting firms, analyst firms, take your pick.
[00:49:27] All it takes is someone to come to them, or one of the major AI labs to come to them, and say, hey, we'd like to automate the job of an entry-level accountant. How much would it take to do that? And then, let's just say the budget is $25 million or whatever it is.
[00:49:47] They will go find a hundred expert accountants, a hundred CPAs who are willing to work for $300 an hour to train a model, to fine-tune a model, to do the job of an entry-level accountant, up [00:50:00] until the point where it becomes as good as or better than an average human, or even an expert human, at that job.
[00:50:05] And now you've just automated that labor force. So that's the thing: again, if we stopped AI progress today and we just moved forward with Gemini 2.5 Pro and GPT-5, whatever, and we just did that, we would automate large portions of the workforce by just doing reinforcement learning on top of the existing models.
[00:50:30] We're not gonna stop. But that's why this company is worth so much money, and they will either get acquired or acqui-hired, like Scale AI was for $15 billion, or they will be a rocket ship that'll IPO within the next 12 to 18 months. And yeah, they're not the only player in this space, but this is how it's done.
[00:50:51] This is, what did their CEO call it? He had a name for it. The reinforcement learning economy, he said, yeah.
[00:50:58] Mike Kaput: The RL [00:51:00] economy, maybe? Yeah. The RL economy. Yeah.
[00:51:02] Paul Roetzer: He had like a paper about it where he basically said, this is it. Humans are basically gonna get paid to train models to do the work of humans.
[00:51:10] Like that's where a lot of the money's gonna come from. I was half joking with a couple of family members I was golfing with a few weeks ago who are retired, and I was like, you could probably be making 300 bucks an hour to train an AI model to do what you did for a living, if you just wanna make some easy money.
[00:51:26] Sure, I said, you gotta deal with the ramifications of the moral and ethical side of it. But that's where the money's going. If you're an expert and you're sitting around with some time on your hands, you could just join the Mercor economy and start training models to do what you did.
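To make the "RL economy" idea a bit more concrete, here is a minimal, hypothetical Python sketch of the kind of expert-feedback pipeline such a company might run: domain experts rank model answers, the rankings become preference pairs for reward modeling, and the expert hours translate into payouts. The data structures, rates, and helper names are illustrative assumptions, not Mercor's actual system.

```python
# Hypothetical sketch of an expert-feedback pipeline for the "RL economy":
# experts rank model answers, and the rankings become preference pairs that a
# reward model could later be trained on. Not Mercor's actual system.

from dataclasses import dataclass
from itertools import combinations

@dataclass
class ExpertReview:
    prompt: str            # the task given to the model (e.g., a bookkeeping question)
    ranked_answers: list   # model answers, best first, as ranked by the expert
    hours_spent: float     # time the expert billed for this review
    hourly_rate: float     # e.g., 300.0 for a senior CPA

def preference_pairs(review: ExpertReview):
    """Turn one ranked review into (chosen, rejected) pairs for reward modeling."""
    for better, worse in combinations(review.ranked_answers, 2):
        yield {"prompt": review.prompt, "chosen": better, "rejected": worse}

def total_payout(reviews: list) -> float:
    """What the labeling effort costs in expert fees."""
    return sum(r.hours_spent * r.hourly_rate for r in reviews)

reviews = [
    ExpertReview(
        prompt="Classify this invoice and book the journal entry.",
        ranked_answers=["Answer A (correct entry)", "Answer B (wrong account)"],
        hours_spent=0.25,
        hourly_rate=300.0,
    )
]
print(list(preference_pairs(reviews[0])))  # one (chosen, rejected) pair
print(total_payout(reviews))               # 75.0 in expert fees
```

The general pattern, paying specialists to produce comparison data that a model is then tuned against, is the mechanic the conversation above is describing.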
[00:51:39] Mike Kaput: Yeah, no kidding. And you know, we've said this a bunch of times, but tying it full circle back together, OpenAI's trying to IPO potentially at a trillion-dollar valuation. That revenue, that growth at that speed, comes not just from tackling the software-as-a-service market; it comes from tackling the [00:52:00] salaries of the people doing the work, which is an order of magnitude larger market than software.
[00:52:04] Yeah.
[00:52:05] Paul Roetzer: And again, to put it in perspective, the SaaS industry, the software-as-a-service industry, is roughly $300 to $500 billion a year in sales. And that's Salesforce, HubSpot, people like that. The US labor force is about $11 trillion a year, give or take, numbers vary, but let's say $5 trillion of that per year is for knowledge work.
[00:52:25] So that's the number you go after. If you're trying to IPO, or if you're trying to convince a VC firm to invest at a $10 billion valuation, you're automating human labor. You're not going after the SaaS industry.
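As a quick back-of-the-envelope check on that "order of magnitude" framing, here is a tiny Python calculation using the rough figures cited in the episode (SaaS at roughly $300 to $500 billion a year, knowledge work at roughly $5 trillion). The numbers are the episode's rough estimates, not precise market data.

```python
# Back-of-the-envelope comparison using the rough figures cited in the episode.
saas_market_low, saas_market_high = 300e9, 500e9   # global SaaS sales per year (rough)
knowledge_work_payroll = 5e12                       # US knowledge-work labor per year (rough)

low_multiple = knowledge_work_payroll / saas_market_high   # most conservative comparison
high_multiple = knowledge_work_payroll / saas_market_low   # most aggressive comparison

print(f"Knowledge-work spend is ~{low_multiple:.0f}x to {high_multiple:.0f}x the SaaS market")
# -> roughly 10x to 17x, i.e., about an order of magnitude larger
```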
[00:52:40] Nvidia Valuation
[00:52:40] Mike Kaput: All right. Next up, another valuation milestone. Nvidia has officially become the world's first $5 trillion company.
[00:52:47] Speaking of $5 trillion. Yeah, right. Shares surged past $211 a share this past week, rising nearly 5% at the time and setting yet another record for the chipmaker, which only crossed [00:53:00] the $4 trillion mark in July. So this is pretty much cementing Nvidia's absolute dominance and mission-critical position in the AI economy.
[00:53:10] I mean, they have explosive demand for their processors because those are the backbone of modern AI. Now, Apple trails them at $4 trillion, followed by Microsoft, Alphabet, Amazon, and Meta. Obviously those rankings could change day by day based on market cap. But the shares climbed after Nvidia announced a $1 billion purchase of Nokia shares and a new partnership to build what it calls AI-native 5G-Advanced and 6G networks.
[00:53:37] And then shares climbed even further after President Donald Trump's statement that he plans to discuss Nvidia's restricted Blackwell AI chip with Chinese President Xi Jinping. Now, just 18 months ago, apparently Nvidia was valued at under a trillion dollars. Now it's worth more than Amazon and Meta combined.
[00:53:54] So Paul, you're an Nvidia investor, I assume it's been a good week.
[00:54:00] Paul Roetzer: It's been a good, like, eight years.
[00:54:02] Mike Kaput: It has been, right?
[00:54:03] Paul Roetzer: Yeah. So, again, this is one of my favorite stats that I throw into keynotes every once in a while. On November 30th, 2022, the day ChatGPT came out, Nvidia's market cap was $422 billion.
[00:54:16] Mike Kaput: Mm.
[00:54:17] Paul Roetzer: And now it's $5 trillion. Wow. So yes, Nvidia is what has powered this, their AI factories, as they call them. It's not just the chips, it's the entire AI factories. It is what has enabled this generative AI phase to explode the way it has. I mean, obviously they're not the only company, but they have benefited greatly.
[00:54:38] But yeah, I started investing in Nvidia, I don't even know, like 2015 or something like that. Wow. Because that's around when I made the bet that the economy, Wall Street, had no concept of AI. I talked to enough people and realized nobody had a clue what was happening. And so I just started investing in all the companies that I thought would be at the forefront.
[00:54:56] The companies [00:55:00] that would do very well once everyone else realized what was gonna happen. And so, yeah, it's been a good run.
[00:55:07] Mike Kaput: And congrats to Jensen Huang, who I think was also photographed or videoed doing shots with other semiconductor CEOs. Yeah, it was awesome. And someone tweeted like, does that look like a man who's having a bad quarter?
[00:55:20] And the answer is no, 'cause he was having a great time.
[00:55:24] Paul Roetzer: Jensen, I mean, the guy's awesome. When you think about the different CEOs at the center of this, that's the dude you just feel like you would love to sit around and have a coffee with. He just seems like such a fun person.
Yeah. And the thing that's amazing, I've heard him say he doesn't wear a watch because whatever he's doing right now is what matters, and you get that vibe from him. In any interview you ever watch, he is just in the moment. And as a CEO, it gives you the sort of feeling that if he can sit for a half hour uninterrupted with someone and give them his complete attention,
then I can do that. No matter how many things I think [00:56:00] I have going on, it's nowhere near as much as what he's got going on. And you never feel like he's in a race to get anywhere. Whoever he is talking to, you just feel like he is giving them a hundred percent of his focus. So I watch his interviews and I notice that every time. He just seems like a genuinely good person, the kind of person where you're like, I'm happy he's being successful.
[00:56:24] And I think he's the kind of person we want at the forefront of all of this, for the good of society and humanity.
[00:56:31] Mike Kaput: Yeah, I agree. I think he's someone we should probably be studying a little closer in terms of biographies or what he's doing. Right. Because, yeah, their success was not
[00:56:39] Paul Roetzer: inevitable.
Yeah. And, oh, what's the podcast we like to listen to that does those deep-dive stories, Acquired? Yeah, I think they did an Nvidia one, if I'm not mistaken. So let's go back and re-listen to that.
[00:56:54] Wharton AI Adoption Report
[00:56:54] Mike Kaput: All right, so our next topic is about some new data coming out. Three out of four companies are [00:57:00] already seeing positive returns from generative AI, and that's the headline finding from a new large-scale Wharton study that is tracking corporate AI adoption.
[00:57:09] So this is the third year they've done this survey to inform the report, and it found that 75% of business leaders report a positive ROI from their AI investments, and fewer than 5% say returns have been negative. They also found that 46% of leaders now use generative AI daily, which is up very sharply from last year.
[00:57:32] Another 82% use it at least weekly. The most common applications are data analysis, document summarization, and editing. Across industries, technology, finance, and professional services are furthest ahead with AI, according to this data, while retail and manufacturing are lagging.
[00:57:51] And interestingly, nearly 90% of companies plan to increase AI budgets over the next year, and six in 10 now [00:58:00] have a Chief AI Officer in place. So Paul, we can talk a little bit about the methodology of this report. It's largely a survey of a bunch of business leaders. Again, Wharton and Ethan Mollick, the AI experts behind this, along with a team of researchers.
[00:58:14] They put this out, I think they've done it three years in a row now. And I just have to say, this paints a way different picture than that MIT report that got all the headlines saying 95% of generative AI pilots are failing. What accounts for how different these two data sets are?
[00:58:31] Paul Roetzer: Legitimate research? Yeah. I mean, the 95% thing was always ridiculous. The thing that pisses me off is I still see that it's
[00:58:40] Mike Kaput: still, people are still citing it all over.
[00:58:42] Paul Roetzer: Oh, like breaking news, 95% of pilots getting no value. It's like, oh my God. We can stop with that survey already. So, no, this was done June to July of this year, so it's pretty fresh data: 800 or so responses, senior decision makers, [00:59:00] US-based enterprises.
[00:59:01] So yeah, like we've always talked about on the show, the first thing we do anytime we see data is go look at the methodology, and this one is legitimate. That's the problem with the MIT stuff: it's like, oh, it's from MIT, it must be legitimate, and people don't actually drill into how they figured it out.
[00:59:14] So yeah, it's good to start to see the success, but also this continued investment. You'd mentioned like 88% anticipate gen AI budget increases. Yeah. 62% expect increases of 10% or more. One third of gen AI technology budgets are being allocated to internal R&D, which I thought was interesting.
[00:59:33] That's an indication that many enterprises are building custom capabilities for the future. And then training, hiring, and rollout approaches are the key human capital aspects that need to be addressed to increase chances of success. That was kind of their forward-looking take, you know, as we go into 2026. So yeah, good stuff.
[00:59:47] good thing to check out. We always like putting a spotlight on some legitimate research.
[00:59:53] Mike Kaput: Yeah, and you know, anytime I read these, I always try to think about it through the lens of, okay, what do I do with this [01:00:00] information? It's interesting, but it can be abstract. As I read these stats and trends, one thing jumped out to me: recruiting talent with advanced gen AI technical skills and providing effective training programs were two of the biggest challenges for these leaders.
[01:00:17] A lack of training resources was in the top 10 barriers to using gen AI for the first time since they've been doing this. So my takeaway is that the problem is also a massive opportunity at an individual level, whether you're an agency consultant, knowledge worker, leader, whatever. It's not just that you can go offer courses and education; there's such a hunger for guidance.
[01:00:38] People are honestly so lost, or need clarification and translation of this stuff, that if you can be the person to, even just in an informal way, share what you are learning, whether it's publicly or within your company, I think that's a really big advantage. And I can almost guarantee you people will notice and it will lead to good things.
[01:00:57] Paul Roetzer: Yeah. And that was, you know, even for my Move Three [01:01:00] Seven Moment keynote at MAICON, that was my call to action: if you're in the room, if you're someone who's listening to this podcast, you're most likely ahead of other people.
[01:01:09] Mike Kaput: Yep.
[01:01:09] Paul Roetzer: And you gotta, you gotta pull them along.
[01:01:12] We all kind of have this responsibility to figure this stuff out and then find ways to help our peers get through it. Because there are a lot of people who are just afraid of this, or it feels abstract to them, so they just don't ever get started. Yeah. And the future isn't gonna go well for people who don't learn to embrace this stuff in a responsible way.
[01:01:32] So I think we all sort of have a responsibility to do everything we can to help people figure this out. And, you know, I hope we start to see more and more efforts like that. I've been encouraged; certainly our community is wonderful at that. With the 1,500 people at MAICON, you could just feel that everybody felt that same sort of responsibility.
[01:01:53] So yeah, I think it's gonna continue to be a barrier within companies, but we need those internal champions to help drive this forward. [01:02:00] Yeah.
[01:02:01] Nudify Apps and Public Figures Getting Deepfaked
[01:02:01] Mike Kaput: Alright, well this next topic is important, but not so fun. There is a wave of what they call nudify apps, and also just general deepfake instances and apps, that are triggering new alarm among public figures and AI ethicists as synthetic images and videos spread faster than platforms can remove them.
[01:02:20] So in recent weeks, multiple apps offering what they call AI undressing features have gone viral, sometimes in channels like Telegram and Discord. These are basically tools that can generate realistic nude images of anyone from a single photo. AI ethics researcher Rebecca Bultsma warned about this on LinkedIn; we will include the link to her post.
[01:02:40] She did a deep dive on this technology and said it is cheap, instant, and targeting real people, mostly women and teens, without consent. At the same time, some prominent scientists have recently become targets, not of nudify deepfakes, but of deepfakes nonetheless, impersonating them in videos. Dr. Michio Kaku said he has [01:03:00] been impersonated in fraudulent, unauthorized deepfake videos that spread false claims on YouTube and TikTok.
[01:03:05] He urged companies to police this better and respond to takedown requests faster. Physicist Brian Cox confirmed fake AI accounts of him had also appeared. He thanked YouTube for removing them, but called the long-term problem, quote, deeply concerning, especially in science and politics. And Neil deGrasse Tyson got ahead of it and deepfaked himself to make a point about how good this technology has become.
[01:03:27] He started a recent video on his YouTube channel with a deepfake version of himself saying that, after looking at all the evidence, he believed the Earth is flat, and then the real Neil deGrasse Tyson came in and corrected him. So Paul, this is not new, but it seems like an absolute disaster, personally.
[01:03:45] Like, how do you stop nudify apps? How do you stop deepfakes? Even if you ban them, which doesn't seem to be happening fast enough on the platforms, you can do this with open-source AI too.
[01:03:56] Paul Roetzer: Yeah, I don't think there's a good answer to that one. So Rebecca's post [01:04:00] is what had caught my attention.
[01:04:01] She had said she spent the last week pretending to be a teenager looking for deepfake apps to make some AI nudes, and that she learned some stuff she thinks you need to know about. And then she said she had found 85 sites in under an hour. This is actually where Grok was very helpful to her, because most of the other tools wouldn't even talk to her about it.
[01:04:19] So as she was trying to figure it out, Grok doesn't have these filters. And then she said she was sharing this for awareness that this is an issue. So, at a real high level, this is probably more of a platform distribution problem, meaning OpenAI, Google, Meta, whatever, they can put filters in to prevent people from doing these things by training these models.
[01:04:46] Unfortunately, the way you have to do this is you have to train them on nudes. And so some humans have to sit there and actually show them all of these things, and not just standard stuff, but horrific stuff, so that the models learn to identify [01:05:00] those things and extract them in an automated way.
[01:05:03] So, well, let me rephrase that: you are not going to stop AI models from being able to do these things. Yeah, because to your point, Mike, the smaller open-source models could do these things. You don't need a powerful model. So OpenAI, Google, whatever, they can put filters in so that their
standard consumer AI systems won't do these kinds of things. It'll block it from happening. But anybody could take an open-source model and train it to do this stuff, and it'll be there. And not just images. The videos too: imagine Sora-like capabilities where you can take someone, run it through the nudify app, extract the clothes, then upload it to a video tool and have it turned into a video of someone doing something
they obviously never did. It's horrendous. But this is the whole point of this conversation: we can't stop it. [01:06:00] We have to have awareness about it. Schools have to be aware this is a problem. Parents have to be aware this is a problem. Kids have to be aware this is a problem.
[01:06:07] It is part of society now, and I don't know how you would extract it, because there is demand for it, unfortunately, and so people will create it. Now, the deepfake stuff with the scientists, again, you could do the same thing with politicians and celebrities. I dunno, we've just entered a very different phase in society where the things we've always worried about being possible are now possible.
[01:06:36] And most of society still doesn't know it's a thing. They don't know that the models can do this stuff. And so when they see things online, they just assume it's real. Yep. So this goes back to that awareness, and sort of pulling your peers, your family, your friends along, and making sure everybody knows what's actually happening in the world.
[01:06:54] Yep.
[01:06:55] Google Labs Introduces AI Marketing Tool
[01:06:55] Mike Kaput: Alright, next up. Google Labs has launched a tool called Pomelli, which is [01:07:00] a new experimental AI marketing tool built to automate campaign creation while staying true to your brand's voice. The way Pomelli works is it analyzes a company's website to understand identity, tone, and audience, and then it generates custom marketing campaigns, like headlines, social posts, and ad copy.
[01:07:16] All designed to sound like they came from the brand itself. According to Google Labs, the tool's goal is to make scalable, on-brand content accessible to smaller teams that lack dedicated marketing resources. Now, Pomelli is currently available only in the US, Canada, Australia, and New Zealand. So Paul, I'd be curious to hear about your experiment with Pomelli.
[01:07:37] 'Cause I tried it and it immediately came up with a warning like, hey, there's high usage, it might not work. And I struggled; I couldn't even get it to start.
[01:07:47] Paul Roetzer: Okay. Luckily, I was probably doing it at like midnight Eastern time last night, right? Probably low usage. Marketers are safe.
[01:07:54] So my first takeaway is: marketers and creatives, don't worry, this is not [01:08:00] automating your job. When you go in, it's got these three steps. First it generates your business DNA: you give it a website and it goes and pulls logos and fonts and visual aesthetics, learns your tone of voice, and studies your brand values and all this stuff.
[01:08:12] So it's basically just going through the site in an automated way; imagine a little agent going through and learning about your brand. And I was like, oh, okay, this is cool. We used to do this when I owned a marketing agency: we would go through and learn the brand, and someone would probably spend three to five hours going through the website and figuring all this stuff out.
[01:08:28] So it goes and actually pulls all the images from the website and puts them into a little library that it can then use for creative, and it creates a color palette. It is kind of cool. So the first step was like, all right, five minutes in it's gotten to know my business DNA, and then it asked, what campaign do you wanna run?
[01:08:44] So I was like, okay. I just typed in grow our Artificial Intelligence Show podcast audience, and then it gave me like three options of campaign themes, I guess. Essential AI Insights Weekly was the one I went with. And then [01:09:00] it lets you edit the creative. So it created like four variations of, I guess these are digital ads.
[01:09:06] I don't know the intent here. And so then I changed the visual. I was like, all right, lemme see what the editing looks like. So I picked a different image where I was standing on stage, and it cut me off. The image it dropped in is like my right shoulder
[01:09:24] and then just a background from the stage. And I was like, okay, this obviously isn't very intelligent. It doesn't even know to focus on the human in the image. So, cool concept. Again, I know it's just an experiment, but from my very limited one-time test, I would not be running a second test.
[01:09:43] I would say it's at the point where it's not even worth trying to do this again. That doesn't mean it won't improve significantly if they put resources behind it. But I would say, if we go back to the Mercor conversation, they need to go hire like 50 creatives who've [01:10:00] actually done creative work, yeah,
[01:10:01] and do some fine-tuning on this model, because it is not there yet. Yeah. That's my overall take.
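For anyone curious what the "business DNA" step might look like in principle, here is a minimal, hypothetical Python sketch of a brand-signal extractor that pulls a site's title, description, declared theme color, and image URLs using the requests and BeautifulSoup libraries. This is only an illustration of the general idea; it is not how Pomelli actually works, and the URL shown is a placeholder.

```python
# Hypothetical sketch of a "business DNA" style extractor. Not Pomelli's actual
# implementation, just an illustration of pulling basic brand signals from a site.
import requests
from bs4 import BeautifulSoup

def extract_brand_signals(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Declared theme color, if the site sets one (a common meta tag).
    theme = soup.find("meta", attrs={"name": "theme-color"})

    # Image URLs a generator could reuse in a creative library.
    images = [img.get("src") for img in soup.find_all("img") if img.get("src")]

    # Page title and description as a rough proxy for voice and positioning.
    description = soup.find("meta", attrs={"name": "description"})

    return {
        "title": soup.title.string.strip() if soup.title and soup.title.string else None,
        "description": description.get("content") if description else None,
        "theme_color": theme.get("content") if theme else None,
        "image_urls": images[:20],  # cap the library for the sketch
    }

# Example (placeholder URL):
# print(extract_brand_signals("https://example.com"))
```

A real product would obviously layer far more on top of this, which is exactly the gap Paul describes between the concept and the current output.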
[01:10:07] Mike Kaput: And as one final note, if you're wondering where the name came from, keep wondering, because I did research. I thought it was a reference to something, but apparently, according to Perplexity, it is not a reference to any marketing thing or person or concept.
[01:10:20] It's just the name. So, cool. Alright Paul, well, that is another packed week in AI, and that is a wrap. Now I just wanna leave our audience with a couple quick reminders as we wrap the episode. First, go check out the link we'll include to the AI Pulse survey. We'd love to hear from you. We wanna learn more about your perspectives on AI.
[01:10:42] So go ahead and go to the link in the show notes that we'll provide and take that survey if you've got a minute. Also, I'd encourage you to subscribe to our newsletters we talked about on SmarterX.ai. You can sign up for the Exec AI Insider newsletter every Sunday from Paul with his take on where AI is and where it's going. And [01:11:00] then we also have our This Week in AI newsletter at marketingaiinstitute.com/newsletter. That includes all the stories we talked about today, plus everything we didn't get to on the podcast. And last but not least, we would love it if you could leave us a review, good, bad, ugly, whatever.
[01:11:16] We just want to hear your feedback. Please leave us a review on your podcast platform of choice. It really helps us improve the podcast and get into the headphones of more people. So, Paul, thanks again.
[01:11:28] Paul Roetzer: Thanks Mike. I'll see you back in the office later this week when I find my way back from these travels.
Mike Kaput: Sounds great. Looking forward to it.
[01:11:36] Paul Roetzer: All right, thanks everyone. Have a great week. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.AI to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI [01:12:00] Academy, and engaged in the Marketing AI Institute Slack community.
[01:12:03] Until next time, stay curious and explore ai.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.
