The Artificial Intelligence Show Blog

[The AI Show Episode 170]: How ChatGPT Is Used at Work, New GDPval Benchmark, AI “Workslop,” ChatGPT Pulse, Meta Vibes & More AI Economy Warnings

Written by Claire Prudhomme | Sep 30, 2025 12:15:00 PM

Never-before-seen research on how ChatGPT is actually used at work, a brand new evaluation framework to determine AI's impact on the economy, and the rise of "AI workslop"...

Needless to say, it's been a busy week. In this week’s episode of The Artificial Intelligence Show, Paul Roetzer and Mike Kaput break down everything going on in the world of AI, including the topics above and brand new AI releases like ChatGPT Pulse, Meta Vibes, and much, much more.

Listen or watch below, and keep scrolling for show notes and the transcript.

Listen Now

 

Watch the Video

Timestamps

00:00:00 — Intro

00:06:36 — ChatGPT Usage and Adoption at Work

00:16:03 — OpenAI GDPval Benchmark

00:30:43 — AI Workslop

00:40:40 — ChatGPT Pulse

00:48:00 — OpenAI-Nvidia Mega-Deal

00:54:23 — Meta Vibes

00:58:32 — ChatGPT Parental Controls

01:02:24 — Latest Updates on AI and Jobs

01:09:30 — Mercor Founder Interview

01:12:11 — AI Product and Funding Updates

Summary

How ChatGPT Is Being Used at Work

A new report from OpenAI details how ChatGPT has become the fastest-adopted enterprise technology in history.

The report is titled “ChatGPT usage and adoption patterns at work,” and it contains tons of data on how ChatGPT is evolving beyond a simple productivity tool and becoming a foundational “operating system” for daily work.

To get at this data, the report “combines findings from independent third party industry-wide studies with analysis done by OpenAI on usage of ChatGPT and ChatGPT Enterprise.”

Here are some of the data points that jumped out as we read through this:

Today, over a quarter of all U.S. workers (28%) use ChatGPT for their jobs. Adoption is highest among younger employees (ages 18-29 are twice as likely to use it as those over 50) and correlates strongly with education level (45% of workers with graduate degrees use it).

The Information Technology (IT) industry is the largest user, making up 27% of business weekly active users. Other heavy-adopting sectors include professional services, manufacturing, and finance. Healthcare has been slower to adopt, likely due to strict privacy and compliance rules.

In the first 90 days of use, four main tasks dominate: writing, research, programming, and analysis.

Most employees stick to core, accessible features like search and data analysis. More advanced capabilities, like deep research and custom instructions, are used primarily by technical "power users" in roles like R&D and engineering.

OpenAI Releases New "GDPval" Economic Benchmark for AI Performance

OpenAI just launched a new benchmark called GDPval, and it’s a big shift in how we measure AI’s real-world impact.

Instead of testing models on abstract logic puzzles or academic exams, GDPval looks at whether AI can actually do the kinds of work people get paid for. It spans 44 knowledge jobs, from mechanical engineers and financial analysts to nurses and real estate managers, all drawn from industries that drive the US economy.

Each task in GDPval was built by professionals with over a decade of experience. And the results? Some of today’s best models, including GPT-5 and Claude Opus, are already producing work that experts rate as equal to or better than human output nearly half the time, and doing it 100 times faster and cheaper.
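The "nearly half the time" figure comes from blind pairwise grading: experts compare an AI deliverable against a human expert's without knowing which is which, and mark it better, as good, or worse. As a rough illustration, here is a minimal Python sketch of how such a win/tie rate could be tallied. The function name and the sample verdicts are hypothetical, not OpenAI's actual grading code.

```python
from collections import Counter

def win_tie_rate(judgments):
    """Fraction of blind comparisons where the AI deliverable was rated
    'better' than or 'as good' as the human expert's (a win or a tie)."""
    counts = Counter(judgments)
    wins_and_ties = counts["better"] + counts["as_good"]
    return wins_and_ties / len(judgments)

# Hypothetical grader verdicts for one model across ten tasks.
judgments = ["better", "worse", "as_good", "better", "worse",
             "as_good", "worse", "better", "worse", "worse"]
print(win_tie_rate(judgments))  # 0.5
```

With three "better" and two "as good" verdicts out of ten, the rate lands at 0.5, matching the "equal to or better than human output nearly half the time" framing.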

How AI "Workslop" Is Hurting Business Productivity

Generative AI was supposed to supercharge productivity. Instead, many companies are grappling with something that researchers are calling workslop—AI-generated output that looks polished but actually creates more work.

Harvard Business Review recently published an article by the team at BetterUp Labs detailing their research showing that “Employees are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers.”

In a national survey run by BetterUp, four in ten employees said they’d run into this in the past month. Each instance of workslop takes nearly two hours to untangle. And people report that after receiving workslop, they see the sender as less capable, less trustworthy, even less intelligent.

Now, the problem here doesn’t seem to be AI’s actual capabilities.

AI can produce slides, reports, code, writing, strategies, etc. The issue is that too often, the content lacks context or accuracy, and many human knowledge workers aren’t adding any of their own thoughts, oversight, or synthesis to make the output actually valuable.

As a result, a teammate or manager receives something that is clearly just cut and pasted from an AI tool, and ends up rewriting, double-checking, or clarifying with others. What looked like efficiency on one end just pushes the effort downstream.

This episode is brought to you by AI Academy by SmarterX.

AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. You can get $100 off either an individual purchase or a membership by using code POD100 when you go to academy.smarterx.ai.

This week’s episode is also brought to you by MAICON.

This is our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types.

For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: Go talk to anybody who knows how to use ChatGPT and ask them how many hours a month they're saving. 20, 30, 50? Like, okay, well that's gonna transform the economy, right? If everyone else has that knowledge, then everything changes basically overnight. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.

[00:00:23] My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:45] Join us as we accelerate AI literacy for all.

[00:00:52] Welcome to episode 170 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording Monday, September [00:01:00] 29th at 10:30 AM Eastern time. I do expect quite a bit of news this week, so timestamps, as always, can be very important here. OpenAI in particular, Mike, has been dropping stuff left and right, not major model things, but I think preludes to the model stuff.

[00:01:18] I mean, literally every day last week it feels like there was like two or three announcements, either through tweets or actual posts on the website about new things. We're gonna talk about a lot of those things today, but they're definitely a prelude to what's coming, which Sam Altman indicated about a week or so ago.

[00:01:34] We talked about the tweet where he said lots of updates coming in September and October. So yeah, it's gonna, it's gonna be an interesting October. I think we'll get the next version of Gemini. It sounds like the next version of Anthropic's Claude, new models, probably Sora 2 maybe from OpenAI. I think a lot is coming in October.

[00:01:53] So sort of buckle up when it comes to that stuff. And some of it might start hitting this week. So we, we have a lot to [00:02:00] talk about, a lot of kind of macro level stuff, more things related to the economy, jobs, a new eval from OpenAI, which I'm really intrigued by, and a new product, ChatGPT Pulse.

[00:02:11] We'll talk about. There's, so, there's just a ton going on. And I think October is gonna just speed up here. So today's episode is brought to us by AI Academy by SmarterX. We've been talking a lot about AI Academy. It's where a lot of our internal time and resources are going, building out the courses and certificates to help people drive their personal and business AI transformation.

[00:02:32] So these include AI Fundamentals, Piloting AI, Scaling AI, our new AI for Industries and AI for Departments courses, and the Gen AI App Series, which drops a new review every Friday, and then our AI Academy Live. So the latest, well, there was a couple things that dropped last Friday, but one of the things that dropped is one of the new AI for Departments series courses, and this is our AI for Sales.

[00:02:55] So we have AI for Marketing, which is now live, that is a full course series with professional [00:03:00] certificate, and we now have AI for Sales. Mike is the instructor for that, created the series. So I was gonna, Mike, just, if you wanted to give kind of a little preview of what's involved in the AI for Sales course series.

[00:03:10] Mike Kaput: Yeah, for sure, Paul. This one was a real joy to record. We talk with so many salespeople and sales enablement organizations about how AI can make you more productive and performance-driven. So this course is really intentionally designed to take you step by step through a journey where, no matter how much you do or don't know about AI, you actually come away with.

[00:03:32] A real framework and plan for AI transformation in your job. So we kind of start at a high level and look at how AI is actually transforming the role of sales, thanks to the capabilities we now have available. And then we get into some really useful frameworks and systems and models for you to actually build your own kind of AI blueprint, both for your specific sales role and/or for your team, if you're managing a team.

[00:03:57] From there, I show people [00:04:00] exactly how I would go about vetting and finding technology for each of the use cases, problems, and areas of improvement where you can use AI in your role. And then we kind of end with, like, a really hands-on experience of using some of the more advanced AI tools out there.

[00:04:16] And the whole point of this is that you are going to come away as a sales professional with not only the skills and the knowledge, but a very clear blueprint of what to do next to actually reinvent your role and how you work using AI. And that comes with a professional certificate at the end as well, so you can actually showcase what you've learned and prove that you've gone through the exercises and acquired the skills that we teach.

[00:04:40] Paul Roetzer: That's awesome. Yeah, I'm looking forward to taking it. I actually haven't gone through it yet, but I'm excited for it. You can check that out, as well as all the other course series and certificates, at academy.smarterx.ai, and like all the other course series, you can buy them individually or as part of the AI Mastery membership.

[00:04:56] They're part of that annual membership program. Alright. And then [00:05:00] also brought to us by MAICON 2025, which is coming up very quickly. That is MAICON.ai, M-A-I-C-O-N dot ai. We've also been talking a lot about this. This is our sixth annual Marketing AI Conference, October 14th to the 16th in Cleveland. 98% of the agenda is set.

[00:05:19] We have one more final main stage session that we're working on finalizing, hopefully in the next day or so here, and then the complete agenda will be there. But you can go check out the dozens of speakers, dozens of breakout and main stage sessions, as well as the four optional workshops that are happening on October 14th.

[00:05:39] So go to MAICON.ai to learn more, and you can use POD100 for $100 off your ticket. We are still trending toward the 1,500-plus attendance, so we'd love to see you there in Cleveland, October 14th to the 16th. Okay, like I said, there was a lot. The economy stuff just seems like every week now, Mike. [00:06:00] Like it's, yeah, we're starting to get way more data every week.

[00:06:03] We're getting new reports about the impact. We had some really good research drop last week and, you know, as I was going through it, it was just like, you can just start to feel that the impact is coming, in part because of the velocity of the research around it, and that it's not just the AI labs, but it's actually the economists themselves.

[00:06:25] Lots of interesting stuff, and I think some of this is foundational research that we're just gonna see get better and better,   as we move forward. So let's, let's kick off and start talking about some of those, Mike. 

[00:06:36] ChatGPT Usage and Adoption at Work

[00:06:36] Mike Kaput: Sounds good, Paul. Our first big topic this week is a new report from OpenAI that details how ChatGPT has become the fastest-adopted enterprise technology in history.

[00:06:47] So this report is titled ChatGPT Usage and Adoption Patterns at Work, and it contains tons of data on how ChatGPT is evolving beyond a simple productivity tool and becoming a [00:07:00] foundational operating system for work. So to get at this data, the report, quote, combines findings from independent third party industry-wide studies with analysis done by OpenAI on the usage of ChatGPT and ChatGPT Enterprise.

[00:07:16] So they looked at some chats that were anonymized from users, as well as combining third-party studies into the data that they have provided in this report. So a couple key points that kind of jumped out as we read through this. First, today, over a quarter of all US workers, 28%, use ChatGPT for their jobs.

[00:07:34] Adoption is highest among younger employees; ages 18 to 29 are twice as likely to use it as those over 50, and usage correlates strongly with education level: 45% of workers with graduate degrees use it. The IT industry is the largest user, making up 27% of business weekly active users. Other heavy-adopting sectors include professional services, manufacturing, and finance.

[00:07:59] [00:08:00] Healthcare has been a bit slower to adopt, probably due to privacy and compliance rules. In the first 90 days of use, they found that four main tasks dominate: writing, research, programming, and analysis. So technical teams like engineering, IT, et cetera, are the heaviest users. They focus mostly on coding, research, and troubleshooting.

[00:08:20] Go-to-market teams like marketing, sales, and comms are also major adopters, using it for writing, creative ideation, and research. Interestingly, most employees actually stick to core, accessible features like search and data analysis. More advanced capabilities, like deep research and custom instructions, are primarily used by what they call technical, quote unquote, power users in roles like R&D and engineering.

[00:08:45] So Paul, we talked about similar research OpenAI did with a similar methodology on consumer ChatGPT usage back on episode 168. At that time, we noted it would be really nice to see some data on business usage. So [00:09:00] now we've got some. What jumped out to you here as particularly interesting or noteworthy?

[00:09:06] Paul Roetzer: I think, like I said, it's just the fact that we're now getting a baseline from ChatGPT, which has a far greater diversity of users by industry and use cases than Anthropic does. So I think we start to get a better representation overall of how these tools are really being used. I thought it was interesting, they did say, so the source, in terms of the percentage of usage by industry, they indicated that they did look at ChatGPT Free, Plus, and Pro users in the US. They segmented out the people that have professional email addresses and then they mapped those email domains to industries.

[00:09:39] Yeah, so there was some connecting of data points here. You had mentioned IT at 27%, which makes sense for sure. But professional, scientific, and technical services was 17%, finance and insurance was 6%, public admin 5%, healthcare and social assistance 5%. You had mentioned, you know, the challenges in terms of [00:10:00] privacy compliance rules around healthcare, but they're starting to see those, you know, areas like clinical documentation, administrative workflows, suggesting healthcare could become a hotbed for AI adoption, which makes total sense.

[00:10:12] And then, you know, we start to look at the thing you mentioned about the advanced features. This is what we've been talking about all the time. People know to go in there and give prompts and have it help with emails or summarizing meetings, like the obvious things. But in terms of using the reasoning models, going into deep research, custom instructions, building their own GPTs.

[00:10:32] Yeah. Again, if you're listening to this podcast every week, you're, you're in the AI bubble, and not the economic AI bubble. You're in the bubble of, yeah, obviously, like, we've been using custom GPTs for a year and a half now. Okay. That is not normal. Like, the average worker does not know how to build a custom GPT.

[00:10:51] They don't know what deep research is, and they don't know the difference between a reasoning model and a chatbot. Like, that's not normal. And so I think we have [00:11:00] to keep reminding ourselves that, for those of us that are sort of on the frontiers of this, always experimenting with the new stuff, wanting to hear what the new things are, that is abnormal in today's work environment.

[00:11:11] And you know, we often talk about the need for all of us who are in the know to help educate and pull along the other people that aren't. And we're gonna talk a little bit later on about the continued demand from the C-Suite for people who are AI literate. These are like actual words coming right from CEOs now.

[00:11:30] And so we all, being in this kind of AI bubble, have a responsibility to help convince other people that it's for the good of their own careers that they really start to understand this stuff at a deeper level. Build a custom GPT for them. Show them a deep research project. Like, do something to help pull your coworkers along, your family, your friends, especially that next generation.

[00:11:53] Like, really get them doing these things because, you know, we see from this research it is still early. And, you know, I think that [00:12:00] we're really starting to see, though, that it's starting to make a major impact within companies. It's gonna start to have a major impact within people's careers. And this research is only the beginning.

[00:12:09] In terms of where OpenAI is going, they have a vested interest in better understanding and utilization of the tools. And they're gonna start to really push the different levers they have internally to drive that adoption of the reasoning models, the agentic capabilities, things like that.

[00:12:28] Yeah. The one thing I will note, Mike, is what they said about what's next for work. When they were, you know, sharing this, they said, in the years ahead, AI will embed itself in nearly every workflow. As this happens, employees will spend less time performing tasks and more time supervising and shaping AI outputs. The cross-functional reach of ChatGPT means individuals will be able to take on tasks once spread across multiple departments.

[00:12:50] A product manager, for example, might use it to analyze customer feedback, test and refine a new feature, and draft the legal and marketing content needed [00:13:00] to bring it to market. This goes back to what we talked about in episode 168 about the generalist, and like, maybe you don't have to be an expert in these different departments.

[00:13:08] So, you know, I don't have to be a customer success person to, to bring value to the customer success role within the organization. And it's interesting, like after we talked about that generalist mentality, I actually had multiple people reach out to me after episode 168 saying, that was me. Like you were describing me.

[00:13:24] And I wanna do more of that. Like, how do I do that within my organization? How do I help them realize that I can create value in other areas? Because I know how to ask smart questions, I know how to use the tools to figure things out, and I can provide that layer of critical thinking that's needed to know, is it a good output or isn't it?

[00:13:40] So yeah, I just, I don't know. I think it's a really interesting direction. I'm happy we're starting to see more and more of this research.   you know, I think it helps those of us who are sort of on those frontiers to really process where we are and realize we're probably way further ahead than we give ourselves credit for.

[00:13:55] And now what can we do to bring everybody else with us? 

[00:13:58] Mike Kaput: Yeah, I love that last point [00:14:00] because the nature of being in the AI bubble really jumped out at me as I was reading this. And I just came away even more optimistic and excited than ever because there's opportunity here for everyone in the bubble with us listening.

[00:14:12] Yeah. You probably don't have to overcomplicate it. You don't have to learn every single tool. I would say just doubling and tripling down and getting more systematic about what we already know is working: GPTs, great prompting, deep research, et cetera. You can really just establish yourself as such a leader in your organization.

[00:14:31] Paul Roetzer: Yeah, and I agree. I mean, I think we've talked about that idea of not overcomplicating this. Just get really, really good at, you know, take your pick: Copilot, ChatGPT, Google Gemini, whatever it is. Just get really good at it. Understand the different features when new capabilities come out. You know, if you're an AI Mastery member with us, watch the product reviews that we drop, the Gen AI App Reviews, every Friday.

[00:14:54] Like, go learn real quickly how to do it, and then go integrate it, and then go teach someone. Like, you know, it seems [00:15:00] maybe simplistic, but make it a goal that every time you learn a new capability, you teach someone that capability. Try and pass that knowledge along in some way, if you're not, you know, creating your own courses or, you know, have a big online presence.

[00:15:13] Then, you know, teach someone in your company. Like, if you learn how to build GPTs or you learn how to do a deep research project, like, pass that knowledge along. It's contagious. Like, once people get it and they see the value, they're gonna wanna do it too. So, yeah, I don't know. I agree. Like we always say on this podcast, I choose to be optimistic.

[00:15:29] I get that there are many, many ways that this does not go well in the economy, in society. We talk about a lot of those things. We'll talk about some of those things today.   but at the end of the day, we choose to be optimistic about the potential here and trying to make sure that we do this in, in a responsible way.

[00:15:46] But yeah, I don't know how you can't be excited about the potential when, when you realize that the knowledge we all have and are gaining every day is like unparalleled in human history. Like how quickly we can learn things and adapt [00:16:00] and,   I don't know how you can't be excited about that. 

[00:16:03] OpenAI GDPval Benchmark

[00:16:03] Mike Kaput: Our next big topic this week is also OpenAI-related.

[00:16:07] OpenAI just launched a new benchmark called GDPval, and it's a big shift in how we measure AI's real-world impact. So instead of testing models on things like abstract logic puzzles or academic exams, GDPval looks at whether AI can actually do the kinds of work people get paid for. It's an evaluation framework that spans 44 knowledge work jobs, from mechanical engineers and financial analysts to nurses and real estate managers, all drawn from industries that drive the US economy.

[00:16:41] So each task in this evaluation that they're judging AI models on was built by subject matter experts with over a decade of experience on average. So they basically built out all these tasks on which they can evaluate AI's performance based on benchmarks and delivering real world results that you would [00:17:00] expect of a human.

[00:17:02] They found that some of today's best models, including GPT-5 and Claude Opus, are already producing work that experts in these evaluations, doing kind of blind evals, rated as equal to or better than human output nearly half the time. And AI was found to have done it a hundred times faster and cheaper.

[00:17:23] So Paul, what they're kind of doing here is building out the kind of MVP of this GDPval framework, which sounds a lot like the type of evaluation you've been saying that we need in AI for a while: a way to actually measure, like, is AI doing a good or a passable or even better-than-human job on real-world economic tasks?

[00:17:47] Is that what we're getting here? 

[00:17:49] Paul Roetzer: Yeah, the thing we've talked about for a while is that the IQ tests were saturated, meaning, like, we know that they're, you know, exceptional at basically everything. I mean, [00:18:00] GPT-4 was in, what, March of 2023? It was already on par with experts at basically every test you could give it.

[00:18:08] So there's only so much to be gained by increasing the IQ here. So what we really needed to understand was the implications on actual work people do, the tasks that are part of these jobs. This is why we, you know, we talk about JobsGPT, that I created last, what was it, summer of 2024. It was trying to look at real-world tasks at a job level.

[00:18:30] So we can see and feel this, like anytime we stop and look at, you know, a campaign we run, or just the 20 things we do in our roles each month, we can see it's happening. And we kept saying like, why aren't people doing the research to prove this out? Like, you have all these things like, well, it's not impacting GDP yet.

[00:18:47] So the economists weren't worried. It's like, well, that's a trailing indicator. Go talk to anybody who knows how to use ChatGPT and ask 'em how many hours a month they're saving. Like 20, 30, 50. Like, okay, well that's gonna transform the [00:19:00] economy, right? Like if you, if everyone else has that knowledge, then everything changes basically overnight.

[00:19:06] So yeah, I think this is what was missing: these kinds of more real-world assessments that look at the reality of what's happening and how AI can be integrated into these different roles. So, to dig a little bit deeper into this, they looked at 1,320 specialized tasks. Every task is based on real work, such as legal briefs, engineering blueprints, customer support conversations, nursing care plans.

[00:19:29] So they're getting into, like, the real stuff. They said, unlike traditional benchmarks, GDPval tasks are not simple text prompts. They come with reference files and context, and the expected deliverables span documents, slides, diagrams, spreadsheets, multimedia. The realism makes GDPval a more realistic test of how models might support professionals.

[00:19:48] It is limited to one-shot evaluations. That's important to know. So they're giving it this one chance to do it, so it doesn't capture cases where a model would need to build context or improve through multiple drafts. [00:20:00] They do say that future versions will extend to more interactive workflows and context-rich tasks to better reflect the complexity of real-world knowledge work.

[00:20:09] Nine industries were initially chosen, based on those contributing over 5% of U.S. GDP, as determined by data from the Federal Reserve Bank of St. Louis. They then selected five occupations within each industry that contribute most to total wages and compensation and are predominantly knowledge work occupations.

[00:20:31] Now this one, Mike, was interesting because, one, it goes back to episode 149, where I was talking about the total addressable market of salaries, and how venture capital firms and AI labs were basically gonna look at every industry, and then they would total up the salaries of all the people working in that profession.

[00:20:47] And that would tell them, where do we target first? So in essence, they're doing that, but they're just doing it initially through research, in terms of where this impact is happening. But you can see the parallel where you would then build the [00:21:00] technology, the agents, things that would affect that.

[00:21:03] And this is actually also very similar to how we look at our AI Academy roadmap: we look at the industries that are most impacted, that have the most potential. That's where the AI for Industries course roadmap comes from, which ones we do first. It's why we did healthcare and professional services right off the top.

[00:21:18] It'll also get into how we build out the AI for Careers series as we start to think about this. So they define predominantly knowledge work as at least 60% of the component tasks being classified as not involving physical work. They used the experts not only to create the tasks, but they also used them as the graders. So this is a really interesting way that they did GDPval.

[00:21:39] So, they said they rely on expert graders, a group of experienced professionals from the same occupations represented in the dataset. These graders blindly compare model-generated deliverables with those produced by task writers, not knowing which is AI versus human-generated, and then offer critiques and rankings.

[00:21:58] So it's a blind study. Yeah. [00:22:00] Which again, Mike, goes back to the thing we talked about: the way you know when the impact is gonna be real is when you take a campaign or a project or a collection of tasks and you give it to the AI and you give it to a human at different levels, say an average human, an above-average human at their job.

[00:22:15] And then you blindly have to decide which is the better output. And it's when you start picking the AI more often than you pick the human output that we've got problems. And their research shows we're pretty much already there, right? So the graders blindly compare the outputs and offer the rankings. Graders then rank the human and AI deliverables and classify each as better than, as good as, or worse than one another.

[00:22:39] Then another interesting thing is they also built an automated grader. So now think about the impact of this, unlike the educational system. So they built an automated grader, which is an AI system, trained to estimate how human experts would judge a given deliverable. So yeah, they learn how humans grade things, then they build an AI to predict how a human would [00:23:00] grade something.

[00:23:00] And now we have an automated grader that functions like a human. In other words, instead of running a full expert review every time, the automated grader can quickly predict which output people would likely prefer. They're releasing this tool through evals.openai.com as an experimental research service.

[00:23:19] That's fascinating. Hmm. And so overall results here. So they found that today's best frontier models are already approaching the quality of work produced by industry experts. I'm gonna repeat that. The best frontier models are already approaching the quality of work produced by industry experts. To test this, they ran blind evaluations where industry experts compared deliverables from several models.

[00:23:42] So, you know, some people listening to this might be like, oh, well, it's OpenAI, they're just hyping this. Okay? They didn't just test OpenAI models, they ran it against Claude Opus 4.1, Gemini 2.5 Pro, Grok 4, as well as their internal models. And guess what? They admitted that Opus 4.1 was actually better [00:24:00] than GPT-5 at things.

[00:24:01] So this is not some, like, hype fluff thing. This is as good as I've seen of real-world stuff, where they're actually saying, listen, we did it this way. When we look at wins and ties, and how the human assessment experts assessed it, Claude Opus 4.1 is actually the best-performing model, excelling at aesthetics like document formatting and slide layout, where GPT-5 excelled on accuracy.

[00:24:26] Here's another one that's kind of like, you know, a leader here. In addition, we found that frontier models can complete GDPval tasks roughly 100 times faster and 100 times cheaper than industry experts. Okay, I'm gonna repeat that one, one more time, too. And keep in mind: not the average human worker, industry experts at what they do.

[00:24:50] So we found that frontier models today, and keep in mind, we're gonna get new ones in the next two weeks, so today's current models, can complete tasks roughly [00:25:00] 100 times faster and 100 times cheaper than industry experts. We expect that trying a task with a model before trying it with a human would save time and money, obviously.

[00:25:10] And then they did say, our goal is to keep everyone on the up elevator of AI by democratizing access to these tools, supporting workers through change, and building systems that reward broad contribution. Okay, so then two other quick notes, Mike, related to this. So, a tweet I saw this weekend, this was like Saturday or Sunday, from Julian Schrittwieser.

[00:25:32] He's a computer scientist and AI researcher at Anthropic. Now, that's significant on its own. However, Julian also played key roles at Google DeepMind on the development of AlphaGo, AlphaZero, and MuZero. This guy is a major player when it comes to this. He's highly respected within the AI research community.

[00:25:52] He does not publish often. So the last post I saw from him on his Twitter account was like July. So very rarely is he on social media. I couldn't even find his [00:26:00] profile on LinkedIn. I don't even know if he has one. So this is someone who keeps low key intentionally. So he wrote an article and tweeted it out.

[00:26:07] Got a ton of attention from people over the weekend. He wrote, people notice that while AI can now write programs, design websites, et cetera. It still often makes mistakes or goes in a wrong direction, and then they somehow jump to the conclusion that AI will never be able to do these tasks at human levels or will only have a minor impact when just a few years ago having AI do these things was complete science fiction.

[00:26:30] Or they see two consecutive model releases and don't notice much difference in their conversations, and they conclude AI is plateauing and scaling is over. It kind of goes on, and he concludes: given consistent trends of exponential performance improvements over many years and across many industries, it would be extremely surprising if these improvements suddenly stopped.

[00:26:50] Instead, even a relatively conservative extrapolation of these trends suggests that 2026 will be a pivotal year for [00:27:00] widespread integration of AI into the economy. His bullet points: models will be able to autonomously work for full days, eight working hours, by mid-2026. At least one model will match the performance of human experts across many industries

[00:27:15] before the end of 2026. By the end of 2027, models will frequently outperform experts on many tasks. It may sound overly simplistic, but making predictions by extrapolating straight lines on graphs is likely to give you a better model of the future than most experts, even better than most actual domain experts.
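Schrittwieser's "straight lines on graphs" point is, mechanically, just a least-squares line fit on a log scale. Here's a small sketch using made-up capability scores purely for illustration (none of these numbers come from his post):

```python
import math

def fit_exponential_trend(years, values):
    """Least-squares line fit to log(values) vs. years, i.e. the
    'straight line on a (log) graph' that gets extrapolated."""
    logs = [math.log(v) for v in values]
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(logs) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs))
             / sum((x - mean_x) ** 2 for x in years))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def extrapolate(slope, intercept, year):
    """Project the fitted trend forward to a future year."""
    return math.exp(intercept + slope * year)

# Made-up scores that double each year (illustrative only)
years = [2022, 2023, 2024, 2025]
scores = [1.0, 2.0, 4.0, 8.0]
slope, intercept = fit_exponential_trend(years, scores)
print(round(extrapolate(slope, intercept, 2026), 2))  # 16.0 if the trend holds
```

The whole argument hinges on whether the trend holds, which is exactly the debate he's weighing in on; the fit itself is trivial.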

[00:27:36] And then, Mike, one other note, we'll put this PDF in. We didn't have time to, like, go deep dive into this one, but the authors are Erik Brynjolfsson, Stanford University; Ajay Agrawal, University of Toronto, who we've talked about before, he's one of the authors of Prediction Machines; and Anton Korinek, University of Virginia, Department of Economics.

[00:27:56] They published a new paper, a research agenda for the economics of [00:28:00] transformative AI. I would suggest reading it, but the basic premise is they define transformative AI as artificial intelligence that enables a sustained increase in total factor productivity growth of at least three to five x historical averages.

[00:28:14] What they're predicting, or what they're recommending, is we take on these grand challenges of research that looks at the impact of transformative AI on economic growth, innovation, income distribution, decision-making power, geoeconomics, information flows, safety risks, human wellbeing, and transition dynamics.

[00:28:34] What they're basically laying out is, they have these, it's like 25 questions across these nine different areas, saying: we're seeing it happening as economists, as people who've studied this for our whole lives. We are seeing it going, and we don't have answers to any of these things, and we have to go faster.

[00:28:49] So I think, again, when you go back to Episode 168, we said, when you zoom out, you just see all this economic research happening, all these new studies, and the trend just [00:29:00] continues into this week. And I think, between GDPval, what we see coming out from this group, and now seeing AI researchers talking about these economic studies, it is time now where we have to really start trying to figure this stuff out.

[00:29:16] Mike Kaput: Wow. I mean, I wanna print out that paragraph from Schrittwieser. Yeah. And paste it everywhere. The "people notice that while AI can now write programs, et cetera, it still often makes mistakes, and they jump to the conclusion AI will never be able to do these tasks." I would really encourage even the people listening, because I fall into this trap myself.

[00:29:37] You have blind spots. We are getting too comfortable with AI progress, to the point where it doesn't seem magical anymore, and that's leading us to not fairly appreciate exactly what's happening here.

[00:29:51] Paul Roetzer: Yeah. And I think I see that in my personal life. Like there's plenty, I've mentioned this before, like you just talk with, you know,   parents at my kids' school or people I play basketball with on Thursday nights.[00:30:00] 

[00:30:00] You watch the conversation change over time. So they all know I'm involved in AI in some capacity, and so I'll get the, "Oh yeah, we got Copilot at work. It doesn't really do anything. I don't know, I use it for, like, meeting notes." And then, like, six months goes by and you get, "Hey man, I used this new thing in Copilot, and it was amazing, and now it's, like, saving me three hours a day."

[00:30:18] And you just start to see the light bulb moments go off, where these conversations shift, where they've now had their moment, where they realize, oh my gosh, it's really helpful when it's personalized to me and it does something I do every day. And then you get the, "Oh, wait a second. It's getting really good at all the things I do."

[00:30:35] Like what is this gonna mean to me in my job? So you watch these sort of like different levels of understanding of AI as they emerge. 

[00:30:43] AI Workslop

[00:30:43] Mike Kaput: All right. Our third big topic this week. Generative AI is supposed to supercharge productivity, and oftentimes does, but many companies are also grappling with something that some researchers are now calling workslop, which is AI-generated output that looks polished but actually [00:31:00] creates more work.

[00:31:01] So Harvard Business Review recently published some research from the team at BetterUp Labs, who wrote an article in HBR detailing their work, which shows that employees are using AI tools to create low-effort, passable-looking work that ends up creating more work for their coworkers.

[00:31:20] They call this AI workslop. In this national survey they ran, four in 10 employees said they'd run into this in the past month. Each instance of workslop takes nearly two hours to untangle. And people report that after receiving workslop from a colleague or a teammate, they see the sender as less capable, less trustworthy, and even less intelligent.

[00:31:44] The problem in this phenomenon does not seem to be AI's actual capabilities. AI can produce really great outputs like slides, reports, code, strategies, et cetera. The issue is that too often the initial output lacks context or accuracy, and many [00:32:00] human knowledge workers end up not adding their own thoughts, their own oversight, their own expertise to make the output actually valuable.

[00:32:08] So as a result, teammates, managers, they're receiving stuff that is clearly just copy and pasted from an AI tool, and as a result, they end up rewriting, double checking or clarifying the content, which doesn't make it very efficient at all. What looked like the employee being really efficient, just creates more work for other people.

[00:32:26] Now Paul, we've kind of touched a little bit on this issue of workslop in a few different ways over the last, you know, dozen episodes or so, but it was really cool to hear someone put a name to it. I know I've seen it. Maybe talk to me about your experiences here. Have you encountered AI workslop a lot?

[00:32:44] Like, what should we be doing about it? How does that play out in your experience?

[00:32:47] Paul Roetzer: "You've been workslopped" is like my favorite phrase of 2025. I think I need that on t-shirts. Yeah, I think this is gonna become a major problem as more and more [00:33:00] people understand, like, reasoning models and deep research capabilities.

[00:33:03] And you start to be able to get more and more of what appears to be high-quality work at the push of a button, but it has no critical thinking tied to it, no context behind it, no effort to format the work, because, you know, they're just copying and pasting straight out of ChatGPT.

[00:33:23] And so it becomes obvious to you. I dunno, I really like their overall approach here, defining it as this content that masquerades as good work but lacks the substance to meaningfully advance a given task. Their research said that 40% report having received workslop in the last month, and on average, 15.4% of the content employees receive at work qualifies as this.
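As a back-of-the-envelope exercise, the survey numbers above can be turned into a per-employee cost estimate. The workslop share and untangling time come from the figures cited in the episode; the items-per-month volume and hourly rate are assumptions you'd swap for your own:

```python
def workslop_cost_per_employee(items_received_per_month,
                               workslop_share=0.154,   # 15.4% from the survey
                               hours_to_untangle=2.0,  # "nearly two hours" each
                               hourly_cost=40.0):      # assumed loaded rate, USD
    """Rough monthly cost of workslop per employee.

    Multiplies the number of workslop instances received by the time
    spent untangling each one and an assumed hourly labor cost.
    """
    instances = items_received_per_month * workslop_share
    return instances * hours_to_untangle * hourly_cost

# e.g. an employee who receives 30 substantive work items a month
print(round(workslop_cost_per_employee(30), 2))
```

Even at these modest assumptions, the hidden cost lands in the hundreds of dollars per employee per month, which is why the "it looked efficient" framing is misleading.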

[00:33:47] So, yeah, it's crazy. Yeah. I think what it highlights for me, Mike, is a theme we talk about all the time: responsible AI training, and approaching this as a change management thing, not a tech [00:34:00] thing. So I think in a lot of organizations, they just go buy the tech, you know, your Copilot licenses, Gemini licenses, ChatGPT licenses, whatever, and just give it to everybody.

[00:34:09] HR, finance, legal, marketing, sales, service, like, whatever. Everybody just gets the tools, and you don't think about actually teaching them how to use the tools in a responsible way. And honestly, like, part of the problem might be if IT is the one leading the charge to get these licenses and then provide them to people.

[00:34:27] They're not the ones who would usually think about, like, the business side of change management and responsible use of the tool by a marketer, and, like, ways they might misuse it, things like that. So I think that until organizations take a more holistic approach and think about the change management needed to make sure people actually understand this, and how it can amplify their capabilities and unlock new levels of productivity and efficiency and innovation, but without, like, just shortcutting everything and thinking you're just gonna copy and paste.

[00:34:56] It's like we talk about in education, where people are gonna use [00:35:00] things just to do stuff faster, and they're not gonna care about the output. And that's what happens when students aren't properly trained how to use these things. So the same stuff happens at work. I think it's probably, like, early in terms of this, because most people don't fully understand how to use the tools.

[00:35:17] But as AI literacy improves and more, you know, workers really understand what these things are capable of, and they realize, I can do the report that usually takes me five hours in five minutes, and maybe nobody who reads this thing is gonna even realize that I didn't read this thing, I think it's gonna become prevalent. It's gonna be everywhere.

[00:35:35] And it does, like, create problems. Like they talked about: workslop uniquely uses machines to offload cognitive work to another human being. When coworkers receive workslop, they're often required to take on the burden of decoding the content, inferring missed or

[00:35:52] false context, and then a cascade of effortful and complex decision-making processes may follow, including rework and uncomfortable exchanges with colleagues. [00:36:00] Like, it's weird when you have to say to somebody, hey, did you even read the thing you sent me? Like, this looks great, and it's, like, seven pages, and I got three paragraphs in, and I'm pretty sure you didn't actually edit this, because you would've known to not put this in there.

[00:36:20] It does make for very uncomfortable conversations, and it can be a little bit embarrassing for the people who are producing the workslop. So, yeah, I posted this to the internal chat for our team and I said, friends don't let friends produce workslop. Like, consider this a policy for SmarterX.

[00:36:39] Yeah. Like, we are, we are not going to be doing this.   and we've kind of already had it. I just again, didn't have the name for it. 

[00:36:46] Mike Kaput: Yeah. The one point they made, where people were actually emotionally responding, saying, look, after you receive workslop, you see the sender as less capable, less trustworthy, less intelligent.

[00:36:58] That's a hundred percent my [00:37:00] experience. I'm not even that upset at the amount of time, though that upsets me. It's more the lack of consideration. Because, and I've said this before in one way or another, I think in the age we're in now, you just have to say, like, if your deliverable is just something ChatGPT produced.

[00:37:18] What point do you have in the process? I can go do that myself if I have to review this anyway, and I could probably do it better than you. Your job is what comes after ChatGPT. 

[00:37:28] Paul Roetzer: I'll say a quick side note, Mike, I thought I found this so fascinating and I was actually like explaining this to my kids and my wife last night.

[00:37:34] We were driving home from somewhere. So, I wrote this editorial about workslop in my exec AI newsletter on Sunday, and then I actually posted that editorial to LinkedIn on Sunday.


[00:37:45] Paul Roetzer: And when I did it, I went into Google Gemini and I said, can you create an image for a LinkedIn post for me?

[00:37:50] Here's the post. And I copied and pasted the editorial into it. It then created this great, realistic-looking thing with, like, content all over the screen, all over the [00:38:00] desk. And then it had a stamp at the bottom that said "Workslop detected," except it misspelled detected: D-E-T-E-N-C-T-E-D. And I was like, that's interesting.

[00:38:10] So I noticed it before I used the image, that it was misspelled. And I thought, I wonder if Gemini did that on purpose. And so I chose to leave the image with the misspelling, sort of as, like, a meta-level thing, kind of clever, whether it meant it or not. Well, someone actually screenshotted the image and in a comment on LinkedIn said, "worth noting," meaning, like, hey, it misspelled this.

[00:38:33] And I replied and said, yeah, I actually thought the misspelling was appropriate here. Also wondered if Gemini did it intentionally. Someone else replied, doubtful, but I want to believe it. So I thought, huh, this is interesting. So I went back into the thread in Gemini, the same thread that created the image, and I said, did you purposely misspell detected as "detencted"?

[00:38:53] It said, yes, that was an intentional misspelling to reflect the idea of workslop: a subtle error that might be [00:39:00] overlooked, much like the low-effort, unreviewed AI-generated content the article describes. It's meant to be a meta-commentary on the concept itself. I thought,

[00:39:08] Mike Kaput: holy shit. 

[00:39:10] Paul Roetzer: Now, we have no way of knowing, since I prompted it that it misspelled something.

[00:39:15] Did it only then realize it and provide the rationale after the fact? My guess is no. I think it actually intentionally did it. And that's wild. And so, like, I was explaining this to my wife, and, you know, she listens to me talking about AI all the time, and it was one of those where she actually kind of stopped and looked at me.

[00:39:32] She's like, that's crazy. And I'm like, I know. 

[00:39:36] Mike Kaput: I'll be honest, even with what we've talked about, about AI's ability to reason and persuade and deceive, even if it did make up a BS excuse, that's impressive to me as well, actually. 

[00:39:47] Paul Roetzer: Yeah, I mean, it really made me think about the models differently.

[00:39:53] I mean, six months ago these things couldn't spell accurately. Well, now we're [00:40:00] using Nano Banana. We know we have a model that is very accurate in its spelling. So six months ago, I wouldn't even have thought it would intentionally do that. But now it doesn't misspell stuff.

[00:40:11] So now it's like, oh man, that's crazy. Like, it actually knows how to spell it correctly now, and yet it chose not to. Go look at the LinkedIn post, like, we'll put it in the post, you can see it, and I actually put a screenshot of the chat. You can see this is legit, straight up. It's just a screenshot of the conversation in Gemini, not copied and pasted or anything.

[00:40:28] It's like there's the screenshot of the conversation. I just thought that was wild. 

[00:40:34] Mike Kaput: It's getting weirder and weirder. I love it. Yeah. Alright, let's dive into some rapid fire topics this week. 

[00:40:40] ChatGPT Pulse

[00:40:40] Mike Kaput: First up, OpenAI has released Pulse, which is, quote, "a new experience where ChatGPT proactively does research to deliver personalized updates based on your chats, feedback, and connected apps like your calendar."

[00:40:54] So here's how this works: once a day, Pulse, right within ChatGPT (it's just a feature you toggle on), [00:41:00] scans your chat history. It scans connected apps like Gmail and Google Calendar, and also your feedback on past pulses once you've started to rate them. And it generates a curated feed of updates in the form of these, like, short visual cards that are tailored to what matters most in your life.

[00:41:16] So this could be anything from, like, dinner ideas, travel tips, next steps on a side project, a reminder to buy a gift for a birthday on your calendar. So just all of this kind of personalized, almost-like-newsfeed for you, once a day, that you can guide by curating it, requesting topics. You can start telling it, like, hey, there are certain topics I'm more interested in.

[00:41:37] You can give it thumbs up or down to kind of rate the suggestions and train it. Now, for right now, this Pulse, kind of, you know, daily curated newsfeed, is mobile-only and limited to Pro users, which is the highest-tier plan, so 200 bucks a month. But OpenAI does plan to release it to other users once they learn and improve on this early version.[00:42:00]
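The thumbs-up/thumbs-down feedback loop described above can be sketched as a simple topic-weighting scheme. This is purely illustrative: OpenAI hasn't published how Pulse actually weights feedback, and the topics and learning rate here are made up:

```python
def update_topic_weights(weights, topic, thumbs_up, lr=0.2):
    """Nudge a topic's weight up or down based on thumbs feedback.
    lr is an assumed learning rate, not anything Pulse documents."""
    w = dict(weights)
    w[topic] = w.get(topic, 1.0) * (1 + lr if thumbs_up else 1 - lr)
    return w

def pick_pulse_topics(weights, n=3):
    """Select the top-n topics for the next day's cards."""
    return [t for t, _ in sorted(weights.items(), key=lambda kv: -kv[1])][:n]

# Topics borrowed from the examples discussed in the episode
weights = {"Cleveland sports": 1.0, "parenting": 1.0,
           "space": 1.0, "dinner ideas": 1.0}
weights = update_topic_weights(weights, "Cleveland sports", thumbs_up=True)
weights = update_topic_weights(weights, "dinner ideas", thumbs_up=False)
print(pick_pulse_topics(weights, n=2))  # ['Cleveland sports', 'parenting']
```

The real system presumably folds in chat history and connected-app signals too; the point is just that a few ratings quickly reshape what surfaces tomorrow.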

[00:42:00] Now, Paul, this seems interesting for a couple reasons. It might kind of hint at where we're headed. I mean, OpenAI outright says that Pulse is the first step to ChatGPT becoming a proactive assistant in your life. It's not just responding to your prompts, it's proactively finding what you might be interested in on any given day.

[00:42:20] Others have also speculated this is how OpenAI starts putting ads into your ChatGPT experience. What's your take on this? 

[00:42:28] Paul Roetzer: I've been experimenting with it since Friday, and I agree. I mean, it is very obviously a gateway to more personalized experiences, which Sam Altman and OpenAI have said.

[00:42:37] That was where they were going: personalized experiences. It definitely could be a very lucrative ad model. You can see that the instant you start using it, because it is truly building a knowledge graph that predicts interests, behaviors, buying habits, fears, concerns, political leanings. Like, everything. The more comfortable you are talking to ChatGPT, the more it learns from you.

[00:42:59] [00:43:00] And then the key is, it doesn't just use chat history. When you go into it, you get continually prompted. So as you're scrolling through, it'll say, "I'm curious about," you click it, and then you just say, you know, like, the origins of the universe. It says, "I'd like to learn more about," and you could say, I don't know, like, the upcoming election cycle.

[00:43:21] And then it says, keep me updated on, and you can go and say the Cleveland Guardians who are in the playoffs. By the way, I'm really excited for Tuesday.   so I did this, like I went through, I said, you know, keep me updated on Cleveland sports. I'd like to learn more about, and I picked like a parenting topic I'm curious about.

[00:43:35] And I put in something related to space. So I actually went through and did it to see what happens the next day. And it does, it's then like curated the next day based on the things you indicated these curiosities about. And then it'll also recommend topics. So as you're scrolling down, I got one that says, do any of these areas around health and family interest you?

[00:43:53] And it's like parenting strategies, sleep science insights. So it's recommending in a very intelligent way, related [00:44:00] topics. Just like when you go into Google search and you search something and hey, you might also want to like look at this, these and this. So, and the other thing is it wasn't there today when I logged in.

[00:44:09] So they're obviously experimenting with stuff. But when I logged in on Sunday, today's pulse was above the chat field. So they're, they're using the dominant real estate of the mobile app for the pulse, like they're testing to see how people will use it. You can see the business side of this is tremendous.

[00:44:26] So if you imagine that you've connected ChatGPT to your CRM system, your analytic system, your calendar, your email, whatever, all the other apps are. And you can just say, Hey, gimme a pulse. Each morning I want to see like what's the latest on, you know, our lead flow or, or the website traffic from yesterday, or let me know how much traffic's coming from ChatGPT.

[00:44:45] Like, send me a report each morning. You can start to see all that stuff. And then in their post announcing Pulse, they actually said "what's next" as the subhead. And as you mentioned, they position it as a step toward a new paradigm of interacting with AI by [00:45:00] combining conversation, memory, and connected apps, moving away from answering questions to a proactive assistant that works on your behalf over time. They envision AI systems that can research, plan, and take helpful actions for you

[00:45:11] Mike Kaput: Yeah, 

[00:45:11] Paul Roetzer: based on your direction.

[00:45:12] So that progress happens even when you are not asking for it. They've said Pulse introduces the future in its simplest form, personalized research and timely updates that appear regularly to keep you informed. Soon Pulse will be able to connect with more of the apps you use. So updates capture a more complete picture of your context.

[00:45:31] They're also exploring ways for Pulse to deliver relevant work at the right moments throughout the day. Whether it's a quick check before a meeting, a reminder to revisit a draft or a resource that appears right when you need it. As we expand to more apps and richer actions, ChatGPT will evolve from something you consult into, something that quietly accelerates the work and ideas that matter to you.

[00:45:51] If you need to, go back and re-listen to those 150 words I just read, or go read it yourself. That is a roadmap for [00:46:00] where they're going. It is likely a roadmap for where Microsoft will go with Copilot, and where Google will go with Gemini. I've called this, like, omni-intelligence.

[00:46:08] It's like, it's just everywhere, all around you at all times, infused within all the apps you use. Like, intelligence is going to just be everywhere, in everything. And it's going to be proactive. It's gonna know everything, and it's gonna surface actions and outputs maybe even before you know you need them. Think about how Amazon tries to predict purchase behavior, to, like, say, hey, you haven't ordered this in three weeks.

[00:46:31] Like, it's probably time to order it. Imagine that, but intelligence applied to everything in your life, your personal life, your business life. The more it knows, the more stuff it's connected to, the more proactive it can be. And agentic: it can be working while you're sleeping. I mean, really, this is it.

[00:46:46] This is, this is what it looks like. This is what AI as an operating system is. And they're telling you point blank what they're gonna build.

[00:46:53] Mike Kaput: Hmm. Okay. From a commercial perspective, it's not hard to imagine suddenly you're going to get [00:47:00] pulses that are extremely helpful, like, hey, I noticed you were having trouble with that house project.

[00:47:04] Here's a local person to do it. Or, I noticed your company's struggling with your CRM. Here are, like, three other options you might want. I found it, they're

[00:47:11] Paul Roetzer: Highly rated. I went and searched. Yeah. I mean, honestly, like, this is one of those, Mike, where you kind of wanna just get a bottle of scotch and go sit on the patio, and we can just, like, game

[00:47:22] Mike Kaput: out.

[00:47:22] Paul Roetzer: Yeah, just play this out. Like, it would be a lot of fun to think of all the ways this could go, but yeah, again, like they're telling you the roadmap. Now what, you know, the opportunity is how do you get there first, how do you prepare for when this is reality in six months, 12 months, whenever, you know, however long it takes.

[00:47:39] But, like, be ready for this as a business, as an entrepreneur, as a professional, as a business leader. This is what's happening. Yeah. And they're all gonna be going in this direction.

[00:47:49] Mike Kaput: Yeah. My guess is this is really important for SEOs to keep an eye on because this is how you get found moving forward, most likely would be my guess.

[00:47:56] Paul Roetzer: And advertising, like you said, pay to play. Yep. Yep. 

[00:48:00] OpenAI-Nvidia Mega-Deal

[00:48:00] Mike Kaput: All right. Next up. OpenAI and Nvidia have announced what they call a letter of intent for a landmark strategic partnership to deploy at least 10 gigawatts of Nvidia systems in AI data centers. To support this partnership, Nvidia intends to invest up to $100 billion in OpenAI as the new Nvidia systems are deployed.

[00:48:22] And by systems here, we of course mean chips, GPUs, and this scale is pretty outta control. Like, 10 gigawatts equals the output of about 10 nuclear reactors. That would translate roughly into four to five million GPUs, which is about everything Nvidia will ship this year, and twice last year's volume. And this funding is apparently structured in stages.

[00:48:43] Nvidia will invest progressively as each gigawatt of capacity comes online. So the first $10 billion tranche arrives with the initial build, which is scheduled for the second half of 2026. And then each investment is made at OpenAI's, you know, current, as-of-right-now [00:49:00] valuation, which is pegged at about half a trillion dollars.

[00:49:03] So they estimate the cost to build this out is gonna be between 50 and $60 billion per gigawatt, with about $35 billion of that flowing directly to Nvidia hardware. So that means this full 10 gigawatt plan could top half a trillion dollars. So Paul, in true AI fashion, the numbers here are pretty staggering.

[00:49:23] Like, talk me through how realistic this is. What do we need to be paying attention to here? 
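The deal math above is easy to sanity-check. In this sketch, the cost and Nvidia-share figures are the ones cited in the episode; the GPUs-per-gigawatt number is an assumed midpoint of the four-to-five-million estimate, not a published spec:

```python
def buildout_estimate(gigawatts=10,
                      cost_per_gw_low=50e9, cost_per_gw_high=60e9,
                      nvidia_share_per_gw=35e9,
                      gpus_per_gw=450_000):  # assumed midpoint, ~4.5M over 10 GW
    """Rough arithmetic behind the OpenAI-Nvidia figures discussed above."""
    return {
        "total_cost_low_usd": gigawatts * cost_per_gw_low,
        "total_cost_high_usd": gigawatts * cost_per_gw_high,
        "to_nvidia_usd": gigawatts * nvidia_share_per_gw,
        "approx_gpus": gigawatts * gpus_per_gw,
    }

est = buildout_estimate()
print(est["total_cost_low_usd"])   # $500B at the low end
print(est["to_nvidia_usd"])        # $350B flowing to Nvidia hardware
```

Which is how "50 to 60 billion per gigawatt" becomes "could top half a trillion dollars" for the full 10-gigawatt plan, with Nvidia's up-to-$100-billion investment covering only a fraction of it.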

[00:49:28] Paul Roetzer: It is hard to just envision the scope of this. It's massive. I was listening to a couple of podcasts last week where they were talking about this, one with Jensen Huang. It is just a tremendous amount of energy.

[00:49:40] It's the thing that like I get caught up in, because it's hard to wrap your brain around how much this is. It's the fact that they expect the demand for intelligence to be so massive. Yeah. That they need to build out like this, you know, over the next five years.   meaning [00:50:00] they're not just thinking about the training of the models, but it's the use of that intelligence, again in every device and every piece of software.

[00:50:07] And not just chats. Chats don't take a ton of energy. It's the reasoning models, the agentic processes, the things like we're seeing with Pulse, where it goes and does work while, you know, you're sleeping. I saw a fascinating tweet this morning where somebody said it was interesting: they had a normal ChatGPT conversation and it didn't provide the response they needed.

[00:50:30] And then they woke up the next day and looked at their Pulse, and Pulse provided a better answer than he got from a straight conversation with ChatGPT. An AI researcher at OpenAI replied and said, our best models work overnight, when demand is low. So in other words, they have more powerful versions than what you're using during the day.

[00:50:51] And so when demand is low for the intelligence, at nighttime, OpenAI's models are running in the background doing this work. [00:51:00] Yeah. And so imagine this, like, almost infinite demand for intelligence that requires this scale of build-out, to where it is truly, like, over the next decade, there's no end in sight.

[00:51:11] When you listen to these interviews with people like Jensen, you realize how early we are in all of this, and how intensive the demand is going to become when reasoning and video and image generation and agents and all these things are just literally everywhere.

[00:51:28] Mike Kaput: It feels like OpenAI could be moving toward at least some option where you just pay as you go for usage, because there are people that would pay thousands of dollars, tens of thousands, per month.

[00:51:39] Paul Roetzer: Yeah. I mean, again, I sort of use the analogy, when we're thinking about intelligence as a utility, I don't know that that's not a possible monetization approach, right? In the future, you're just paying for compute. Yeah. The level of intelligence you use, and if you use more because it's an intensive time for your business or your [00:52:00] personal life, fine. But there's every incentive in the world for these labs to not be stuck on a

[00:52:06] $20-a-month plan, especially as, you know, you start looking at, well, I want to have the video generation capability, I want to have more reasoning, and I wanna let my AI think for a week on this process. Like, I'm gonna build a business: go away for seven days and do all the things. Build the marketing plans, build the brand, build the sales structure, build the customer success program.

[00:52:25] Like, as I'm saying this out loud, that is the vision that Ilya and others have had: these swarms of agents working together to just go do stuff. And now that you actually look at what they're saying about Pulse and some of these other things, I think that's actually probably where they're looking.

[00:52:43] When you hear this like one person building a billion dollar company, this is it. Like you're just gonna get a single chat interface. You're gonna say, Hey, go build this thing, here's what I'm envisioning building. And it just goes and it does it and it works with all these other agents to do these things.

[00:52:55] And that's a massive pull on compute. It's a massive pull on energy. But, [00:53:00] yeah, I mean, metering it, so you're paying by the intelligence unit, the token, is a very logical thing. It's like how APIs work, basically.
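To make the "paying by the intelligence unit" idea concrete, here's a minimal sketch of token-metered billing. The tier names and per-token prices are hypothetical illustrations, not any lab's actual rates.

```python
# Hypothetical per-1,000-token prices for different kinds of work.
# These numbers are made up for illustration only.
PRICE_PER_1K_TOKENS = {
    "standard": 0.002,    # everyday chat
    "reasoning": 0.02,    # extended "thinking" models
    "video": 0.10,        # video generation
}

def metered_cost(usage: dict) -> float:
    """Compute a monthly bill in dollars from tokens consumed per tier."""
    return sum(
        tokens / 1000 * PRICE_PER_1K_TOKENS[tier]
        for tier, tokens in usage.items()
    )

# A light month vs. an intensive month for the same user:
light = metered_cost({"standard": 500_000})
heavy = metered_cost({"standard": 2_000_000, "reasoning": 5_000_000})
print(light, heavy)  # roughly 1.0 and 104.0 dollars
```

The point of the sketch is the shape of the model: your bill scales with how much intelligence you actually consume, exactly like API pricing works today, rather than being capped at a flat $20-a-month plan.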

[00:53:08] Mike Kaput: Yeah, exactly. And it's interesting, with all the other commentary comparing the cost of this to the cost of salaries of employees, it's like there's a pretty high threshold of what someone would be willing to pay to have a swarm of agents working for them.

[00:53:21] That's still a fraction of what it would take to build that out with people. 

[00:53:24] Paul Roetzer: Yeah, and again, I think the people who get it, get it. It's what we've been saying for months, Mike, probably two years now. When you give a CEO or a board or a CFO the choice and say, hey, you can spend $50,000 a year for a customer support agent, or you can pay a human who needs to sleep and eat and take PTO $120,000, I'm sorry, but the publicly traded companies, the VC-backed companies, the PE firms,

[00:53:57] they're taking the $50,000 agent [00:54:00] that works seven days a week. I don't know. And so I keep coming back to, some of this just seems like such an inevitability, and all these counterarguments may hold up for the time being. And then when you look back through the rearview mirror, it's gonna seem absurd that people were making the counterarguments.

[00:54:16] It just seems too logical. I don't know.

[00:54:20] Mike Kaput: Yeah, no, I agree. All right. 

[00:54:23] Meta Vibes

[00:54:23] Mike Kaput: Next up, Meta has launched Vibes, a new short-form video feed built entirely around AI-generated clips. It lives inside the Meta AI app and on meta.ai. Basically, it gives users a TikTok-style scroll, but every video is machine-made.

[00:54:39] So you can generate videos from scratch, you can edit your own, or you can grab something from the feed and change the music, style, or visuals. And the finished clips can be shared to the Vibes feed or cross-posted to Instagram and Facebook. Meta has even partnered with Midjourney and Black Forest Labs for the early creative tools that power Vibes.

[00:54:58] And it's also building its own [00:55:00] in-house models. So Meta pitches Vibes as a way to spark creativity, and Zuckerberg framed it as the next stage of AI-powered media. But the rollout's been met with some skepticism, because users flooded his announcement with comments calling it things like AI slop and asking who actually wanted an endless scroll of synthetic content.

[00:55:20] So Paul, I'm super biased. I don't know where to begin. I cannot personally say I'm thrilled with this, but I guess maybe I could be sold. 

[00:55:28] Paul Roetzer: No, I It's absurd, honestly, like it's such a bad look for meta,   at, at this, in my opinion. Yeah. Yeah. And mainly because Alexander Wang, who's their new, you know, $15 billion Chief AI officer is what they,   the number, if you haven't been following along, they, they paid 15 billion roughly for scale AI to license the technology and hire Alexander Wang to become the chief officer at Meta.

[00:55:52] And, yep, some of his lieutenants from Scale AI came along. So he joins Meta in June. They [00:56:00] hire, you know, spend tens, well, no, hundreds of millions, probably billions, on hiring a bunch of AI researchers to pursue the idea of superintelligence. Zuckerberg posts an article on July 30th, Personal Superintelligence is the title, and it's all about their pursuit of superintelligence for the good of people, and, you know, perhaps even more important,

[00:56:21] that superintelligence has the potential to begin a new era of personal empowerment, where people have greater agency to improve the world in the directions they choose. Like, this is our vision. And then the first thing we get from Alexandr Wang is a feed of AI-generated videos that aren't even theirs.

[00:56:39] Like, they licensed from Midjourney to do this. Yeah. So they announced on August 22nd, Alexandr Wang did in a tweet, that they've partnered with Midjourney and they're really excited to show what they're gonna build together. Well, here you go. This is what they built together, because he acknowledged in the tweet about the launch of Vibes that this is actually Midjourney and Black Forest Labs technology, not Meta's [00:57:00] technology.

[00:57:00] And it's just a preview of where Meta AI is headed. I'm sorry, if this is superintelligence, or a step towards superintelligence, I don't get it. And, yeah, I mean, who wants this? Who needs this? How is this toward the good of humanity, and personal intelligence that benefits everyone, just getting sucked into scrolling nonstop through AI-generated stuff?

[00:57:24] Like, I don't know. It just makes no sense. It's just a really bad look. And I think, reputationally, you could see all the AI researchers in the AI community going, what are you doing? Like, I could see some of these high-priced AI researchers maybe second-guessing their decision at the moment.

[00:57:42] Right? And this is not to say Meta isn't gonna do amazing things, and they aren't going to make a ton of progress and do incredible research. But Meta had a reputation for doing great research under Yann LeCun and putting out incredible stuff, and the fact that the Chief AI Officer is [00:58:00] tweeting this stuff out, like, I can't imagine he wanted to do it.

[00:58:03] Like, I don't know. I'm sure he's self-aware enough to know this was gonna look horrible and be ridiculed by the AI research community, as one of his first acts as Chief AI Officer, publishing this. Like, I don't know. I don't get it. I don't understand what they're doing.

[00:58:21]   Again, doesn't mean they won't do other great stuff. It's just a really weird look, you know? 

[00:58:27] Mike Kaput: Yeah, yeah. We'll see. We will keep an eye on how this develops. All right. 

[00:58:32] ChatGPT Parental Controls

[00:58:32] Mike Kaput: Next up, OpenAI has introduced parental controls for ChatGPT, allowing parents and teens to link accounts, which, quote, gives parents tools to adjust certain features, set time limits, and add safeguards, according to OpenAI.

[00:58:45] So these include things like safety notifications, where ChatGPT will notify parents if, quote, our system and trained reviewers detect possible signs of serious safety risk in conversations. Though, as part of this feature, parents cannot access their teens' [00:59:00] conversations. Parents can also set quiet hours when ChatGPT cannot be used, and they can turn off different features like memory, voice mode, and image generation.

[00:59:10] To enable this, you can go into parental controls in ChatGPT settings or at chatgpt.com/parental-controls. From there, you invite your teen to connect by email or text. Once your teen accepts and accounts are linked, you can start adjusting these different settings. Now, teens can unlink accounts at any time, and the parent is notified if the accounts are no longer connected.

[00:59:32] So Paul, OpenAI doesn't say it here, but it doesn't seem like a stretch to say this is a response to several high-profile, tragic incidents involving teens using ChatGPT, would be my guess.

[00:59:45] Paul Roetzer: Yeah. And I think we talked, maybe on Episode 168, about how OpenAI is also trying to predict age based on chats.

[00:59:53] Yeah. Like, they're trying to decide if you're 18 or not. This is a pretty standard feature. Any parents who've managed their kids online, you're familiar with this [01:00:00] stuff, and any parents with younger kids, you will be, you should be very soon. Apple, Google, Microsoft, Minecraft, Roblox, they all enable these different capabilities. For Google,

[01:00:10] the way theirs works is you can connect the account. So, for example, my daughter turned 13 earlier this year, and she got the option to remove her account from parental management at age 13. So, like, I could manage her Gmail account basically, and then she could choose to remove it, if I understand it correctly.

[01:00:30] Microsoft's, like, good luck, parents. You need a whole custom GPT just to help you manage all the stuff in Microsoft, because they own Minecraft and there's like 17 logins to get to it. It is mind-numbing. And every time I have to go back in there, it's like, oh my God. It's like doing my taxes, trying to figure out how to get back in there.

[01:00:48] Apple, I manage my kids' stuff through Apple a lot, so we have the Apple family plan. So this is really important. I think I had a conversation with my kids on the same ride home last night. It was only a 30-minute ride, but I feel like we talked about all this stuff. I actually explained to them the importance of understanding that people in their age group, their friends, may actually become connected to their chatbots, and they had to be aware that this is a thing, and

[01:01:16] companionship, and, you know, if kids are lonely, they may come to rely on these things. And my son was like, well, why is that? And my daughter was like, well, because kids are committing suicide when they talk to them. And I was like, oh, okay. Well, I didn't know she was aware of this. So we had a very honest conversation around kids and AI, and I said, you're gonna grow up in a generation where it's very normal for kids to talk to their AI assistants, like as a friend and a companion and stuff.

[01:01:43] So, yeah, I mean, I'm glad OpenAI is being very proactive with ChatGPT here. I think kids' safety is critical. And just a reminder, we have a free tool, I built it last year, KidSafe GPT for Parents. We'll put a link in the show notes, but it's meant to help [01:02:00] parents understand risks of these different tools,

[01:02:02] AI chatbots, different games online. It helps you talk to your kids about these tough topics, and then it'll help you create guidelines for different applications if you want. It's at SmarterX.ai, where it lives under the tools section, but we'll put the link in the show notes. Again, I just created it as a parent because

[01:02:17] I thought it was a needed thing. So, you know, hopefully that can help some people if you're trying to figure this stuff out.

[01:02:24] Latest Updates on AI and Jobs

[01:02:24] Mike Kaput: All right, next up, we're seeing some more weekly signals that executives are not mincing words when it comes to AI's impact on jobs. The biggest one this week is Walmart's CEO,

[01:02:33] Doug McMillon, who was just quoted by the Wall Street Journal as saying, quote, AI is going to change literally every job. And the retail giant's already mapping out how roles will shift, shrink, or be reinvented. According to the Journal, headcount at Walmart is actually expected to stay flat over the next three years despite growth plans, as AI eliminates or transforms roles.

[01:02:54] Meanwhile, Accenture is taking kind of a similar approach. The consulting giant says it's quote-unquote [01:03:00] exiting employees that can't retrain and reinvesting in AI-literate talent. However, unlike Walmart, even with layoffs, they still expect their headcount to grow next year, and their AI and data specialists have nearly doubled since 2023.

[01:03:14] And finally, SAP's CFO was very direct in an interview with Business Insider, basically saying, look, quote, I will be brutal. I also say this internally: for SAP and any other software company, AI is a great catalyst. It can either be great or a catastrophe. It'll be great if you do it well, if you're able to implement it and do it faster than others. If you are left behind, you will have a problem for sure.

[01:03:36] And he said, simply, AI lets SAP produce more software with fewer people, and they're restructuring accordingly. So Paul, you had posted on LinkedIn that the Walmart story in particular was further evidence that AI will likely have a far greater impact on jobs than most economists, business leaders, and politicians think, or are willing to admit.

[01:03:53] Can you maybe elaborate on that a little bit? 

[01:03:56] Paul Roetzer: Well, we've been hearing this from a lot of tech CEOs. That's been pretty standard for [01:04:00] the last four or five months. But we're talking about the largest private employer in the United States. They have over 2.2 million associates worldwide and 1.6 million in the U.S.

[01:04:10] So this is not insignificant. This is a major shift in how CEOs are talking about this. At Accenture, Julie Sweet, the CEO, said in the article you were mentioning: our number one strategy is upskilling, but we are exiting on a compressed timeline people for whom reskilling, based on our experience, is not a viable path for the skills we need.

[01:04:33] The firm employed more than 779,000 people at the end of August, down from 791,000 three months earlier. So 12,000, from doing quick math. Right. So 12,000 down in a quarter. And they're not done, it doesn't sound like. Now, they did say they do still plan to hire, but if we can't reskill these people, then they're just out, and we're not messing around.

[01:04:58] And that's exactly what the [01:05:00] SAP executive said that you mentioned, Mike. It's like, we're gonna be brutal here. We are going to cut. So, yeah, again, just further evidence. I don't know how else to say this. There are still people who, I was actually listening to an interview this morning with Reid Hoffman, the co-founder of LinkedIn, and he's like, yeah, it's gonna be kind of messy for a while, but it's gonna end up being okay.

[01:05:26] And I think the frustration I'm having right now is all these tech CEOs and AI leaders in particular who keep saying it's gonna be okay. Maybe in their world. Like, yeah, maybe it works out for them. Honestly, man, I'm trying to think of who the host was. As I'm saying this, I'm gonna figure it out.

[01:05:48] So the interview I was watching is Moonshots by Peter Diamandis. And Peter's great, Moonshots is an awesome podcast, fascinating interviews, obviously a great thinker [01:06:00] and leader. So nothing against Peter. But there's actually a point in the interview where he's got these different brilliant guys, a room full of guys including Reid Hoffman,

[01:06:09] and they're like, yeah, you know, there's gonna be a period where there's a transition. Transition is like the new go-to word to describe the really painful part where nobody has jobs. It's gonna be a transition in the economy, and it might take a little while. No joke. And again, I don't wanna overly critique someone for one phrase. As someone who does this podcast live with no retakes every week,

[01:06:30] sometimes you just say something and it probably didn't come out really well. Yeah. But I'm not kidding you, he basically said, during the transition period, we'll have dinners together and we'll go to museums, and we'll just do all these wonderful things with our transitional period.

[01:06:46] It's like, no, you are a billionaire, right? You get to have nice dinners with your buddies and talk about the transition period while people working at Walmart don't have jobs and can't pay their bills. That is [01:07:00] not how it's gonna work for most of the economy. We're not all billionaires.

[01:07:03] And I just thought, man, the disconnect that I think exists between the elite class that is building the models and the reality of the worker, the average working family. And then they did, in that podcast, actually get into, well, what happens if you get a bunch of young males who don't have jobs?

[01:07:23] Then they revolt, and we have a revolt against the society and politics. And they're like, yeah, that could happen by next year. And then they just move on to the next topic. I was like, oh, let's come back to the part about how by next spring we have a bunch of young males who don't have jobs and are really pissed.

[01:07:41] And they're the ones saying young males, like, this is not my language. Yeah, right. So they're saying, yeah, and they're gonna act out. And I'm like, oh my God. So again, I fear that there's just this disconnect from the reality of the average worker, not just American, but really internationally, they were talking a lot about India,

[01:07:56] which was, you know, their big key topic. And [01:08:00] the transition period just working out and being part of this, it's like, that's a very loaded term for people who need jobs for fulfillment and basic living necessities.

[01:08:13] Mike Kaput: Yeah. Let's also recall that the Great Depression was a period of transition, and there was a decade of almost global disaster that followed, and a little conflict called World War II, which had a lot of those angry, out-of-work young men in a few different countries.

[01:08:28] So, I mean, it's like pretty important to consider that transition isn't always fun. 

[01:08:34] Paul Roetzer: Yeah. And again, we take a very optimistic approach to all this, and I do. I think there's a really good chance it works out really well. But I also increasingly feel like the conversation is being had by people that maybe aren't

[01:08:50] seeing the reality of what transition really means. Yeah. And the pain that comes along with this sort of process, where we're not gonna [01:09:00] know the timeline of how long that transition period goes. But going back to the earlier talk from today, at least we're studying it now. Yeah. At least it's a conversation people are having.

[01:09:10] And that was the part that wasn't happening six months ago, which is why I was so frustrated and felt such a sense of urgency to push for this conversation to move forward on the podcast. This is sort of inevitable, why aren't we talking about it? Well, at least we're now talking about it. Yes. And hopefully that keeps going in a really positive direction.

[01:09:27] Mike Kaput: Yeah. It's better than silence on the topic. Yeah. 

[01:09:30] Mercor Founder Interview

[01:09:30] Mike Kaput: Alright, so another topic this week: a new episode of the 20VC podcast, hosted by Harry Stebbings, brings us an interview with Brendan Foody, the 22-year-old co-founder and CEO of Mercor. Now, you may recognize that name because we talked about it on Episode 168,

[01:09:45] because in just 17 months, he says, his company scaled from 1 million to 500 million in revenue by providing a marketplace to find, vet, and supply elite talent used to train AI models. And they claim that growth rate makes [01:10:00] Mercor the fastest-growing company in history. So this conversation's pretty important to note, not just because of this crazy story, but because Mercor sits at the center of how AI is getting built.

[01:10:11] Because for years, companies like Scale AI, for instance, relied on armies of low-paid crowd workers to label data. What they actually need now, though, Foody argues, and what his whole business is built on, is not just the sheer volume of data labeling but precision: data curated by elite human experts.

[01:10:30] Mercor built the marketplace that connects, you know, high-end consultants, doctors, lawyers, top-tier engineers, with AI labs directly so they can help them build this really high-quality data. So Paul, like we said, we talked about Mercor previously. Their founder, as we noted, said he envisioned a new economy powered by humans training AI agents.

[01:10:52] He's building a marketplace to do just that. Definitely seems like he's a voice worth listening to. What jumped out to you in this [01:11:00] interview? The fact that he is 22? That's actually insane. Well, he did mention at one point, I forget exactly, like, this May or whatever,

[01:11:09] my graduating class graduates. Yeah. And he's, like, hungry. That's wild.

[01:11:14] Paul Roetzer: Yeah. So just two quick notes on this one. One is, if you liked the Episode 168 conversation around where this is going, this whole idea of a reinforcement learning economy where these experts are being paid to train these models, and then at some point they're not needed anymore

[01:11:29] 'cause now the models are at expert level, you can see that thread even throughout this episode. Like, we started talking about this with these evals, and now we're coming back around. So if you're intrigued by this, go listen to it. It's a really fascinating interview. And my second takeaway is, you should totally subscribe to the 20VC podcast.

[01:11:45] Harry's incredible. Yeah. He's one of those ones I have alerts for, anytime he tweets or posts something. Great insights, you know, from a VC perspective, but just exceptional at doing the interviews. He oftentimes goes off [01:12:00] script and just goes down a fascinating path.

[01:12:01] And he is hilarious. So I'm a big fan of his podcast. People always ask me what podcasts I listen to, and that would be one of the ones I regularly check out.

[01:12:11] AI Product and Funding Updates

[01:12:11] Mike Kaput: Alright, I'm going to wrap us up, Paul, with a few AI product and funding updates to close out the week. So first up, a new AI product built by some of the key players behind Google's NotebookLM launched this past week.

[01:12:26] It's called Huxe, and co-founder Raiza Martin used to lead the NotebookLM team at Google. Huxe is actually a personal AI audio companion app that generates interactive, highly personalized content for users. So it can connect to your email, calendars, and preferred topics to create daily audio briefings.

[01:12:44] It can do live audio streams on specific subjects and instant AI generated podcasts tailored to each user's needs and interests. 

[01:12:52] Paul Roetzer: And a quick note on that one: she did tweet out that apparently what they're building is very similar to ChatGPT Pulse. Yes. And everyone sent it [01:13:00] to her right away. So we'll drop a link in the show notes if you wanna see it.

[01:13:02] I thought it was a really great response, you know, kudos to her. She was just like, yes, it is basically a copy of what we were building, but, like, let's go. Building a startup is hard. Being an entrepreneur is hard, especially in the AI space. So, yeah, I thought she just gracefully responded to it, and it was the kind of response you'd wanna see from a leader.

[01:13:22] Mike Kaput: OpenAI is also rolling out a suite of new features for business users. These include team projects for organizing chats and files, and expertise routing to automatically direct questions to the right custom GPT in an organization. They're also now allowing apps like Google Drive and Salesforce to be used directly within ChatGPT for enterprise.

[01:13:43] Paul Roetzer: Team projects

[01:13:44] might be an underrated side note in today's podcast. That might be one of those ones we look back on in a few months and be like, ah, that was actually a pretty big deal. Yeah. I haven't started testing it internally, but the idea of being able to share projects that are collections of chats and threads, that's really interesting.

[01:13:58] And probably moving toward their [01:14:00] workforce productivity platform approach, where they take on Microsoft Office and Google Workspace. Yeah.

[01:14:09] Mike Kaput: Apple is internally testing a new ChatGPT-like app to help develop its next-generation AI-powered Siri. This app is code-named Siri S and allows employees to experiment with the company's new large language models.

[01:14:20] It lets Apple refine the underlying technology and its conversational abilities before the revamped Siri is eventually released to the public. Axiom, a startup founded by former Palantir employees, has raised a new funding round at a $1.8 billion valuation. The company is building AI-powered software for military and intelligence agencies to help them analyze complex data.

[01:14:45] Microsoft is adding more model choice to 365 Copilot, starting with Anthropic's Claude Sonnet. This basically gives users an alternative to OpenAI's models for generating content and doing analysis within apps like Word. [01:15:00] It's rolling out across the Microsoft 365 suite, and you can choose your preferred AI model for different tasks.

[01:15:07] Now, in some other Microsoft news, they've also launched Analyst in Microsoft 365 Copilot. This is a new AI agent designed for data analysis. It connects to Excel or SharePoint and performs complex multi-step analysis to generate charts, visualizations, and insights. So basically it allows you to ask complex questions about your data and talk to it in natural language.

[01:15:31] Last but not least, Spotify has updated its policies to crack down on infringing AI-generated music. They announced they will remove content that uses AI to clone an artist's voice or likeness without their permission. So Paul, thanks again for breaking down a really, really packed week in AI.

[01:15:49] Paul Roetzer: Yeah, Mike, real quick, 'cause I was just scanning Twitter to see if any models got released while we were doing this, and yeah,

[01:15:55] just to build on this. I don't see any models yet, but good. The Information: Sam [01:16:00] Altman wants 250 gigawatts of power. Oh, wow. So, real quick, last week, OpenAI and Nvidia announced a plan to work together on 10 gigawatts, which we talked about. Inside OpenAI, Sam Altman floated an even more staggering number: 250 gigawatts of compute by 2033, roughly one third of the peak power consumption of the entire U.S.

[01:16:25] Or think of it this way: a typical nuclear power plant generates around one gigawatt of power. Altman's target would mean the equivalent of 250 plants just to support his own company's AI. Based on today's cost to build a one-gigawatt facility, around $50 billion, 250 of them implies a cost of $12.5 trillion.
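The back-of-envelope math here is easy to sanity-check. A quick sketch, where the ~750 GW U.S. peak load figure is our assumption chosen to match the "one third" framing above:

```python
# Sanity check on the figures discussed above.
GW_TARGET = 250            # Altman's reported 2033 compute target, in gigawatts
COST_PER_GW_USD = 50e9     # ~$50 billion per one-gigawatt facility
US_PEAK_GW = 750           # assumed rough US peak power demand, in gigawatts

total_cost = GW_TARGET * COST_PER_GW_USD   # total build-out cost in dollars
share_of_us_peak = GW_TARGET / US_PEAK_GW  # fraction of US peak demand

print(f"${total_cost / 1e12:.1f} trillion")       # $12.5 trillion
print(f"{share_of_us_peak:.0%} of US peak load")  # 33% of US peak load
```

So 250 facilities at $50 billion each lands right on the $12.5 trillion figure, and 250 GW against a ~750 GW peak is where the "one third of U.S. peak power" comparison comes from.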

[01:16:45] So we heard last year that Sam was looking for about $7 trillion in funding, and then they kind of played that off when it came out. Now you can understand how we could get to trillions when it comes to this stuff. So, my gosh, again, the numbers are so hard to comprehend, but [01:17:00] it helps put in context how

[01:17:03] prevalent these leaders believe intelligence will be and what the demand will be. And I guess that's the thing we all get to ponder for the next week. I will say, there's also a special edition of the podcast this Thursday. So what is that, Mike? October 2nd, I think we decided. Yeah. So we'll have another AI Answers episode.

[01:17:22] So if you aren't familiar, we do AI Answers, like, twice a month, where we respond to questions from our free Intro to AI and Scaling AI classes. So we will be recording another AI Answers session tomorrow. That'll be Cathy and I, and then that'll drop on Thursday. So you get two episodes this week.

[01:17:36] And then we'll be back for our regular weekly episode next week. So Mike, thanks as always, and I think we're in for another busy week. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.AI to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI [01:18:00] blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the Marketing AI Institute Slack community.

[01:18:10] Until next time, stay curious and explore ai.