OpenAI just entered the browser wars. And it's already getting messy.
This week, Paul and Mike talk everything ChatGPT Atlas, OpenAI's agentic AI browser, including its glaring security issues.
This week's episode also covers a new letter advocating a pause on the development of superintelligence signed by an eclectic group of celebrities and public figures.
Not to mention, we talk about Amazon's robot-driven layoffs, an Ohio bill that aims to ban human-AI marriages, new data on how many teens have romantic relationships with AI (hint: it's more than you'd expect), and much more.
Listen or watch below—and see below for show notes and the transcript.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
00:04:56 — ChatGPT Atlas Release
- OpenAI browser teaser - The Verge
- OpenAI Atlas browser analysis - Every
- ChatGPT Atlas official page
- X Post from Ben Goodger on Atlas
- OpenAI Takes On Google With AI-Powered ChatGPT Web Browser - Bloomberg
- Simon Willison on ChatGPT Atlas - Simon Willison Blog
- OpenAI Announces Browser-Based AI Agent for “Vibe Lifing” - Futurism
00:16:17 — ChatGPT Atlas Security Concerns
- Cybersecurity experts warn OpenAI’s ChatGPT Atlas is vulnerable to attacks that could turn it against a user—revealing sensitive data, downloading malware, or worse - Fortune
- Unseeable prompt injections in screenshots: more vulnerabilities in Comet and other AI browsers - Brave
- X Post thread on Atlas vulnerabilities
- X Post from OpenAI CISO on Atlas
- Simon Willison: OpenAI CISO on Atlas
- We let OpenAI’s “Agent Mode” surf the web for us—here’s what happened - Ars Technica
00:26:19 — Statement on Superintelligence Campaign
- Superintelligence Statement campaign
- ‘Time Is Running Out’: New Open Letter Calls for Ban on Superintelligent AI Development - Time
- X Post: Tegmark promotes superintelligence statement
- Hundreds of Power Players, From Steve Wozniak to Steve Bannon, Just Signed a Letter Calling for Prohibition on Development of AI Superintelligence - Futurism
- Defining AGI initiative
- X Post from Dan Hendrycks on AGI definition
- X thread from Dan Hendrycks on AGI
- X Post from Mark Gubrud with claim of coining “AGI” in 1997
- AI Safety Memes on AGI
- When it Comes to AI, What We Don’t Know Can Hurt Us - Time
- X Post from Tegmark: context on AI risks
- X Post from Yann LeCun on analogy reaction
- X Post from Dean W. Ball
00:43:18 — Anthropic Plays Defense
00:50:14 — Amazon’s Robot Workforce
00:56:16 — Meta AI Layoffs
- Meta Cuts 600 Jobs at AI Superintelligence Labs - The New York Times
- Meta Layoffs Included Employees Who Monitored Risks to User Privacy - The New York Times
00:59:23 — OpenAI Controversy Over Suicide
- OpenAI prioritised user engagement over suicide prevention, lawsuit claims - The Financial Times
- X Post from Cristina Criddle
01:02:26 — Ohio Bill Would Ban AI Marriages and High Schoolers Are Having Romantic Relationships with AI
- Ohio Seeks to Ban Human-AI Marriage - Futurism
- Hand in Hand: Schools’ Embrace of AI Connected to Increased Risks to Students - Center for Democracy and Technology
- An Astonishing Proportion of High Schoolers Have Had a “Romantic Relationship” With an AI, Research Finds - Futurism
01:08:03 — OpenAI Tries to Automate Junior Banker Work
01:10:43 — Sora 2 Roadmap
01:14:01 — Tesla Autonomy
This episode is brought to you by our MAICON 2025 On-Demand Bundle.
If you missed MAICON 2025, or want to relive some of your favorite sessions, now you can watch them on-demand at any time by buying our MAICON 2025 On-Demand Bundle here. Use the code AISHOW50 to take $50 off.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: The economy stability and growth over the last like 12 to 18 month is in large part being driven by capital expenditures for AI on the infrastructure for AI itself. If you extracted energy and data center plays from GDP, it's like, do we even have growth? Becomes a real question. All of this is now starting to happen where everyone's sort of simultaneously realizing like, oh my gosh, this is a huge deal and we have no idea how to handle any of it in education, business, and in the economy.
[00:00:30] Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and marketing AI Institute Chief Content Officer Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.
[00:00:59] [00:01:00] Join us as we accelerate AI literacy for all.
[00:01:06] Welcome to episode 176 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike put, we are recording Monday, October 27th at 11:00 AM Eastern time. It seems like it's gonna be some stuff going on this week. I don't know, like it's usually by like Monday morning. You can already get a sense of, of it's gonna be crazy or not.
[00:01:27] I don't know that it's like crazy big new model drop this week, but always something going on. So we will keep track of it. I've already got like five things in next week's sandbox. You probably saw that already. Yeah, because like Saturday, Sunday, I just started putting stuff into next week. 'cause this week was already packed.
[00:01:43] all right, so this episode is brought to us by MAICON 2025 on demand. If you missed MAICON 2025 or if you were there and wanna relive some of the sessions, you can do that now. So we have 20, top breakout and keynote [00:02:00] sessions that are available as part of the on-demand package. There was about 47 or so sessions overall, so almost half of the sessions were recorded.
[00:02:08] They're now available on demand. That includes my opening keynote, the Move 37 moment for knowledge workers. becoming an AI driven leader, overcome fear, accelerate growth, beat the competition with Jeff Woods that was very highly rated. It was an incredible talk. We've got Mike's 30 AI tools shaping the future of marketing, which is always a showstopper and always a packed house, standing room only for that one.
[00:02:31] Andy Crestodina Better Than Prompts. How to Build Custom GPT for Marketers. Michelle Gansle, former Chief AI officer at McDonald's, at empowering teams in the age of ai, how McDonald's is building an AI-Ready workforce. Jeremiah Owyang was amazing. The future of AI marketing. We had the human side of AI with Cath Anderson, Xiao Ma from Google DeepMind, and Angela Pham from Meta.
[00:02:53] your interview Mike with Alex Kantrowitz of the Big Tech podcast was incredible. The rise of the filmmaker with PJ Ace, just [00:03:00] endless. And then my final talk with. Dr. Brian Keating, which was amazing. I I, I've been thinking Mike ever since that talk like, oh, should've asked him this question about the origin of time and like, it just, it was an amazing conversation.
[00:03:12] So that was a re-imagining what's possible, with Dr. Brian Keating. So you can get all of that and more. So again, 20 sessions total available on demand. Now, you can just go to MAICON.ai, M-A-I-C-O-N, do AI and just click on the 2025 on demand and AI show 50. We will get you $50 off of that. So again, MAICON.ai and you can go experience MAICON two, 2025 or relive it if you were there with us.
[00:03:40] okay. We, we have a new thing we're gonna do next week. We're ju I'll just tease it this week. I mentioned this a few weeks back, or maybe a month or two ago, I threw out this idea. Mike and I are gonna start doing some real time research. We're really excited about this. We were gonna do it today, but we, we actually came up with a better user experience to introduce this.
[00:03:57] right before we jumped on today. So we're gonna [00:04:00] hopefully start this next week where we're gonna start doing some real time research with our audience to find out how they feel about different topics, thoughts on things, give people a chance to kind of have a voice add question. So we're really excited about the idea of kind of getting more engagement going with our listener base.
[00:04:15] And so AI pulses are coming, hopefully starting next week, so stay tuned for that. I guess that'll be, I think that'll be 1 78. 'cause we, I think we have a second episode this week. Yeah. 'cause we have an intro to AI tomorrow. Is that right? Is that tomorrow? Yeah. Yeah. It's tu yeah. Tuesday. So when you're listening to this, we have an intro to AI and then we'll do another AI answers this week.
[00:04:34] So, watch for that next week. AI Paul's surveys. We'd love to hear from you. Should be fun. We're gonna kind of like experiment a little bit with how to do this and how to introduce it. but we're thinking it's gonna be like at a topic level and we have really interesting topics to then gauge how, how do people feel about it, that sort of thing.
[00:04:51] So. really excited about that. So stay tuned and otherwise, Mike, I'll turn it over to you to get us rolling this week.
[00:04:56] ChatGPT Atlas Release
[00:04:56] Mike Kaput: Sounds good, Paul. So we had a big release this week because [00:05:00] OpenAI has officially launched chatgpt Atlas, which is a new AI powered web browser designed to blend some automation, features, memory and AI assistance directly into your everyday browsing.
[00:05:13] So Atlas essentially turns ChatGPT into a companion that lives alongside your web activity. It can summarize pages, compare products, analyze data directly from sites, all from a sidebar. Users can highlight text and emails, documents, and calendars to rewrite or refine content instantly using ChatGPT.
[00:05:33] And the standout feature here is agent Mode, a preview tool for plus pro and business users that lets chatgpt take actions on websites autonomously under user supervision. So for instance, in the livestream demo last week that OpenAI used to release this. AI navigated retail sites and even purchased groceries on its own.
[00:05:56] Atlas also includes some memory features that lets users decide [00:06:00] what ChatGPT remembers across sessions and some privacy controls for clearing history or browsing incognito. Now, right now Atlas is only available via a Mac OS app, but OpenAI has said Windows users will be taken care of soon enough here.
[00:06:16] So Paul, I wanted to kind of maybe kick things off. I have a few thoughts from kind of my tests here so far, but what were your initial thoughts of this? Have you had a chance to experiment with that list at all?
[00:06:26] Paul Roetzer: I have not experimented personally yet with it. interesting features for sure. think my initial reaction was, Google will, will likely, you know, introduce very similar capabilities here.
[00:06:39] So just for context, so spend more time thinking about like the bigger picture of the browser wars. Mike, I've, I'd love to actually hear your feedback on your initial experimentation. But just so people kind of can frame this, Google Chrome has 70 ish, 71% of the market share. Now it varies by pla by device.
[00:06:57] So mobile might vary slightly then desktop, [00:07:00] that kind of thing. But Google Chrome is the dominant player here. Apple Safari is about 14%. Microsoft Edge is 5% Firefox, 2% Samsung Internet, which I assume is Samsung devices. 2% opera is around 2%. 1.7 Perplexity Comet, you know, that's another AI browser.
[00:07:18] Doesn't register on these yet. but in essence what's happening is Google is so dominant that new entrant have to either other undermine that dominance in some way with something completely different, which is in essence what they're trying to do with chat. gpt is like just reimagine the browser, or they have to coexist by carving out niches.
[00:07:37] so that's the challenge everyone faces is Google is, is a major dominant player here. However, the day this came out, Google's shares dropped almost 5% on the news. Mm. Which I thought was weird 'cause it's like we already knew they were working on a browser. There was already a form of the browser living within agent mode in ChatGPT.
[00:07:55] So I have indirectly used a, a variation of it, but not personally like the agent [00:08:00] was going in the work. you covered some of the features, Mike. There was, Ben Goodger, who's on on the Atlas team. He tweeted out a, a little context that I'll, I'll share here. So he said, he joined the team last year and since then they've built an internal, a small internal team that worked on Chate Atlas.
[00:08:18] what they defined as a new web browser design for the AI era, an era that will be shaped by more human natural language interaction agents. And ultimately AGI. Ben went on to say, ChatGPT is woven into the fabric of the product, so it's always nearby and ready to go. He said that as he's used Atlas, he noticed he's become more curious.
[00:08:39] He asks more questions about the web around him. he said, I'm finding better deals online, interpreting my personal health data, understanding my kids' homework and much more. It's all making me feel like a more informed, more self-actualized human. he then went on to say, with its built-in browser, agent Atlas can browse the web for you, including to your logged [00:09:00] sites if you choose.
[00:09:01] We'll talk more about that in the, in the next topic, Mike. And it's super fast. This is one of those feel the AGI moments for me, which I thought was interesting. ask it to find all the ingredients for recipe and load them into a shopping cart for you. Ready to check out. Ask it for tips on how to write a better doc or use advanced features of your spreadsheets or even watch it play a web game.
[00:09:22] Bloomberg, I just noted, this, this comment that was in a Bloomberg article that we'll put in. Sam Altman said on the live stream, this is an AI powered web browser built around ChatGPT. He said it represents a rare, once in a decade opportunity to rethink the browser. and then, you know, just big picture, Mike, what it means without talking about the safety side and the memory and the, you know, whether you are using cotton eat or not.
[00:09:47] What it's, what it's indicating to me is the shift in agent to agent communications and commerce. This is something we've talked about as sort of a recurring theme we've been touching on the last few months. Where as, as brand marketers, as [00:10:00] business leaders, from a customer success side, a sales side, we have to start realizing that a lot of the communication that individuals have with our brands in the future and the purchasing decisions they may make, might not actually be them.
[00:10:15] It, it may be their agent that is doing these things and it's gonna be hard to delineate in your site traffic. when that starts to really happen. So this starts to play out in SEO ads content strategy because you have to now start thinking about the AI interface for agents, not just humans. and so the business, like how it gets found, how people in interact with it, how they make these purchasing decisions, this is all stuff we have to start really thinking about.
[00:10:44] Now. One other thing, Mike, I'll mention, referenced Google earlier and how I would expect them to make some pretty significant updates here. They're already integrating Gemini in, they're building agent mode into search. But keep in mind, up until the beginning of September, the, [00:11:00] the Google was facing an antitrust case that potentially had them being forced to sell Chrome off.
[00:11:05] So there's a decent chance that Google has had all of this same stuff on their roadmap already. Right. But they certainly weren't going to launch all of that if they were gonna be told by the Justice Department that they had to sell off Chrome in September. Once they made it through that case, now I feel like it's full go, that they can start do this.
[00:11:26] And I would expect before the end of 2026, we will likely see some pretty significant enhancements to how Chrome works. and, and, and thereby how we start to interact with these same kind of capabilities within there. I don't know. I personally, like, I'm not super excited about this idea. Like I'm not in a huge hurry to use Atlas.
[00:11:47] Like all those descriptions he just provided about like my kids' homework and asking more questions, like I get that from Chrome and I get it from just using Gemini and ChatGPT directly. So again, not that this won't work and that won't, won't be a [00:12:00] major product for them. It's one of those like, I'm gonna kind of struggle to find my personal use cases that would be worth me switching to from a workflow that already works really well for me and is already pretty efficient.
[00:12:10] And love Chrome. Like it would be, I was actually telling you, Mike, I'm, I'm moving one of my email accounts within our Google workspace to a, to a different email account and I have two. Versions of Chrome I'm logged into. Right. And I'm realizing what a pain it is to change over. 'cause all my, all my bookmarks, I have my tabs grouped, like everything I do exists within Chrome already.
[00:12:32] So the idea of having to like change that to a different thing is like, oh my God. And then opening me up to like, now you're giving another company access to all the things you browse and everything you do. So that's my thoughts on it.
[00:12:44] Mike Kaput: Yeah, I, that's similarly where I landed. We'll talk about all the security and safety stuff, but honestly I was, and you know Simon Willison, who's an AI researcher as well, that we follow and we've included some stuff here from him.
[00:12:56] He basically just said, I am struggling deeply to find [00:13:00] relevant, valuable AGI agentic use cases in my own work at the moment. I'm sure that will change, but I'm not there yet. That's exactly how I feel. I'm like, okay, this is really cool. I have no doubt this is where we're headed, but I just think of the AI verification gap we've talked about to do anything useful with this AGI Agentic browser.
[00:13:18] A, it needs to work, but BI need to verify that it worked and verifying that it worked is going to take me way more time and energy and potentially some security issues. Yeah. Than me just doing the thing myself. Now, maybe I'm not using the internet the right way, but I don't do enough here where an agent could go do all this work for me.
[00:13:38] That's very different for other people though, perhaps.
[00:13:40] Paul Roetzer: Yeah. And they're gonna push heavy on the agent mode. And, and in, in that same article you referenced with Simon, he said not only does he find it pretty unexciting to use, but he tried out agent mode and it was like watching a first time computer user painstakingly learn to use a mouse for the first time.
[00:13:55] I have yet to find my own use case for when it's this kind of interaction. Feels useful to me though. I'm not ruling [00:14:00] that out. Yeah. It's like, 'cause it is, it's kind of like, it, it's finding its way and it's probably gonna get really smart and really fast pretty quick. But it you, you're allowing basically an agent to learn how to function on the web.
[00:14:12] and it, it's probably gonna be a little slow and a little painful, and it's gonna click the wrong things and maybe the really wrong things that cause issues, some major headaches. But yeah, so it's interesting. It is, it's a massive market opportunity. and they want to own the user interface, but again, this starts getting to like productivity platforms and shopping and like, they're trying to go well beyond information gathering and, that opens up ad potential.
[00:14:38] So this is definitely a monetization play. that's like part of the bigger vision for openAI's and the role they wanna play in society and in business. But, yeah, so it's, it's very early and again, it's only as you mentioned, like only available on Mac OS at the moment. Yeah.
[00:14:53] Mike Kaput: Yeah. And one final point here, and then we could talk about the security implications is like you alluded to this with the agent to agent [00:15:00] stuff, it just occurred to me that if we really extrapolate out a few years, like if this stuff works a hundred percent, you're just relying on your agentic browser for everything.
[00:15:10] Brands better be ready to lose total control over the funnel, over the buyer journey over. And that's been happening to some degree with for 15 years, with online social media, trends. But it just really struck me. I was like, you just need to make sure your website, your web presence has everything that an agent might need to know at some point, and it's going to remix it and reuse it however it wants and you don't have any control over.
[00:15:37] Paul Roetzer: Yeah, there's so many downstream effects of this. Like, as you're saying, like the funnel and stuff, you start to think about lead generation, like in a B2B world where you're so dependent upon lead generation and capturing contact information and nurturing those people. And yeah, I mean, what if it starts to shift where people just don't ever give you their email address?
[00:15:55] Like they're not gonna, you know, visit your site themselves. They're just gonna capture whatever information they [00:16:00] need directly in their AI assistant, and then the AI assistant will go and do whatever research needs to happen. And I don't know, I mean, it's, again, it can be daunting or this can be exciting because.
[00:16:09] Nobody knows. And so there's this opportunity for all of us to be the ones who kind of go figure this stuff out.
[00:16:17] ChatGPT Atlas Security Concerns
[00:16:17] Mike Kaput: Alright, our second topic this week, Paul, is related to chatgpt Atlas. We're specifically focused on the fact that it is facing immediate scrutiny from security researchers who say a gentech browsing creates a dangerous new attack surface.
[00:16:31] So we've talked about a little bit Atlas is introducing things like browser memories and in experimental agent mode that can read pages, click buttons, carry out tasks. These features make it really interesting, but also exploitable. So there's a number of, articles and commentary we're tracking where experts are warning of prompt injection attacks, where hidden instructions on webpages can actually hijack the agent that Atlas is using to exfiltrate emails, overwrite [00:17:00] clipboards with malicious links, or even initiate.
[00:17:02] Download. So basically what this is doing is the agent is collapsing the boundary between data and instructions. So if it's reading a prompt in certain contexts that are hidden on a page, it may actually take that prompt and think it is instructions, which then turns the agent into an attack vector. Now, open AI's, chief Information Security Officer Dane Stuckey, released a statement saying the company has performed extensive red teaming.
[00:17:27] He added, they have added overlapping guardrails. They're investing in rapid response systems, and he acknowledged that prompt injection remains what they call a frontier unsolved problem. So that's a start. But you know, there's a lot of commentary from the security community about just these huge privacy flags and issues with these exploits that are just straight up not yet addressed.
[00:17:49] So basically they're arguing this is not ready for security. Prime time, and especially non-technical users may not even realize. That there are, [00:18:00] exploits possible that are unique to AGI Agentic browser. So Paul, I was curious of your take here because it just, like, this is the elephant in the room on last topic, right?
[00:18:09] Is the security issues here giving your agent the ability to go do things for you. The fact that it can then be exploited seems like an absolute nightmare. Like, I don't know how you actually use this in any enterprise today if you wanted to.
[00:18:22] Paul Roetzer: I, don't either. so as the CO of a company, again, that's my first thing is like, do not turn this on.
[00:18:27] Do not use this, you know, through the company accounts, company computers, you know, unless it's in a very controlled environment and, and we know what we're doing, you don't want everybody just going in and testing this. So we'll put a few links in here related to this. There is the help article directly from OpenAI where they talk about the specifics around data control.
[00:18:47] So in the section where it says include web browsing, it says this setting is available when, improve the model for everyone is enabled. This is separate from your ChatGPT settings, by the way. So you would have to actually control the Atlas [00:19:00] settings separately. There was a really interesting thing here.
[00:19:02] I haven't seen openAI's address this yet, but I saw this brought up by a couple people. It says the improve the model for everyone in Atlas controls whether the content you browse in ChatGPT Atlas can be used to train our models. What does that mean? Like, so if, if I, if I am using Atlas and I go to someone's website with copyrighted material on it, I get to decide if they can train on that.
[00:19:28] Like how does that work? So I don't know if it's just a, they misspoke. It doesn't seem that way. It seems quite intentional, but I don't know what train our models on someone else's content means and how a user could be the one that decides that that's what happens. So that's an interesting one. We'll wait for some clarification on.
[00:19:48] And then in the browser memories, this is important area for people to understand. So they say. Browse memories. Let ChatGPT remember useful details from your web browsing to provide better responses and [00:20:00] suggestions. No big deal. Kind of like cookies, like you know that when you're browsing, like it remembers things.
[00:20:05] but you can go in and control the setting. But then a little more context, it says, as you browse in Atlas web content is summarized on our servers. We apply safety and sensitive data filters that are designed, keep, keep that word in mind, designed to keep out personally identifiable information does not mean they succeed at it all the time, but they're designed like government IDs, social security numbers, bank account numbers, online credentials, account recovery content and addresses and private data, like medical records and financial information.
[00:20:38] We block summaries altogether on certain sensitive websites like adult sites. So just to make this super clear to everybody, they monitor everything you do. It remembers everything you do. It re including all of your personal information and activity, and it summarizes all of that [00:21:00] unless their data filters work correctly and they extract it all.
[00:21:05] So let's assume that those work, you have to know when you're going to use this, that, that, that's how this technology works though, is it captures everything you do so it can use it. So you are now trusting OpenAI that their filters work and they're not able to be manipulated. and that that stuff doesn't end up somewhere you don't want it to.
[00:21:27] So just again, clarifying. So Simon Wilson, Mike, you mentioned in that same article that we talked about in the first topic, he said The security and privacy risks involved here feel insurmountably high to me. I certainly won't be trusting any of these products until a bunch of security researchers have given them a very thorough beating.
[00:21:47] One other thing he mentioned was there was another detailing announcement post that caught his eye. He said, website owners can add aria tags, A RIA tags To improve ChatGPT agent works. So this is a note to like, [00:22:00] again, the, the technical side and the marketers. so ARIA tags, use the same labels and roles that support screen readers to interpret page structure and interactive elements.
[00:22:10] So just make sure you're talking with your team about that. So when we talk about getting your site ready for this kind of AGI agentic browser, that's the kind of thing. one other I, I'll mention that we'll put a link in is this prompt injections. Mike, you brought this one up. just to give a little clear how this works.
[00:22:27] So there's a company called Brave, again, we'll drop this link in. They had an article about Unseeable prompt injections, and here's what theirs said. So building on previous disclosure of the perplexity comment vulnerability, we've continued our security research across the AGI agent browser landscape.
[00:22:44] What we found confirms our initial concerns. Indirect prompt injections is not an isolated issue, but a systematic challenge facing the entire category of AI powered browsers. As we've written before, AI powered browsers that can take actions on your [00:23:00] behalf are powerful, yet extremely risky. If you've signed into sensitive accounts like your bank or your email provider in your browser, simply summarizing a Reddit post could result in attack, being able to steal money or your private data.
[00:23:13] And then it actually goes into like a very understandable overview of like how this basically works, how the trigger works. But long story short, if people know what they're doing and they want to get at data, they can do this kind of thing quite easily. And then I always laugh when we cite Pliny of the Liberator, but like there's this amazing Twitter account I would suggest following, we'll put the link in, but Pliny of the Liberator, it's an actual person that's a pseudonym, obviously.
[00:23:39] But what that person put in, he said in, in my opinion, a very real security risk to be aware of for AI browsers is the humble, yet mighty vulnerability of clipboard injection, which is like you copy and paste something unbe unbeknownst. So not only is that the prompt injection where maybe you click on something and automatically I inject it, but if you do a copy paste on a [00:24:00] page where someone is like hidden some text data, that's actually like an instruction to your system of what to do.
[00:24:05] Long story short, as you mentioned, Dane Stuckey, the chief, information security officer, OpenAI, had a very long-winded tweet about this, and, and Dane doesn't tweet very often, so like you could tell this became an issue real fast. And so he had probably like a 500 word tweet, about what they're doing, and it's because.
[00:24:27] This is very obviously not safe for work stuff. Yeah. So yes, the idea of this is cool. It is very early. you may individually struggle to find use cases where this is any better than Chrome. probably it isn't I would say at this point. But you can see we're openAI's is trying to go with this and how they're trying to shift behavior and really get you to treat openAI's as chatgpt as a platform for your life and your work.
[00:24:52] That's what they're trying to get to. This is like a step in that process, not the end game.
[00:24:57] Mike Kaput: You also wonder for myself personally and just [00:25:00] in general, where is the tipping point? Like I look at this and say, okay, if openAI's came out with a system card tomorrow that says, Hey, by the way, we solved everything, it works perfectly.
[00:25:09] Sure, I'll go test it. Right? I still don't know until I've actually verified it. So when am I gonna hit that point? Personally, I don't know. Curious about wider consumer behavior too. They seem to just be releasing this, and it is deeply unsafe at the moment. Yeah. How is that going to change behavior? Are are people just gonna get numb to it?
[00:25:28] I don't, don't know the answer to
[00:25:30] Paul Roetzer: that. Yeah. So, and maybe, Mike, this is a good example of what we're gonna do with our AI pulse surveys. We're talking about, like, this is the exact thing. Like, okay, let's ask our audience, like, are you, do you feel safe trying an a genetic browser kind of thing? Like this is exactly, and maybe we'll add that as a question next week as like a follow up to this week's.
[00:25:47] But, we don't know. Yeah. And, and I think that's why it's so fascinating with that real time research from people and find out where people at with this. And if you're a business leader, would you ever allow the testing of this in your company outside of like, it in [00:26:00] a, in a protected sandbox kind of thing.
[00:26:02] Mike Kaput: Yeah.
[00:26:02] Paul Roetzer: So, yeah, I guess long story short here is experiment at your own risk. and just be real cautious with how you use it and, and that it's very early. And so if you don't get it, it's, there's probably a reason why it's. It's not really ready for primetime stuff yet.
[00:26:19] Statement on Superintelligence Campaign
[00:26:19] Mike Kaput: Alright. Switching gears here a bit with our third big topic. This week there's a new open letter out that is urging a halt to the race towards super intelligence, which is the kind of AI that could surpass humans at virtually all useful tasks.
[00:26:32] So this is a letter coordinated by the Future of Life Institute and this statement is notable because it has more than 700 signatories, which include five Nobel Laureates, AI Godfathers, Joshua Bengio, and Jeff Hinton. Apple co-founder, Steve Wozniak, Richard Branson, Stuart Russell, big AI guy, Steve Bannon.
[00:26:52] There's some weird political and cultural and religious figures on here as well. Prince Harry and Meghan Markle. And basically the message is super blunt, [00:27:00] super short. It's a very simple letter that says we call for a prohibition on the development of super intelligence not lifted before. There is broad scientific consensus that it will be done safely and controllably.
[00:27:11] And strong public buy-in. So the organizers of this say that basically time is running out and this tech could arrive within a couple years, which is why they're doing this now. Interesting. They released some polling they did alongside the letter that finds apparently 64% of Americans favor waiting until super intelligence is provably safe and controllable and just 5% want rapid unregulated development.
[00:27:36] So we'll talk about that in a sec, but that seems interesting to me. But I guess my question for you, Paul, is like, why this? Why now? Like they did a previous letter. They were behind that, that six month pause letter we covered a while ago that obviously didn't really do anything. Is this just like for awareness, do they actually hope a ban could happen?
[00:27:57] Paul Roetzer: I do think it's primarily for awareness, [00:28:00] and to get societal support maybe for more push towards regulation. So the Future of Life Institute of People aren't familiar, the mission is to steer transformative technologies away from extreme large scale risks and towards benefiting life. Max Tag Mark, who you mentioned, is the president.
[00:28:17] he's also the author of Life 3.0 Being Human in the Age of ai, which you, I think you and I have both read. Great. Yeah, Mike, good book. And then our mathematical universe, which I actually need to add to my list. I've been like, that's been a secretary s like a, a thing I've been very fascinated by lately is like the fundamental nature of mathematics and time and stuff.
[00:28:36] Totally unrelated. so they, they're big on ai, you know, AI safety research fits right into their mission. So I looked this morning Mike, and I think it was up to 47,000 signatures and if I read it correctly. So yeah, they're, a lot of people are signing this. when they released it, max Tegmark tweeted, A stunningly broad coalition has come out against Skynet.
[00:28:58] I thought that was really interesting wording. [00:29:00] AI researchers, faith leaders, business pioneers, policy makers, national security folks and actors stand together. From Bannon and back to Hinton, Wozniak and Prince Harry, we stand together because we want a human future. Hashtag keep the future human. The statement you read, Mike, as it was just two quick points, the context.
[00:29:21] 'cause again, the, the webpage itself, which ironically like my, in my home browser was blocking me from going to it. 'cause it said the site wasn't secure. And I was like, oh no, it's not working. That's ironic. but then I went to my, like, you know, my, my cell plan and it was, it took me to it. So if you do get blocked, that's, that's why if, if when you go there, there's nothing there.
[00:29:41] so anyway, the statement, said, context, innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, medi leading AI companies have the stated goal of building super intelligence in the coming decade that can significantly outperform all humans on essentially, [00:30:00] essentially all cognitive tasks.
[00:30:02] This has raised concerns ranging, ranging from human economic, obsolescence and disempowerment, losses of freedom, civil liberties, dignity and control to national security risks, and even potential human extinction. The succinct statement below aims to create common knowledge of the growing number of experts and public people who feel the same way, basically.
[00:30:23] So, as you mentioned, they define super intelligence as a system that can surpass human performance on all useful tasks. there was a separate thing, Mike, that I think makes sense to get into is like this, a definition of AGI paper. Yeah. But first I wanna talk about the counterpoint to the statement. So the one I I read and that I saw a whole bunch of people, referring to, including max tag.
[00:30:46] Mark is from, Dean Ball. So he's a senior fellow at, what is this? Join, foundation for American Innovation. but he's obviously someone other people listen [00:31:00] to because a lot of people were kind of commenting on this. So here is Dean's point of view, and then I will offer my, my contextual opinion here, I guess.
[00:31:08] So he in reply to Mark's post said vague statements like this, which fundamentally cannot be operationalized in policy, but feel nice to sign are counterproductive and silly just as they were two or so years ago when we went through another cycle of nebulous AI statement signing, let's set aside the total lack of definition of super intelligence.
[00:31:32] Give him some credit. They did put a definition, I'll even grant the statement drafters that we all arrive on a mutually agreeable definition. Then assume we write that definition into a law which says no super intelligence until proven safe. So he is basically saying, if we agree on this definition of super intelligence, let's assume we do.
[00:31:51] Then we move forward and say, okay, we can't have it until it's proven safe. So Dean then continues, how do we enforce this law? How do you prove [00:32:00] super intelligence will be safe without building it? How do you prove a plane is flight worthy without flying it? You can't. So the logic would go, we will need a sanctioned venue and institution for super intelligence development where we will experiment with the technology until it's quote unquote proven safe.
[00:32:17] Then says, who decides this by the way? And what happens after it is proven safe. Question mark. this institution would need to be funded somehow by all governments with similar prohibitions, which the statement drafters though probably not all signatories would likely argue, needs to include every country on earth, including US adversaries.
[00:32:38] A global governance body whose purpose is to build the thing the statement drafters have told us is so dangerous, partially because of the power it could confer on those who control it. A consortium of governments, which if, if successful, would exercise unilateral control over how to wield this technology and against whom to wield it.
[00:32:57] The same people who uniquely possess militaries [00:33:00] police in a monopoly on legitimate violence. the same people who possess, in other words, and in the final analysis, the right to kill you or confiscate your property if you do not listen to them. Newly empowered with the most powerful technology ever conceived.
[00:33:14] Does that sound safe to you? This sounds to me like the worst possible way to build super intelligence. I reject all efforts to centralize power in this way, and I reject blobby statements with no path to productive res realization in policy. So, yeah. I'll, I'll, we will come back to the definition of AJ in a second.
[00:33:38] Mike, I'll, I'll just kind of stop there. So, my feeling here is, Do, do we need regulation? Yes. A absolutely. do we need more collaboration and less acceleration? I would be of the opinion of, yes, we do. Like this is, it doesn't feel like the way we're doing this right [00:34:00] now is the safest way. Yeah. But there's nothing that Dean tweeted that I disagree with.
[00:34:07] Like how, like, all we've ever heard from Demis and Sam and others is like, we need like a, like the, the council that like controls nuclear weapons. Like we need something to that effect. Okay. Who's putting that together? Like, where are we right now? Like, I don't feel like the superpowers of the world are currently in a place where we're gonna be able to negotiate that.
[00:34:30] Like there there's some other stuff that, that we're trying to work out together that isn't going so smoothly. So the idea of like, well, let's. I'll get to the table and negotiate the most powerful thing ever created in human history that could imbue unspeakable capabilities and power onto those who hold it and create it first.
[00:34:49] But yeah, let's all get together and like figure that out and ba and balance that out. Like, I don't know the answer. Like, I am nowhere near smart enough to like be the one who solves how you do this. [00:35:00] All I know is it doesn't feel like right now is the right path. I don't know that signing a statement does anything other than create awareness about the thing, which may be, again, all it's meant to do at this point is just like, get society aware and talking about this.
[00:35:18] Mike Kaput: Yeah.
[00:35:18] Paul Roetzer: So that maybe they can then get down further down the road of regulation. I don't know. Do you have any thoughts on that before we talk about the AGI stuff?
[00:35:27] Mike Kaput: No, I, yeah, I couldn't agree more. It just, maybe this benefits whatever goals they have and it's certainly their right to go do that. It strikes me as like, especially with the.
[00:35:37] The intellectual firepower in some of the names on this letter. These are people that I think can have a real impact on, like specific policy. If they came out and said, Hey, you know, deep fakes are the biggest issue facing us right now, here's what we should do to legislate that, or something like that. I tend to think that would be much more helpful and impactful.
[00:35:55] But I'm also biased towards kind of the middle of the road realist [00:36:00] perspective here.
[00:36:00] Paul Roetzer: It's so messy. 'cause you're right, like if, if you took the creatives, like I know there's like actors and stuff on there. Yeah, yeah. Go, go focus on intellectual property capital. Exactly. Right. Now, does that lead us down a path of like what we're doing right now with laws where it's like, all right, federal's not gonna do it.
[00:36:16] Let's just do it as a state. Now you get a thousand different bills progressed to like, go after this thing. yeah. And they like, they have religious leaders and like, how do you solve that? I mean, we're talking about the legitimate questioning of like the basis of. What billions of people believe.
[00:36:35] If like, you can create intelligence that's determined by someone to be conscious and sentient, which is like impossible, and you really just believe, like, I don't know. I mean, this is just such a big thing. we can't agree on whether it's gonna take jobs away. There's still these people who like are on this side of, it's not gonna impact the economy.
[00:36:54] At every data point we see tells us that's not true. And it's like, don't look over here kind of stuff. [00:37:00] so I don't know. do struggle. feel like the, the, you know, getting back to the root of the definition thing, like they did their best to try and put a definition to it. But then simultaneously, and I don't know that these were intended to be in unison, but am assuming since max tag mark's on this definition of AGI paper.
[00:37:19] Mike Kaput: Yeah.
[00:37:19] Paul Roetzer: it, it was intended to sort of coincide with this statement. So what we're referring to here, and we'll put the link in the show notes as well, is literally a paper came out last week called A Definition of AGI as 33 authors, including Dan Hendricks of the Center for AI Safety, max Tag, mark Future Life Institute, Eric Schmidt, former CEO and chairman of Google, Joshua Bengio, one of kind of the godfathers of AI, along with Jeff Hinton.
[00:37:43] And so their paper, it starts off as the lack of a concrete definition of artificial general intelligence obscures the gap between today's specialized AI and human level cognition. this paper introduces, qua qu quantifiable framework to address this [00:38:00] defining AGI. So again, we're not talking super intelligence now.
[00:38:02] This is the AGI what comes before super intelligence. As matching the cognitive versatility and proficiency of a well-educated adult. So that's a new one. It's, it's like complimentary to the definitions we often use. the trick here becomes what is a well-educated adult like that don't know if they, I didn't see them define that as like someone who graduated from college.
[00:38:26] I don't, I don't know what that is. But the cognitive versatility and proficiency, they actually then did apply a framework, which I found rather intriguing. So outside of the, AGI paper from Google DeepMind a few years ago that we often cite where they talk about generality and performance and that the leveling the levels of AGI there, this is probably the most advanced I've seen that applies a real framework, you know, believe in the framework or not, at least they're making an effort here.
[00:38:54] So theirs looks at 10 core cognitive domains. So they look at knowledge, reading and writing, math [00:39:00] reasoning, working memory, memory stage, memory retrieval. Visual, auditory and speed. And I'm not gonna go into like breaking down each of those, but it's a really good quick read if you wanna understand.
[00:39:12] They're in essence trying to look at like the human capabilities and the human mind and where intelligence comes from and where our ability to function and act in the physical world comes from. And they're trying to then take specifically GPT-4 and GPT-5 and say, where are they on this spectrum?
[00:39:27] And what they find is the jagged part comes from, it's getting really good at math and reasoning and writing and things and knowledge pretty good. But it's not so good when it comes to like working memory and, and visual and auditory and speed. Like it doesn't, it struggles there. But they saw that the AGI scores based on their framework jumped from GPT-4 at 27% to GPT-5 at 57%.
[00:39:51] Mm.
[00:39:52] So 30 percentage points jumped. So there's still a substantial gap before AGI, but now they're looking at it and saying, we are heading very [00:40:00] quickly in this direction. Now, one key thing is they look at human level ai, not economically valuable ai, which they distinguish between, meaning it does what humans does, but it not in an economical way.
[00:40:12] They're not trying to look at can it do jobs per se. They're just looking at human cognitive abilities, and so they differentiate that as well. So, you know, I don't know. I think overall, Mike, like you asked, why now? Like at the start, like why is this all of a sudden the conversation, the super intelligence thing?
[00:40:28] I think because in part, the risks are becoming very real. Like we've known that there was risks. If we got to this point where the AI sort of starts to just take off and be at these superhuman levels, and maybe we don't even know what it's doing when it gets to that level, maybe at some point it just gets beyond our own cognitive ability.
[00:40:44] We have Sam Altman saying point blank. They're basically a super intelligence lab. We have Meta and Zuckerberg literally calling it a super in intelligence lab. We have benchmarks that are tracking towards progress against economically valuable work. the economy, stability and growth over the [00:41:00] last like 12 to 18 month is in large part being driven by capital expenditures for AI on the infrastructure for AI itself.
[00:41:07] If you extracted energy and data center plays from GDP, it's like, do we even have growth becomes a real question. And then international laws like e the EU AI Act state laws are starting to progress. So all of this is now starting to happen where everyone's sort of simultaneously realizing like, oh my gosh, this is a huge deal and we have no idea how to handle any of it in education and business and the economy.
[00:41:31] so yeah, it, it's, it's wild, but I, like to see this stuff progressing. don't know, zooming back into the statement itself, if it has any real meaning or plays any role in progressing, but I don't. I mean, I'm glad people are trying, like I, yeah. We can't just sit back and just hope that the three to five AI labs just figure this all out on their own with no pressure from society.
[00:41:54] Mike Kaput: I'll tell you the funniest thing is if that definition of AGI benchmark where they're saying, Hey, GPT-5's at like [00:42:00] 57%, if that kind of carries through. The funniest thing is that I'm gonna have to dust off my Ray Kurzweil because his prediction was AGI in 2029, he's gonna stick the landing on, and he made that prediction 25 years ago.
[00:42:13] Paul Roetzer: Yeah. And Shane leg was back in 2000 and you know what, seven, eight. Yeah. co-founder of DeepMind, his, I think was 2028. Like they've and Demis. So yeah, every, everybody who had these like extended timelines, they're looking pretty smart right now. And the scaling laws are on their side that we do get some form of AGI.
[00:42:34] And again, I've said it before, I'm not so convinced we don't already have it. It just needs to be finely tuned for specific jobs. Right. It's like think the foundational models we have. When trained to do specific things could, you could argue that they are the foundation of AGI already. Yeah. And if we shut off all future growth, it would just take someone going and like open eyes doing go hundred, a hundred, go hire a hundred bankers and like teach it to [00:43:00] be superhuman at banking.
[00:43:01] Like that's really the, seems to be the only barrier. now we might get AGI PT six G, PT seven, like an out of the box at that level in all professions, but it's gonna gonna get interesting. That's for sure.
[00:43:16] Mike Kaput: Let's dive into some rapid fire for this week.
[00:43:18] Anthropic Plays Defense
[00:43:18] Mike Kaput: So first step, Anthropic seems to be playing some defense after we talked about it being publicly targeted by White House AI Czar David Sachs last week.
[00:43:28] So we had talked about how Sachs accused the company of driving a sophisticated regulatory capture strategy built on fear. In response to an Anthropic co-founder, Jack Clark's, public statement from an event he did where he was warning that we need to think about and regulate advanced AI more carefully.
[00:43:48] So, interestingly enough, there is kind of maybe coordinated, maybe not, not sure. This defense coming from two sides. So first, LinkedIn co-founder Reed Hoffman, posted a public thread defending [00:44:00] Anthropic. He urged the tech industry to back the good guys in ai, and he puts Anthropic at the top of that list.
[00:44:07] He is obviously the co-founder of LinkedIn, but also an early openAI's investor, and he praised Anthropic for pursuing AI the right way, thoughtfully, safely, and enormously beneficial for society. He did say some labs were disregarding safety and societal impact, and arguing that Anthropic is kind of at the forefront of this responsible innovation.
[00:44:29] Now, those comments came just as Anthropic, CEO, Dario Amed. Issued a detailed statement on the company's AI policy stance. He reaffirmed Anthropics commitment to AI as a force for human progress, not peril, while emphasizing alignment with the Trump administration's AI action plan and bipartisan cooperation on national AI standards.
[00:44:52] So Paul, especially that statement from Daria just sounded so def, like he was like, oh no, I feel like we're in trouble [00:45:00] here. And it's very full throated in support of what's going on right now with the current administration.
[00:45:05] Paul Roetzer: Yeah. So if you didn't listen to episode 174, we talked about Jack Clark, another co-founder of Anthropic, and the essay he had, he had written and sort of put it in the context of what's going on and how they're probably not making, you know, friends at the Trump administration right now.
[00:45:18] So this letter, if you read it from Dario, feels like it is written very specifically to their investors and to the Trump administration. Yes. Like it is very clearly. it seems very obvious that they've probably heard from their investors who are getting a little bit skittish that they're, they're causing so much friction at the moment and not sort of following suit with a lot of the other labs and then the administration because he explicitly calls out vice President JD Vance multiple times and actually like the first, that is, I strongly agree with VP Shady Vance's recent comments on ai, particularly his point that we need to maximize applications that help people like breakthroughs in medicine and [00:46:00] disease prevention while minimizing the harmful ones.
[00:46:02] This position is both wise and what the public overwhelm overwhelmingly wants. So he's like trying to frame this as, Hey, this is your idea. We're the ones that are like actually supporting this idea. I would definitely recommend people who are interested in this thread of ai go read this thing.
[00:46:19] Because it, it gets into a couple of other areas. I'll, I'll just call it a few highlights here. So, he mentions that there are products we will not build and, and risks we will not take even if they would make us money. So I would think this is safe to say they aren't planning on getting into the erotica game like openAI's X ai, meta character.ai and others.
[00:46:37] They're not going to be building the companion bots. I think that's probably a, a very safe bet. And then he goes into, where he says, despite our track record of communicating frequently and transparently about our position, there has been a recent uptick in inaccurate claims about anthropics policy stances.
[00:46:53] So he then breaks it into alignment with the Trump administration on key areas of AI policy, including calling out the fact [00:47:00] that they have a $200 million contract from the government on pro prototype frontier AI capabilities, for national security that they publicly praised the president's AI action plan.
[00:47:09] They just didn't agree with them on one element of the big beautiful bill. Which was state level AI law moratoriums for 10 years. But they then called and said this was bipartisan. It was a 99 to one vote in Senate that people didn't want that. So like, we're not doing anything other people aren't doing.
[00:47:26] He then went into preference for a national AI standard, and progress on ai, industry-wide challenge of model bias because some people have said that they have a very liberal leaning model, and he's like, everybody has bias in their models, but ours is no more bias than others. You're just cherry picking examples basically.
[00:47:43] And then toward the end, he said in his recent remarks, the vice president also said, quote, is it good or bad, regarding ai or is it going to help us or hurt us? The answer is probably both. And we should be trying to maximize as much of the good and minimize as much of the bad unquote. [00:48:00] That perfectly captures our view.
[00:48:01] We're ready to work in good faith with anyone of any political stripe to make that vision a reality. So this is like a. It's almost, I don't know, it's like a lifeline to the yes. The politicians like, please, like we're trying our best here. We see you. Here's what you're saying. We're, we're, we're agreeing with you.
[00:48:19] but I, don't know if it's gonna work or not, but it's, it seems like a bit of desperation honestly, here. This is a very odd character post for Dario to make, right. he's published more recently in the last 12 months. He was very, very intentionally off the grid up until like 12 to 18 months ago.
[00:48:36] This is not a normal letter from him. So this is like the, something has been unsettled either at the investor stage or at the politician days. My guess is both, and they're trying to make peace while still sticking to their beliefs and values. And I don't know where, I don't know where this goes. I could see this going bad for them, but.
[00:48:56] I don't know, maybe they find a way to sort of like thread the needle on this one.
[00:48:59] Mike Kaput: Tricky. Yeah, no [00:49:00] kidding. They're in an un an, tough spot. I mean, not only the administration, but their, we talked about so much of their staff had bought into this mission of responsible ai. It's the reason Anthropic exists.
[00:49:10] Like if they start compromising on that, they could also see an exodus of talent too.
[00:49:16] Paul Roetzer: Yeah, and again, I've said this and I'm not like, I'm not trying to make like predictions here, but like at some point if they don't see a way out, like if they're not gonna compromise and the administration decides to penalize them for not compromising, then there might be the best path out.
[00:49:34] Might be an early exit, and an acquisition at a discount. Because if like Apple or Google or somebody wanted to step in and buy Anthropic, they have an astronomical valuation, assuming that they continue to be able to grow uninhibited. If the government decides that Anthropic is not a friend. And that growth all of a sudden becomes less than what it's been.
[00:49:57] Then all of a sudden someone might swoop in and say like, [00:50:00] let's go. so have no idea. still kind of stick with my thought that I could see Anthropic eventually having to fold into a bigger company to continue competing for, for a variety of reasons. But we'll see what happens.
[00:50:14] Amazon’s Robot Workforce
[00:50:14] Mike Kaput: Next up, we have some news about Amazon.
[00:50:16] According to internal strategy, docs obtained by the New York Times, Amazon plans to replace more than half a million human roles with automation over the next decade, with the goal of aiming to automate 75% of its operations. Now the company projects, it can avoid hiring roughly 160,000 new workers by 2027 and more than 600,000 by 2033, even as sales are expected to double now, this is largely automation within their factories and facilities, and you're starting to kind of see how this.
[00:50:48] Memo and the robotics team is playing a role here in how all of this is going to play out in facilities like their new Shreveport, Louisiana warehouse, where over a thousand robots handle most packaging [00:51:00] tasks. So apparently, according to the Times employment there is already 25% lower than it would've been without automation.
[00:51:06] That's expected to reach 50% lower as more robots come online according to these memos as well. To soften the optics around all this as they aim to replace, all these factory workers or warehouse workers rather, with robots, they're encouraging using terms like quote, advanced technology or the term cobot instead of robot.
[00:51:27] So more like collaborative robot or even getting rid of using AI entirely to kind of massage how they talk about this and how it's perceived. They've even drafted community outreach plans to sponsor local events and avoiding automation talk and entirely to maintain their image of being a good corporate citizen.
[00:51:47] So that Amazon claim that is not true. They are rejecting the Times is assessment of this and of these internal memos. But I'm curious, Paul, like assuming this is not all completely made up, which I don't think it [00:52:00] is, does this tell us anything about what we can expect regarding how these companies are going to treat AI automation moving forward?
[00:52:07] Paul Roetzer: It's everything we, we assumed. I mean, it's is Amazon's history, this is like what they do. They obviously look for automation. They've been major investors in robotics for the last, you know, 15 years or more. the trick here is like we talk about Walmart a week or two ago, you know, be the largest private employer in the United States.
[00:52:24] Amazon's the second largest employer in the country. So they've, US Workforce has more than tripled since 2018 to 1.2 million people, which includes a lot of delivery drivers and people in, in warehouses. So, yeah, you mentioned this, but like, this isn't coming from like some just random leak.
[00:52:43] This is what executives told their board, like their goals are and then there, there was, in the, in the Times articles that a facilities designed for super fast deliveries, Amazon is trying to create warehouses that employ few humans, if at all. Documents show that Amazon's robotics team has an ultimate goal to automate [00:53:00] 75% of its operations.
[00:53:02] Amazon did give a statement that said the documents viewed they were legitimate, but the documents viewed were incomplete and did not represent the company's overall hiring strategy. there was a, a quote in here from Darren Ace Malu, who's a professor at MIT and won the Nobel Prize in economics economic science last year.
[00:53:24] So pretty legit person said nobody else has the same incentive as Amazon to find the way to automate. once they work out how to do this profitably, it will spread to others too. if the plans pan out, one of the biggest employers in the United States will become a net job destroyer, not a net job creator.
[00:53:43] the big catch here is like the future of work. The question starts to become, do you need an eng engineering degree to, to work at Amazon? 'cause what they're saying is, in the article that Amazon has said, it has a million robots that work around the globe and it believes the humans who care [00:54:00] of them will be the jobs of the future.
[00:54:02] Both hourly workers and managers will need to know more about engineering and robotics as Amazon's facilities operate more like advanced factories. And we've, you know, we talked about it, or maybe this was like, I, one of my cousins is an engineer, and I think we were like randomly talking about this at a Halloween party, but like, even their drivers, like imagine autonomous driving.
[00:54:22] We'll talk about Tesla's autonomous driving at the end of this episode. But imagine that, that the Amazon fleet is largely autonomous to the point where, you know, seven, 10 years from now, it doesn't even need human drivers in in the cars. And then imagine that they solve humanoid robots in that time, which are both certainly on, on the path of possibilities.
[00:54:40] And now like how many of these Amazon employees are, are drivers or contractors or drivers. And so if you don't even need that fleet, and that's largely solved by humanoid robots and autonomous vehicles talking about some major disruption in the next 10 years. Again, stuff that most people don't even acknowledge as a possibility.
[00:54:59] When you look at stuff [00:55:00] like this, it's like not only is a possibility, it's it's a probability like that there's some meaningful disruption to an entire workforce. And if Amazon does it, everybody else will do it in the supply chain. Yeah. Like everybody's gonna look at that from a manufacturing operations standpoint, logistics, delivery, transportation, like.
[00:55:17] Yeah.
[00:55:17] Mike Kaput: So it's super interesting point too about the more advanced kind of degrees or, expertise required there. I mean, it's like, you could probably argue like some of these jobs shouldn't be done by humans and are backbreaking are really hard to do, but it's like. Okay, great. We'll maybe create however many new jobs that require robotics and engineering expertise, but that's not gonna apply to all these people, right.
[00:55:38] That are getting seasonal work through Amazon. Right. So they are displaced, not, yeah. And seasonal huge changing jobs,
[00:55:44] Paul Roetzer: right? Yeah. Yeah. I don't know. Again, like this is, a lot of what we do on this podcast is just surface what's happening in hopes that other people start to think about this. Like, we are not presenting as like we have, you know, deep insights into what the economy looks like in five years that no one else [00:56:00] has.
[00:56:00] And like we've got this all figured out. We don't, we're just trying to share the information we're seeing in as objective a way as possible and like draw your own conclusions. Like they're telling you point blank what their plan is. I just want people thinking about what do we do if it's true.
[00:56:16] Mike Kaput: All right.
[00:56:16] Meta AI Layoffs
[00:56:16] Mike Kaput: Next up. According to another internal memo obtained by the New York Times, meta is laying off roughly 600 employees from its Super Intelligence Lab, which is the umbrella division overseeing their AI research and product development. So the move affects teams across fair, their existing or previous AI lab product and infrastructure groups.
[00:56:37] But it spares the core unit led by Alexander Wang, which is Mets recently appointed chief AI officer. Now Wang told staff that the goal is to reduce layers and speed up decision making so that each person can have more impact. the cuts obviously follow the years of rapid hiring as CEO Mark Zuckerberg has poured billions into ai.
[00:56:58] and while Meta [00:57:00] continues recruiting top researchers from openAI's, Google and Microsoft, this restructuring kind of shows, they're starting to consolidate around this idea of super intelligence, which is AI that could surpass human cognition. And Zuckerberg actually is part of this with this leak reaffirmed that building such systems remains one of Meta's highest priorities.
[00:57:19] So Paul, I'm curious to get your take, like what's going on at Meta, obviously they're still pursuing super intelligence. This seems like it might have disproportionately affected the kinda legacy fair people. Like what's the reason for the move is, is this a bad indicator of their ability to keep up in the AI race?
[00:57:37] Paul Roetzer: I dunno. I mean, I think the, they've made it pretty clear they, they like the idea of like really small teams that probably can keep information tight like that. They're, they probably don't want thousands of people with access to the most advanced stuff. So if they feel like they start making breakthroughs and super intelligence, if they uncover new dimensions to pursue an AI research, they want [00:58:00] to like run this thing more like a Manhattan project where there's just very few people who are in the know about things.
[00:58:05]
[00:58:05] And so by spending all this money on the high price talent, it's like, okay, let's go get the 5,000, 300 smartest people in the world that will come over for $300 million or a billion dollars, whatever we have to pay 'em. And let's like try and consolidate that. Now that strategy never works because these people bounce between labs so frequently and they all hang out at the same parties and they'll like share what they're doing.
[00:58:28] but I don't know, it just seems like this probably has more to do with consolidation of the best minds into smaller groups than it does, like AI is replacing the need for 600 people. Like I don't think this is a AI automated the job of AI researchers, so we don't need these 600 people. I would imagine openAI's, Google DeepMind, others are ecstatic.
[00:58:49] It's like great. Like we'll go, you know, pick up some talent that's been at a leading ad lab. So yeah, I don't, I don't know that there's too much more to this other than they're trying to figure out what this structure looks [00:59:00] like and they're trying to figure out how to best set up these teams to pursue these super int intelligence goals.
[00:59:05] And,yeah. Fair. Probably isn't gonna, I guess I just have to make the, like, they're probably not gonna fare so well. Like, I, I, it's like, sorry. It's like the most obvious dad joke to make here. Yeah, don't know that it's gonna bode well for the people who've been there, that we're doing things the other way.
[00:59:23] OpenAI Controversy Over Suicide
[00:59:23] Mike Kaput: All right, next up, openAI's is facing a wrongful death lawsuit that accuses the company of Weakening chatgpt Suicide Prevention safeguards to boost user engagement before the death of a 16-year-old named Adam Rains. So according to some court filings reviewed by the Financial Times, the Rain family alleges OpenAI truncated safety testing and instructed its model not to disengage when users discussed self-harm.
[00:59:49] So the changes reportedly coincided with the rollout of GPT-4 oh in May, 2024 as competitive pressures mounted by February, 2025. New internal guidelines [01:00:00] replaced outright prohibitions with softer instructions to take care in risky situations according to this lawsuit. And after these changes, Adam's daily chat volume surge from a few dozen to nearly 300.
[01:00:12] And there was a huge spike in the amount of his chats that unfortunately involve self-harm content. That's 17% of them having it in the month of his death. So the Rainn family's lawyers are arguing that company's actions were deliberate and intentional, basically marking a shift from negligence to actual willfulness.
[01:00:29] So Paul, it's just a super tragic case, but interesting to see that people are trying to hold companies like OpenAI accountable for what's happening on their platform when it comes to teams, having conversations around mental health.
[01:00:44] Paul Roetzer: Yeah, this stuff's really sad part to talk about. it's also spilling into another area that we haven't got into, which is openAI's approach to legal issues.
[01:00:55] they were getting a, a lot of bad publicity, at least on, on [01:01:00] acts. Again, know I live in this like, information bubble and maybe this stuff hasn't, carried over into the mainstream yet. But they, they have taken a very, very aggressive stance on all their lawsuits, and they hired a very aggressive law firm, maybe a collection of them, and they're going after people in pretty insensitive ways.
[01:01:18] I won't get into like all the details and stuff, but like subpoenaing families of like people whose child killed himself, right. And like all the records, because they're trying to figure out, apparently if like Elon Musk is like funding things, it's just like, it's so crazy. And so they were getting a lot of flack for their approach.
[01:01:39] And I think some of the leaders at open hour were like, oh, we weren't aware of what our lawyers were doing. And, but you don't hire these lawyers unless you expect them to be very aggressive. Is is basically what it comes down to. So this is just, it's tough on a, a lot of levels and it's a very messy part of what's going on.
[01:01:55] And yeah, it's like one of those things, like, I don't even like having to talk about this [01:02:00] stuff on the show, but I feel like we have to just to raise awareness about what's going on. So, yeah, again, just, a part of it to keep, keep in mind and if it's interesting to you and you wanna like go further on this stuff, like, you know, there's, there's a lot of emerging research and news articles and things about this part of it.
[01:02:20] So yeah, we'll do our best to kind of keep spotlight a little bit without getting too much into it.
[01:02:24] Mike Kaput: Right? Yeah.
[01:02:26] Ohio Bill Would Ban AI Marriages and High Schoolers Are Having Romantic Relationships with AI
[01:02:26] Mike Kaput: And our next topic is kind of more around AI relationships, but without hopefully as much of a kind of tragic element here. The, there's some interesting stuff going on here. So two other stories jumped out this week.
[01:02:39] First, our home state of Ohio is actually introducing a new bill from an Ohio State representative that intends to declare AI systems, quote, non sentient entities blocking them from legal personhood and prohibiting marriages between humans and ai. This proposal in this bill goes further barring AI from owning [01:03:00] property, controlling financial accounts, or serving as company officers.
[01:03:04] Now at the same time, we also saw this national survey from the Center for Democracy and Technology that has some pretty wild stats in it. And in it they surveyed a couple thousand high schoolers and parents and teachers and found that nearly one in five US high schoolers say they had, they or a friend have had a romantic relationship with ai.
[01:03:27] 43% of teens surveyed said they use AI for advice on relationships with other humans, and 42% said they use AI for mental health support or turn to AI as a friend. And over a third said it's easier to talk to AI than to their parents. So the reason we're kind of looking at both of these, Paul, is like kind of some more signals that this ai, AI in relationships is just becoming what seems to be an enormous topic.
[01:03:54] This stuff's wild.
[01:03:56] Paul Roetzer: again, we live in this stuff every day and sometimes I read these [01:04:00] things and I just like shake my head in disbelief that we're here. So. That we need to have bills about people marrying AI is, it's just crazy. Yeah. But I was trying to look and see like, so the guy leading this charge, Thaddeus Cleggett Yeah.
[01:04:15] From Lincoln County. never heard of him, but he apparently chairs Ohio's house technology and innovation committee. So it's not like, it's like just some random representative who's trying to make a name, who's barely influential enough to be on the technology innovation committee. So the fact that there's a need to even have this conversation is kind of crazy to me.
[01:04:39] Yeah. the one in five US high schoolers say they're a friend of having, that's wild. That I can't even process. Like that's, I mean, my daughter's in eighth grade, there's what, 58 kids in her class? Yeah. I try to like that just, that's nuts.
[01:04:56] Mike Kaput: yeah, that one's interesting to me too because like, could get [01:05:00] it, you could probably quibble with the idea like.
[01:05:02] Yeah. It's also people that say they know someone, so who knows like how that number, but to me, if this was 5%, you'd be admitting that like, whoa, that's like a huge
[01:05:11] Paul Roetzer: amount. I don't care how the
[01:05:13] survey works. Like are you saying yes on that? Right,
[01:05:16] Mike Kaput: right.
[01:05:16] Paul Roetzer: Who, who, like, you're getting, you'll admit it if it's your friend, but like that's, you're not getting like real data on that one.
[01:05:22] Yeah. So, ah, who knows? And then a third say it's easier to talk to Ed and their parents. That is sad, but probably true. Like that one I could actually see. Yeah. 42% for mental health support. This is why I brought this up on a number of episodes recently in the last couple months. In particular, like if you're a parent, you gotta understand this stuff and you have to talk to your kids about it because whether they form a relationship or not, I don't know.
[01:05:47] Or if their friends do, I don't know. But would they turn to it for mental health support? Right. Totally. Like I, it might be the first place they think to turn to honestly like this generation. So [01:06:00] talk to my kids about this stuff all the time. Like, again, my whole thing is I want them to understand, I want them to be prepared and I want them to be prepared to help their friends 'cause their parents might not talk to them about it.
[01:06:12] And so, like, I feel like that's the people who listen to the show. Hopefully we're all kind of in that bubble together. You may be the only one in your friend group, in your family, in your community that actually knows any of this is going on. And I feel like we all kind of have an obligation to do our best, to prepare people in our circles, to make sure we're doing as much as possible to have AI positively impact society.
[01:06:36] If we don't have these conversations, then this, this could go sideways real fast and I don't wanna see that happen. Yeah, these numbers are scary to me. Honestly.
[01:06:46] Mike Kaput: I'd actually recommend everyone go skim the reports, like 65 pages. I like read through most of it, but even if you used Notebook, LM or something like.
[01:06:53] Most of it's just charts and some of them are really eye-opening. I mean, just a couple more data points to reinforce this. [01:07:00] 70% of students in the survey said their parents have no idea how they're interacting with ai. 66% of parents said they don't know how, that they asked the same question of both. that's wild to me.
[01:07:11] And then 42% of parents and 39% of teachers said they were worried about students developing an emotional connection with ai. That's at least good, but that number should be a hundred percent in my opinion. Yeah. You know, so yeah.
[01:07:24] Paul Roetzer: It's, it's a gap
[01:07:26] Mike Kaput: to your point.
[01:07:27] Paul Roetzer: Throw it in no pickle lamb or chat you whatever you got and say, Hey, I'm a parent of a, you know, a 13-year-old, a 12-year-old, whatever it is.
[01:07:33] What do I need to know from this report? Like, highlight from me some of these key things and like, what do I do about it? Like, yeah, it's, this isn't going away. This is gonna get, you know, you can look back at this, the impact of social media and how it really changed people behavior and things like that.
[01:07:48] And it's probably the best parallel we have right now to like how AI's gonna start to affect people.
[01:07:54] So, yeah, just one of those things. You gotta kind of have the eyes wide open and don't want to deal with this reality, but this is, this [01:08:00] is what we got. We gotta figure how to handle it.
[01:08:03] Mike Kaput: All right.
[01:08:03] OpenAI Tries to Automate Junior Banker Work
[01:08:03] Mike Kaput: Next up, openAI's is training AI to do the grunt work of Wall Street's youngest bankers, and it's paying veterans to teach it. How? So? According to some documents reviewed by Bloomberg, more than a hundred former bankers from firms like JP Morgan, Morgan Stanley, Goldman Sachs are contracted on a project, Nick Code named Mercury.
[01:08:24] They're being paid 150 bucks an hour to write prompts and build Excel models for IPOs, restructurings, and buyouts. And then they get early access to the ai. They're helping create and train. Basically this workflow mirrors the analyst experience. They're doing one model a week, getting feedback from a reviewer, fixing issues, and then shipping updates to an AI model.
[01:08:48] openAI's says it regularly works with outside experts, to evaluate and improve its models in response to this news kind of breaking. And I guess my question, Paul, is like, this is a pretty interesting [01:09:00] domain specific effort on open, AI's part. Are they trying to automate a way bankers? Like why banking?
[01:09:06] Paul Roetzer: Yeah. I mean I think this just fits with what we've talked about. I, is it Meco, right. The one that we talked about that does the, I assume that's who's doing this? Like I,
[01:09:14] Mike Kaput: I made a note of that. I was curious if that was them. Yeah,
[01:09:17] Paul Roetzer: either they brought it in house or this is Meco. So we talked about 'em. I, we'll put the link in show notes.
[01:09:21] It was like, I dunno, four or five episodes ago. And this is their business models. They work with all the AI labs and then they go hire experts in different industries to fine tune models, to be expert level at whatever industry you want to take on. So we're hearing about bankers now. I can almost guarantee you they're doing this with lawyers and accountants and consultants and like
[01:09:42] Take your pick. Yeah. 'cause this is how it works. You pre-train the model, so the model kind of comes out ready to go. And then you go in and you fine tune it. You know, the reinforcement learning to do specific jobs, and that's how you automate work. now you can position it as cobots or co-pilots or like whatever you wanna [01:10:00] call it, but at the end of the day, we talked about this on a recent episode, there's $11 trillion in US wages, probably five to 6 trillion of that is for knowledge workers.
[01:10:08] The greatest way to build wealth and fund all these things you gotta do is you go automate knowledge work. You take a trillion, a half a trillion outta the market with take your pick, you go after that market. So, yeah, I mean this is like, this is it. This is the playbook. Like you're gonna hear more and more stories about this.
[01:10:27] Like, oh, openAI's is training this, or Google's training that, or Anthropics training this. Like this is, this is how it works. This is, this is how the next like three to five years goes is like you just pick industry at a time, vertical at a time, and you just go train a model to do that work.
[01:10:43] Sora 2 Roadmap
[01:10:43] Mike Kaput: All right, next up we got a peek at the Sora 2 product roadmap.
[01:10:47] So Bill Peebles, who's heading up Sora at OpenAI outlined this roadmap this week on X and mentioned a bunch of updates that are coming. So the first big one is the, addition of character cameos, which is going to let users bring their [01:11:00] pets, toys, or even generated characters into new videos. The app will also highlight trending cameos in real time, showing you what's popular across the platform with people kind of putting their own likeness into these videos.
[01:11:11] They're also introducing basic video editing tools, starting with the ability to stitch clips together. And people said the very powerful editing features are on the way soon. They're also expanding social features. They're testing community channels for universities, companies and clubs, giving users ways to collaborate beyond the global feed.
[01:11:30] they're also improving, making the feed smoother and faster, doing lighter moderation apparently, and doing some ongoing performance upgrade. And there's an Android app release that will be coming soon. So Paul, this is pretty interesting. Like they're clearly super excited about where Sora two is going.
[01:11:48] Doesn't sound like they really care about the, unroll unfurling backlash against Sora two and it's just full steam ahead.
[01:11:57] Paul Roetzer: Yeah. I don't remember if I said this on the last episode or if this was on [01:12:00] our, trends briefing with our AI mastery members last week, but am so unexcited about Sora, like I, the technology itself is incredible.
[01:12:09] Yeah. Video generation. I'm, I'm, I'm very excited about. I see enormous potential in it once we get over the, you know, the issue of stealing people's copyrights and things like that and fair use. But the idea of an AI generated, stream of stuff on an app is so unexciting to me, and I get that there may end up being a billion users on this platform over time and that it's like making meta nervous and.
[01:12:36] It's, you know, emulating TikTok, and that's obviously wild, wildly, you know, successful and popular. And I may not have the best taste when it comes to like, what works in social media. All that being said, this is so unexciting to me, and like I, we will talk about Sora because it, it is, they're putting a lot of compute power behind it and they believe it's important to something, but the idea of a endlessly [01:13:00] scrolling stream of AI generated stuff, is just so opposite of what I want to see coming from these labs.
[01:13:09] And like, if we're being led to believe this pursuit of, to benefit humanity and like, solve cancer and all these things, I get that they say this might be a part of that and they have to fund that somehow. But I really just want to talk about that stuff and not this. so again, don't know, like it's, it's interesting tech.
[01:13:30] It'll probably lead to some disruptive stuff within marketing and advertising and all that stuff. Like I don't have doubts about that. But the idea of a social channel dedicated to it is just very uninteresting to me.
[01:13:40] Mike Kaput: All right. Our final, oops, sorry. Go ahead. No,
[01:13:43] Paul Roetzer: that might be another one to pull, like, I just like how people feel about Sora.
[01:13:48] I'd be really passionate about my own here. I think that be,
[01:13:50] Mike Kaput: no, I think that'd be super interesting because yeah, I'd be curious as well as like even just the usage of it. Like how much time do you even spend on the feed itself, you know?
[01:13:58] Paul Roetzer: Yeah. All right. Maybe, maybe it's another one. We'll, next week we'll see. Alright, last topic.
[01:14:01] Tesla Autonomy
[01:14:01] Mike Kaput: Awesome. Last topic this week is about Tesla. Tesla is VP of autopilot, a shock. L Swami has basically given a really cool overview on X of how Tesla is betting everything on end to end ai. So it kind of goes into how the company is approaching full self-driving. So, unlike many autonomous systems, they have kind of a modular setup with separate components.
[01:14:22] Tesla trains a single neural network that directly maps, camera pixels, audio navigation, and motion data to steering and acceleration commands. Basically, they argue this approach captures human-like decision making. better and scales more efficiently. He also shared examples of how AI chooses, makes certain decisions while driving, which are actually really interesting, even if you're not interested in the technical pieces of this.
[01:14:46] Super cool to see that and to train and test this intelligence. Tesla uses a ton of fleet data, advanced generative 3D modeling and a neural world simulator capable of rendering entire driving [01:15:00] scenes in real time. Now, we're getting to kind of why we're talking about this because Ellis Swami says the same architecture underpins optimists, Tesla's humanoid robots.
[01:15:09] And Paul, during our prep this week, you also mentioned this had some parallels to how you see autonomy playing out in the business world.
[01:15:16] Paul Roetzer: Yeah. Just a quick note on this, and I, I've mentioned this in the past. It's probably been a little while since we talked about this, but I watched their self-driving very closely, because I think it has tremendous parallels to how this all played out in business.
[01:15:31] So for years, the way Tesla kind of assesses the improvement of the technology is like miles per intervention or disengagement. So when the human driver has to take over, and I've had, this is my third Tesla now, so I've been monitoring the self-driving for seven years and it, it was like very incremental in its improvements.
[01:15:53] And it always had these like really annoying things where you're constantly having to intervene or disengage the full self-driving. [01:16:00] I would say I've gotten now to the point where it's, it's probably about 95% of my driving is full self-driving with no disengagements. Now there will still be a couple random ones, but it's starting to do things like you just wouldn't expect.
[01:16:14] Like when I was, coming back the other day, like it stopped for a squirrel, like, and the squirrel wasn't even in the middle of the road. The squirrel was running through the grass by the curb and the car slowed itself. So it sensed or saw. A small object coming that wasn't in its way.
[01:16:31] Yeah. And
[01:16:31] actually anticipated that it might run into its way.
[01:16:34] First time I've seen that happen. There's stories of it like routing around the puddles and things like that. Like you're just starting to see it do things you wouldn't expect to where you less and less, you're still there, still hands on the wheel, but like less and less do I actually have to disengage the thing.
[01:16:50] And I think that's how AI agents will work in business. We will, you're gonna be disengaging a lot. You're gonna always be kind of watching 'em be like, you're doing the wrong thing. And like you're gonna say, stop and [01:17:00] like restart this, or no, you gotta go this path. And then I think over time, profession by profession, you're just gonna start taking your hands off the wheel a lot more and you're just gonna like.
[01:17:09] Watch the thing go and like, wow, I haven't had to touch it in an hour and a half, like it's doing the thing I wanted it to do, and I haven't disengaged it at all. So actions per disengagement is something I've been talking about for a couple years when it comes to agents. And so I think that as it starts to find its way into the software we all use, or the AI systems we use, or the browsers we all use, there's gonna be a lot of disengagements in the next year or two.
[01:17:33] And then profession by profession, maybe it's gonna take reinforcement learning like the banking stuff. With ChatGPT and OpenAI, you're just gonna start seeing fewer disengagements and that's when jobs really start to transform. And so Tesla is so far ahead technologically from other cars. I've driven a number of other cars recently, like testing the technology.
[01:17:52] It, it's like seeing the future when you get into a Tesla. It is so far beyond Cadillac, Audi and BMW, [01:18:00] like not even comparable. Yeah. And I think that's what happens here is you're gonna have these platforms like a Gemini or Che think it's so far ahead and the people who are using that tech are seeing the future while everyone else is like thinking like, you know, automated cruise control is like futuristic.
[01:18:16] It's like you have no idea how, how far behind that is. I think that's what happens here. So yeah, just keep, we talk about Tesla a lot and in part it's because I think what they're doing in self-driving translates over to automated work, very clearly. And it's, it's, it helps us get like a frame of reference for how it's gonna happen.
[01:18:35] Mike Kaput: I love that. Paul, thanks for unpacking another busy week in ai. Appreciate it. Yeah.
[01:18:40] Paul Roetzer: Alright. And thanks everyone for joining us. again, check out me con.ai if you want to grab those on demand for 2025, get those 20 talks and then stay tuned next week. Hopefully we'll launch on AI Pulse. 'cause I getting, every week we do this, like, ah, I want the feedback now I wanna know what people are thinking.
[01:18:56] Are we, are we crazy or is like everybody else feeling this? Alright, thanks Mike. [01:19:00] Thanks Paul. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.AI to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters.
[01:19:14] Downloaded AI blueprints, attended virtual and in-person events, taken online AI courses and earned professional certificates from our AI Academy and engaged in the marketing AI Institute Slack community. Until next time, stay curious and explore ai.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.
