As AI races toward full automation, billionaires, policymakers, and everyday workers are colliding in a new kind of power struggle.
In this episode, Paul Roetzer and Mike Kaput tackle the biggest AI shifts shaking business, politics, and society. From OpenAI’s Dev Day to a new Senate report warning that automation could wipe out 100 million US jobs, the conversation examines what’s really coming for the workforce.
Listen or watch below—and see below for show notes and the transcript.
00:00:00 — Intro
00:04:12 — OpenAI Dev Day
00:10:27 — AI Is Getting More Political
00:27:05 — Sora Copyright Drama Continues
00:37:05 — Gemini Enterprise
00:43:10 — Gemini for Home
00:47:35 — AI’s Impact on Job Hunting and Hiring
00:50:10 — AI Making Us “Professional Generalists”
00:56:47 — AI Product and Funding Updates
OpenAI Dev Day
At OpenAI’s 2025 Dev Day, Sam Altman made it clear: ChatGPT is no longer just a chatbot; it’s becoming the operating system for the AI era.
The company unveiled a sweeping set of updates that transform ChatGPT into a full-fledged platform where developers can build and distribute apps, much like an AI-native App Store. With the new Apps SDK, users can now interact with apps from companies like Coursera directly inside a chat, creating, learning, and transacting in one seamless conversation.
OpenAI also introduced the Agent Kit, a suite of tools for building autonomous AI agents that can perform real-world tasks, from managing business expenses to automating entire workflows. Altman framed the change as moving from “systems you can ask anything” to “systems you can ask to do anything for you.”
The day’s surprise came from Altman and former Apple design legend Jony Ive, who revealed a secret three-year collaboration on a new line of AI hardware.
AI Is Getting More Political
A new Senate report has delivered one of the starkest warnings yet about the impact of artificial intelligence on American jobs.
According to an analysis led by Senator Bernie Sanders and the Senate Health, Education, Labor, and Pensions Committee, AI and automation could eliminate nearly 100 million U.S. jobs over the next decade. The study, conducted with ChatGPT-assisted modeling, projects that up to 89% of fast-food positions, 64% of accounting roles, and nearly half of trucking jobs could vanish as “artificial labor” reshapes the economy.
Sanders argues the technology’s current trajectory will allow corporate America to wipe out tens of millions of decent-paying jobs, cut labor costs, and boost profits. The report cites Amazon and Walmart as examples, noting their expanding use of automation alongside sweeping layoffs.
Democrats are calling for major policy interventions, including a 32-hour workweek, profit-sharing, and a “robot tax,” to ensure AI’s gains don’t concentrate further wealth among billionaires.
Republicans, by contrast, warn that heavy regulation could slow innovation and give China a competitive edge.
Sora Copyright Drama Continues
OpenAI now claims it wasn’t ready for the storm of controversy around its release of Sora 2, its new AI video generator.
The Verge reports that “OpenAI wasn’t expecting Sora’s copyright drama…and it didn’t realize people might not want their deepfakes to say offensive things.”
CEO Sam Altman conceded the company “didn’t anticipate” how visceral some of the reactions would be to Sora 2’s ability to generate copyrighted material or turn you into a deepfake that can be used across videos.
Now, the company is dealing with the fallout.
In the past week, The Motion Picture Association blasted OpenAI for putting the burden on studios to opt out of infringement, demanding “immediate and decisive action.”
CAA, one of the industry’s most powerful talent agencies, issued a statement saying Sora 2 posed “serious and harmful risks” to their clients’ intellectual property, and that control and compensation are “fundamental rights.”
Individuals have also spoken out. Zelda Williams, daughter of the late Robin Williams, condemned Sora 2 video recreations of her deceased father that people were creating.
OpenAI says it will soon give rights holders more control over how their characters appear, but for many in entertainment, the damage is already done.
This episode is brought to you by AI Academy by SmarterX.
AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. You can get $100 off either an individual purchase or a membership by using code POD100 when you go to academy.smarterx.ai.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: If you're building AI to make a bunch of money, you can go after the software industry and say, let's just replace the need for this software. But the bigger opportunity is to go after the labor itself, to replace the need for accountants and auditors and lawyers and customer service reps. So. You get into this debate like, well, would people actually do that?
[00:00:18] Would companies actually just go straight after the labor? Yes, a hundred percent. They do. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host.
[00:00:38] Each week I'm joined by my co-host and marketing AI Institute Chief Content Officer Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all.[00:01:00]
[00:01:01] Welcome to episode 173 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording. On an unusual date and time. We are doing this on Friday, October 10th. Normally, if you're new to the show, you might not know this, we record on Mondays, but, by the time you listen to this, when it comes out on October 14th, right, Mike?
[00:01:24] October 14th. Yes. We will be kicking off MAICON our, our annual conference that we've been talking a lot about on the show. So the 14th is the first day. so I will be, again, when you're listening to this, probably in, AI council meeting. We have Google Cloud, we partner with them on an AI council.
[00:01:44] So we have a meeting that morning and then that's followed right by the workshops where Mike and I are both teaching a workshop Tuesday afternoon. So Monday for us is all about MAICON Prep. So we decided let's squeeze in a podcast episode and record it on Friday. So here we [00:02:00] are. It has been a busy week and we had an openAI's dev day.
[00:02:04] more it's the Sora drama with their copyright issues. I still think there might be some more news later today, so I don't know. We'll get to that on the episode after this one when we're recovering from macom. So, this episode is brought to us by AI Academy by SmarterX. We've been talking a lot about this as well.
[00:02:23] AI Academy is something wereimagined in August. We relaunched it with all new courses and certificate programs to help individuals and teams accelerate their AI literacy and transformation. These include, course series like AI fundamentals, piloting ai, scaling ai, as well as industry and department specific collections.
[00:02:43] And so, Mike, I know you just, led the charge on creating AI for healthcare, one of our AI for Industries course series. So why don't you give us a little bit of background on that one? And again, you can learn about all of this at Academy dot SmarterX.ai.
[00:02:56] Mike Kaput: Yeah, Paul. So, AI for healthcare is one of, like you mentioned, the [00:03:00] industry specific courses.
[00:03:01] And what we do here is we kind of. Tee up at a high level how AI is impacting a specific industry, and then go deep into how people in a certain industry can use our proven frameworks to transform their work and their organization using ai. So we go through a very specific methodology on how you, as a healthcare professional, need to be thinking about and approaching AI in your own role.
[00:03:27] And you'll come away with all sorts of use cases and ideas for tools on what, how you can actually achieve transformation in your work. So I personally have worked with a number of healthcare organizations on AI transformation. We have a ton of healthcare organizations in our audience and in our set of learners with AI Academy.
[00:03:45] So it was just a natural fit, really great industry to kind of unpack because there's so many exciting things happening in it with ai.
[00:03:53] Paul Roetzer: Yeah, this is great. I'm, I'm excited for this one and all, all the other ones to come. So again, you can check out Academy dot SmarterX [00:04:00] AI to learn more. All right, mark, let's get into it with the openAI's dev day that happened.
[00:04:06] I guess it'll be next week by the time or last week by the time people listen to this, but that started the week off for us this week.
[00:04:12] Mike Kaput: Alright, Paul. So yeah, openai's 2025 Dev Day happened and at it, Sam Altman made it clear that ChatGPT is no longer just a chat bot. It's becoming the operating system for the AI era.
[00:04:25] So at this event, the company unveiled a bunch of updates that basically are starting to transform chat GBT into a fully fledged platform where developers can build and distribute apps much like an AI native app store. So they announced the new apps SDK, where users can now interact with apps from companies like Coursera directly inside a chat, meaning they can create, learn, and transact in one seamless conversation.
[00:04:51] The other big announcement was Agent Kit, which is a suite of tools for building autonomous AI agents. That can perform real world [00:05:00] tasks from managing business expenses to automating entire workflows. Altman framed this change as moving from systems. You can ask anything to systems you can ask to do anything for you.
[00:05:12] There was also a bit of a surprise with a fireside chat between Altman and former Apple Design legend, Johnny Ive, and they revealed that we know they've been collaborating on a new line of AI hardware, but they revealed that they have been doing that for three years now. So Paul, all the updates here are obviously squarely focused on developers.
[00:05:32] That's no surprise. But maybe walk us through why do these matter for the non-developers listening?
[00:05:38] Paul Roetzer: Yeah, I mean the agent builder, it sounds like is, is still requires quite a bit of technical capability. I don't think the average person's gonna go in there and start, you know, building agents. But a few things that jumped out to me, one, they, they're now latching onto the 800 million chat GBT users.
[00:05:53] There's sometimes, not. Direct on how many users they have, but that was a talking point all [00:06:00] week. I listened to a couple of podcasts where they were using that number as well. So, you know that that's a, it's a big number. It's a big number of active users. Right. And so when they do things like this, when they introduce, you know, agents and apps and the connectors, they're doing it to a very broad audience, which means it can start to change the way that people behave.
[00:06:19] People build things, people get, you know, productivity done. the apps themselves, if you're curious, it's in settings and then you go to apps and connectors and that's where you can see them and turn them on. I assume they will make a more intuitive interface for that. Probably more along the lines of like how the GPTs work, where you can go in and search the marketplace, I guess.
[00:06:40] . But for these initial eight or 10 partners that they had at launch, including booking.com and Canva, and Coursera and Expedia, Figma, Spotify, Zillow, I think are some of the ones I saw. you just go in and you connect 'em. This, this was brought to my attention. I saw this on LinkedIn. it, Natalia I [00:07:00] think was, the lady that had tagged me in this, one of our, our listeners.
[00:07:04] And, you know, it's something we talk about a lot on the show about be careful before you connect, before you add these apps to your, your account. You have to understand what you are giving up when you do this. So my understanding right now is this is only on personal accounts. You can use these apps.
[00:07:24] I, they don't said they said in the release later this year, we'll launch apps to chat GBT, business Enterprise and EDU. So if you're in your business account, you're looking for this, you, you would might only see settings, connectors. You might not see the apps there. But in the personal account, I think is where this lives right now.
[00:07:39] . so when you go to connect one, so I went in and I was like, all right, let me just see what this looks like. And I chose Coursera to, to see, so the idea there is that Azure. Interacting with ChatGPT, you may come across like, oh, I wanna learn about this topic, and they may recommend to you a Coursera course.
[00:07:58] That's kind of how it would be [00:08:00] integrated to do that sort of thing. Data is getting shared both ways. . And so, again, sort of like user caution, you have to understand what the companies you're connecting to your ChatGPT account get access to what you're giving up. So when you go to, to connect one of these apps, it says you're in control chat.
[00:08:21] GPT always respects your training data preferences. Apps may introduce risk ChatGPT is built to protect your data, but attackers may attempt to use ChatGPT to access your data in the app or use the app to attempt to access your data in ChatGPT
[00:08:34] Mike Kaput: .
[00:08:34] Paul Roetzer: Data shared with this app. Now this is the important part.
[00:08:38] By adding this app, you allow it access to one basic information typically shared when you visit a website such as your IP address and approximate location. And two. Data from your chat GBT account, including from conversations and memories. Our policies require that apps only access relevant content to respond to your requests.
[00:08:59] So you're [00:09:00] giving them access in theory, to anything you do and say, and any memory that chat GBT has about you because that enables them to then serve up more targeted recommendations and eventually adds to you. So again, it's, you know, cool tech, but as this tech moves forward, we just always have to keep in mind what is it that we're actually giving up and do we trust the third parties that we're connecting to?
[00:09:24] And not only that, when it starts to get into the business situation, Mike, where we can turn these things on for employees, what is the risk of the data getting leaked out now? and it just compounds. And this is why, you know, it's always important to have it involved, have legal involved to have the right parties at the table when you make decisions from a business perspective about what connectors and what apps you're gonna.
[00:09:48] Enable.
[00:09:49] Mike Kaput: Yeah, that's a super important reminder. And I know for a fact, unfortunately there's plenty of companies we've encountered or work with who don't have the, even the beginnings [00:10:00] of a policy or a plan for how you're supposed to be using the tools that they're actively turning on for employees, right?
[00:10:07] Paul Roetzer: Yeah. And it's just a, it's a gray area right now, like it's hard to know and it becomes more complex to understand even which ones do have access to your stuff. So I mean, even when I was trying to figure this out, I was going into our own account. I'm like, well, what do, who, what is currently connected?
[00:10:21] What, right. What usage is there with these things? So yeah, it's just really important to keep in mind.
[00:10:27] Mike Kaput: Alright, our next big topic this week is a new senate report is delivering one of the starkest warnings yet about the impact of AI on American jobs. So according to a new analysis led by Senator Bernie Sanders and the Senate Health Education, labor and Pensions committee.
[00:10:44] AI and automation they estimate could eliminate nearly 100 million US jobs over the next decade. This study, which we'll link to in the show notes, was conducted with ChatGPT assisted modeling, interestingly enough, and predicts that [00:11:00] up to 89% of fast food positions, 64% of accounting roles and nearly half of trucking jobs could vanish as what they call artificial labor, reshapes the economy.
[00:11:11] Sanders then, wrote an op-ed in Fox News, arguing that the technology's current trajectory will allow corporate America to wipe out tens of millions of decent paying jobs, cut labor costs, and boost profits. The report itself actually cites Amazon and Walmart as examples of this, noting their expanding use of automation alongside sweeping layoffs.
[00:11:33] Democrats in response here are calling for major policy interventions, including a 32 hour work week. Profit sharing and what they call a robot tax to ensure that AI's gains don't further concentrate wealth among billionaires. Republicans generally, by contrast, are warning that heavy regulation could slow down innovation and hand China and edge in the AI arms race.
[00:11:59] So Paul, I [00:12:00] mean, this couldn't come at a, a more, interesting time. You've been saying for a while now that AI is going to get political, especially as we head into next year's US midterms. Now, this really seems like Democrats are putting a stake in the ground on ai.
[00:12:16] Paul Roetzer: Definitely what we've been anticipating, I feel like the economy has become a weekly topic and I 'm not so sure that politics isn't gonna become, again, if you're new to the podcast, our approach is, political neutrality.
[00:12:30] Like we are all about what is the relevance of AI to the conversation regardless of what side of the aisle it's coming from. It's just. Trying to be as fact-based and neutral in all of this as we possibly can and present the information. So my, feeling has been for a while that the 2026 midterms in the US that AI was going to become a major campaign issue.
[00:12:51] And this is further evidence to me that, that that is definitely going to be, and this is the kind of language you would use when you're trying to [00:13:00] gauge how interested people are and can we move votes as a result of the conversation, because if they don't think it can move votes, they're not gonna talk about it.
[00:13:09] So again, both sides of the aisle. So in this case you have Bernie Sanders independent, who Cox is, I think is a Democrat, being very direct and I , oddly enough to me, this is on Fox News. Yeah. Like this is a, this is on Fox. So, a predominantly Republican leaning, media outlet that, that he is, you know, coming and saying, Hey, listen, it's gonna come after all of us, all of our jobs, regardless of Republican, Democrat, or somewhere in the middle.
[00:13:37] So. A couple of interesting segments of this. So I'll just read directly from the editorial that, that Bernie Sanders wrote. Everybody agrees that AI and robotics are going to have a transformative impact on our country and the world. There are strong disagreements, however, as to what those impacts will be, who will benefit from them and who will be hurt.
[00:13:55] One thing is for sure, this is an enormously important issue that has gotten the kind of dis, has not [00:14:00] gotten the kind of discussion that it deserves, which we obviously agree with the artificial intelligence and robotics being developed by these multi-billionaires. So he was basically talking about Bezos and, Musk and those people will allow corporate America to wipe out tens of millions of decent paying jobs, cut labor costs, and boost profits.
[00:14:18] we all, we all want more startup companies and small businesses. keep in mind again, if you don't know the data, like 99% of businesses in the US are small businesses. . They don't employ, like half of the employees work for bigger companies, but the vast majority of companies in the US it's like, I dunno, 26 million or something like that are small businesses.
[00:14:37] So we need them for the economy to be strong. so he says, we all want more startup companies and small businesses, but for workers that will mean very little if half of all white collar, entry level jobs are eliminated over the next five years, he's citing Dario Ode, the founder of philanthropic for that data.
[00:14:55] 'cause honestly, it's not just economics work, whether being a janitor or a brain [00:15:00] surgeon is an integral part of being human. The vast majority of people want to be productive members of society and contribute to their communities. What happens when that vital aspect of human existence is removed from our lives further?
[00:15:12] And now we get into some pretty deep stuff. The rapid de, developments in AI will likely have a profoundly dehumanizing impact on all of us. In many ways. It will actually redefine what it means to be human, fundamentally alter our relationships to each other, and the very nature of what we call society.
[00:15:30] Goes on to finish. Bottom line A and robotics will bring a profound transformation to our country. These changes must benefit all of us, not just a handful of billionaires. This is a campaign speech. Yeah. Like, I mean, you can see this three months from now on the campaign trail being echoed by, by people as they start to see, can we, again, can we move the vote with this talking point?
[00:15:55] this brings me back, Mike, to a topic that we talked about on episode 1 [00:16:00] 49 in June about this whole idea of, as we pursue AGI, as we, you know, start to look at this and the models start to get more and more advanced, what is it going to, you know, impact when we look at the total addressable market of salaries in the United States.
[00:16:16] So we talked about this last week on episode 1 71, the size of the economy. . and where the incentive lies to build AI into the economy. So we mentioned at that point we cited, what was the guy's name? Alex Rumpel from a 16 Z. He cited the worldwide SaaS market at about 300 billion per year in annual revenue and the labor market at about 13 trillion.
[00:16:40] Just to give some context to what that means. Now, the numbers vary depending on what you look at, but roughly 300 to 500 billion a year in annual revenue in the SaaS industry. So to make that tangible, Salesforce was 38 billion last fiscal year. Adobe 21.5 billion, ServiceNow, 11 billion, Shopify 8.8 Workday, [00:17:00] eight HubSpot, 2.6 billion.
[00:17:02] So we think about this large 300 billion plus market, and you start breaking it by individual companies, you can see the revenue that's generated. And so when you're building AI to go after jobs, you look at the SaaS companies. . But then when you look at the labor market, and that's anywhere between 13 and probably 18 to 20 trillion in the us.
[00:17:21] We're talking about registered nurses, roughly 300 billion in annual salary. Software developers, 200 billion accountants and auditors, 130 billion lawyers, 130 billion customer service reps, 120 billion sales managers, 90 billion. So you can, if you're building AI to make a bunch of money, you can go after the software industry and say, let's just replace the need for this software.
[00:17:43] But the bigger opportunity tenfold, bigger opportunity in, in some cases, you could argue, is to go after the labor itself to replace the need for accountants and auditors and lawyers and customer service reps. So you get into this debate like, well, would people actually do that? Would companies actually just [00:18:00] go straight after the labor?
[00:18:01] Yes, a hundred percent they do. So in episode 1 45, we talked about a company named Mechanized. we said, okay, so it was founded April 17th of this year by, AI researcher Toay, Besser Olu. And, the startup's goal, according to Bests, RO Glue is the full automation of all work and the full automation of the economy.
[00:18:25] When they announced the company, mechanize the startup focus on developing virtual work environments, benchmarks, and training data that will enable full automation of the economy. The investors, GitHub's, CEO, Nat Friedman, tech investor, Daniel Gross, Stripe, co-founder, and CEO Patrick Collison, podcaster.
[00:18:42] Dhar Kesh Patel, who we talk a lot about on the show. Google Chief Scientist, Jeff Dean, Choto Douglas, who we talked about last week, philanthropic guy and, a head hedge fund guy. so why do I bring up Mechanize again from episode 1 45? Well, because they published a new blog post this [00:19:00] week that says the future of AI is already written.
[00:19:03] Hmm. So I'm gonna, I'm just gonna read some excerpts here. Mike, if you want to go down this path, we can talk a little bit more. Otherwise I'll just leave it with people to like. Connect your own dots here. So this is a blog post from this week from Mechanize who has already told you, back in April, they wanna fully automate the entire economy and go after that 13 to 18 trillion a year.
[00:19:24] which by the way, is just in the us. they, by the way, mechanize says it's 18 trillion a year in the US but worldwide it's 60 trillion. . So that's the market they're going after. Okay. So here's the blog post from this week. The future of AI is already written. These are just a few excerpts.
[00:19:39] Innovation often appears as a series of branching choices. What to invent, how to invent, and when in our case, we are confronted with a choice. Should we create agents that fully automate entire jobs or create AI tools that merely assist humans with their work. Upon closer examination, however, becomes clear that this is a false choice.
[00:19:59] Autonomous agents [00:20:00] that fully substitute for human labor will inevitably be created because they'll provide immense utility, that mere AI tools cannot. The only real choice is whether to hasten this technological revolution ourselves or wait for others to initiate it in our absence. The future course of civilization has already been fixed, predetermined by hard physical constraints combined with unavoidable economic incentives.
[00:20:25] Whether we like it or not, humanity will develop roughly the same technologies in roughly the same order, in roughly the same way, regardless of what choices we make now. Then they provide a bunch of historical context to like basically say it's okay that we're doing this because it's gonna happen all the time anyway.
[00:20:42] People may try to steer the stream by putting barriers in the way, banning certain technologies, aggressively pursuing others. Yet these actions will only delay the inevitable, not prevent us from reaching the valley floor. We have far less control over our technological destiny than is often thought. We did not design this tech tree.
[00:20:59] It [00:21:00] arose from forces outside of our control. The evidence for this lies in two observations. First, technologies routinely emerge soon after they become possible, often discovered simultaneously by independent researchers who never heard of each other, and they give a bunch of examples there. Second, isolated societies converge on the same fundamental technologies when facing similar problems and resco resource constraints.
[00:21:23] We go on to say we do not control our technological trajectory. Full automation is inevitable. AI presents a powerful case for technology that can't easily be constrained. We on to say yet, there are many who believe or at least hope that we can seize the benefits of AI without making human labor obsolete.
[00:21:41] They imagine that we can just build ais that augment or collaborate with human workers, ensuring that there's always a place for human labor. These hopes are unfortunately mistaken in the short run. AIS will augment human labor due to their limited capabilities. But in the long run, ais that fully substitute for human labor will [00:22:00] likely be far more competitive, making their creation inevitable.
[00:22:03] And then they say full automation is desirable. Hmm. Even if you accept the inevitability, full automation, you might still think that we should delay this outcome in order to keep human labor relevant as long as possible. This sentiment is understandable, but ultimately misguided. The upside of automating all jobs in the economy will likely far exceed the cost, making it desirable to accelerate rather than delay the inevitable, and then they end with want to help accelerate the inevitable.
[00:22:29] We're hiring software engineers. So again, I share all of this in to provide context to everyone that whether you believe it's inevitable or not, there are a lot of very powerful investors and very powerful leaders who do see it as an inevitability, that AI will, in the coming decade, most likely be able to automate basically every job.
[00:22:54] And they want to get there first. They, they assume it's gonna happen anyway. And so we might as well get there [00:23:00] first, either for money and power or because they believe they have the better chance of shepherding it in, in a positive way for humanity. So they have some belief that it's gonna happen. Like let's go get there.
[00:23:13] Like an Anthropic kind of mindset. Let's go get there. 'cause like if we build the more powerful ai, then we can kind of figure out how to help society adjust to this. openAI's has similar mindset when you listen to Sam talk, it's kind of this, yeah, it's gonna happen. Like let's figure out how to benefit humanity and maybe all the jobs go away, but like, we'll figure it out.
[00:23:32] So again, like our, part of our goal on this podcast is to bring reality of what different perspectives are. . And there is a growing faction of AI leaders who probably agree with mechanize, but won't say it as directly as Mechanize says. It says it.
[00:23:50] Mike Kaput: Wow, that's a, a terror. A little terrifying, but really important to talk about.
[00:23:56] I think that what jumped out at me too, in its words repeating is the [00:24:00] fact that this Bernie Sanders kind of manifesto appeared in Fox News, and you are absolutely right. It's a campaign speech. It also strikes me they might be trying to peel away people from, you know, the, a different political side of the aisle.
[00:24:13] Yeah. This is an issue that could be used as a wedge in typically people that might typically never support someone like Bernie Sanders.
[00:24:20] Paul Roetzer: Correct. Yeah. And again, like you're, you're trying to find the topics that move votes, and we've talked about the complexity of the current administration right now, for them to admit that this is reality, that there, there's a chance in the next like one to three years we have like total disruption of the job market that's on their watch.
[00:24:45] And so the likelihood of them accepting that when they've already gone all in on ai, like this administration is a hundred percent in on ai. . Build it as fast and as powerful as you, as you possibly can so that we can win against China. That is the [00:25:00] mantra. If the byproduct of that is job disruption, which it likely will be to some degree, how can you have it both ways?
[00:25:09] So if you're the other side and you're saying, well, let's let, let's kind of take the opposite angle here. Let's talk about the job loss. I could see this getting, I think it's a big issue, but I could see it getting completely sensationalized for political purposes as well. For sure. Yeah. It's so, yeah.
[00:25:24] Eyes wide open kind of thing, I
[00:25:26] Mike Kaput: guess. Yeah. Yeah. It's interesting to note too, a, any administration, regardless of political leaning. You might think, well, why would they want all these jobs to go away and wouldn't that hurt them? It's like, well, they're paying attention to the stock market, not necessarily jobs in the stock market may very well go parabolic if you cut the, if you cut labor in this way,
[00:25:45] Paul Roetzer: unless you dramatically accelerate GDP, like, right.
[00:25:49] This is why. Yeah. I mean this is a wildly complex issue. We are not here to be the experts on every aspect of this. We are here to raise awareness for, for what is the [00:26:00] conversations that are going on. Yeah. So regardless of what you do as a listener, maybe you are an expert in, in the economy and like you're thinking deeply about this, or maybe you're the CEO of a law firm and you're thinking, do I need associates anymore?
[00:26:11] Like the whole point is to make people aware of this and to make you realize going into 2026, this is gonna be likely an issue. My guess is this will pull, well, they will find that people respond to job loss and the dehumanizing of society. Like those are some pretty powerful talking points. And my guess is the polls will show this is a good direction to push.
[00:26:36] Mm. and so you will get the extreme basically going into next spring.
[00:26:41] Mike Kaput: I feel like we'll be revisiting that prediction shortly here and saying, you were right.
[00:26:46] Paul Roetzer: We will see. I don't wanna be right. Like there, there's plenty of things we say in the show where I don't want to be. Right. Like I don't want the job disruption.
[00:26:53] I don't want it to be a major political issue. But yeah, I mean, you just kind of can look out ahead and some of this stuff [00:27:00] becomes relatively obvious, where we're gonna go.
[00:27:05] Mike Kaput: excellent. So our third big topic this week is that we're hearing now that openAI's is now claiming that they weren't ready for the storm of controversy around the release of Sora 2, their new AI video generator, which we covered in the last episode.
[00:27:21] The Verge reports that quote, openAI's wasn't expecting so's copyright drama. And it didn't realize people might not want their DeepFakes to, you know, be in videos or say offensive things. And CEO Sam Altman conceded that the company quote didn't anticipate how visceral some of the reactions would be to Sora tos ability to say, generate copyright material, or turn you into your own deepfake, which others can use in videos.
[00:27:47] Well, now the company is dealing with even more fallout from Sora too. So in the past week, the Motion Picture Association blasted openAI's for putting the burden on studios to opt out of copyright infringement, and they demanded [00:28:00] immediate and decisive action from the company CAA. One of the industry's most powerful talent agencies issued a statement saying, Sora two posed serious and harmful risks to their client's intellectual property.
[00:28:12] And that control and compensation are quote, fundamental rights. Individuals have also spoken out. Zelda Williams, the daughter of the late Robin Williams, the comedian, an actor condemned Sora, two video recreations of her deceased father. People are apparently creating these and sending them to her, and she kind of had some serious backlash against that.
[00:28:33] Opening eye does say it will soon give rights holders more control over how their characters appear and their likenesses appear. But it seems like for many in entertainment, it's only been a week or so, damage is already done. And Paul, I guess what jumped out at me is like, it's kind of baffling that openAI's didn't think deeply about the possible and like not that hard to figure out reactions to Sora too.
[00:28:56] Like are they being completely honest here?
[00:28:58] Paul Roetzer: I don't know. We talked, so [00:29:00] we talked at length about this in episode 1 71, right? Right. Mike? No. 172, right? . Is it
[00:29:05] Mike Kaput: 172? Yeah.
[00:29:06] Paul Roetzer: Okay. 172. So, you can go back and listen to that if you, if you miss that episode, we get into the legal side of this and all that.
[00:29:14] You know, I don't want to like repeat a bunch of stuff we said last week. I do find it very hard to believe that they couldn't predict the anger of rights holders. I mean, they dealt with this with their voice thing. They dealt with this with Sora the first time around. Like, this isn't new. It's not like this is the first time opening.
[00:29:32] I did something that trained out a bunch of copywriter material and then people weren't happy about it. So, yeah, I think I used the word disingenuous last time. Like, I just, I can't believe that they didn't see this coming. So, the context I'll add this week is I listened to a podcast on a 16 Z with Sam Altman.
[00:29:51] it was pretty far reaching podcast. There was quite a bit covered. I, we will put the link in the show notes, but he got into like his approach to like [00:30:00] doing these deals that have been going on the, his thoughts on ai, slop and Sora, the copyright thoughts, infrastructure bet that they're making, how they see one to two years out in the tech that other people don't know yet, which we always say like, they know what's coming, you don't see it.
[00:30:17] but specifically on the rights holders, he was pushed on this and he said, forced to guess from the position we're in today. I would say that society decides training is fair use, but there's a new model, meaning training on other people's ip . Is fair use. but there's a new model for generating content in the style of, or with the IP of.
[00:30:40] something else. So like a human author can, go and read a novel and get some inspiration, but you can't reproduce the novel on your own. That's kind of like the connection he's making, which is a pretty standard argument in, in the ai, model and company case. So then, Ben asks him, the interviewers says, you talk about Harry Potter, [00:31:00] but you can't like, spit out a Harry Potter movie, basically.
[00:31:03] And Sam says, yes. Although another thing that I , I think will change in the case of Sora, we've heard from a lot of concerned rights holders and also a lot of rights holders who are like, my concern is you won't put my character in enough. Now again, sometimes you listen to Sam and it's like, oh man.
[00:31:20] Like I , I don't know what the communications, team looks like at OpenAI. they, they don't, I can say right now, like they don't probably have very much influence in how Sam, Responds to questions. as someone who did PR for a good portion of my early career, I sometimes you can tell when people are just kind of ad-libbing responses.
[00:31:43] The rights holders thing is one where they're just kind of making it up as they're going and like saying whatever comes to their mind. So he said, they are getting calls from people who want their characters used more in Sora. He said, I want restrictions for sure, but I have this character and I don't want the character to [00:32:00] say some crazy offensive thing, but I want people to interact with it.
[00:32:03] That's how they develop the relationship and that's how my franchise gets more valuable. And if you're picking, his character over my character all the time, I don't like that. So I can completely see a world where subject to decisions of a right holder has they get more upset with us for not generating their character often enough than too much.
[00:32:21] And I was just like, come on man. Like. I get it. Like if we were talking about a, a, an emerging character Right. Or ip and like, you want to get that character, like Mark Cuban allowed himself to be cameo this week. I saw that. Yeah. And everything shows up as an ad for his company, which is hilarious. Yeah.
[00:32:37] Like very Mark Cubans to like pull an idea like that up. but there is no way Disney, like all these brands are calling and saying, oh yeah, like, bastardize our IP more. we really, so, I don't know, like, again, it's like a, it just doesn't seem very honest. Like, all this being said, [00:33:00] like, we're fans of the tech, like the tech's incredible.
[00:33:02] I could see it being transformative for social media and business. Like this is not a criticism of the technology itself or where it goes. We're in a very messy. Stage when it comes to intellectual property law and what the labs are gonna do to push the limits in the, in the near term. I think at one point I responded to somebody, there was like a VC or somebody who tweeted something about how brilliant it was to just put this out there knowing the backlash was coming, but they would seed it and they would get to them on the, like, this was just a genius strategy.
[00:33:35] And I said, it's unfortunate that the smartest strategy means the most unethical strategy. Right? So we're not debating what they did worked. We're not debating the tech's. Incredible. I'm just saying it's unfortunate that the point we've arrived at in society is that the AI labs have to do the most unethical things all the time.
[00:33:55] Because if they don't, it's like the mechanized argument. Well if we don't do it, meta's gonna do it. So we gotta do it [00:34:00] first. And it's, it's not gonna change. Like this is now the world. We're in this race to constantly wanna up each other, but we have to deal with these complex issues. Like the Zelda Williams thing you mentioned.
[00:34:10] Yeah. There's two quick quotes outta that one. she said, PLE, this is a quote. Please just stop sending me AI videos of dad. Stop believing I want to see it or that I'll understand. I don't, and I won't. If you're just trying to troll me, I've seen way worse. I'll restrict and move on, but please, if you've got any decency, just stop doing this to him and to me, to everyone.
[00:34:33] Even. Full stop. It's dumb. It's a waste of time and energy and believe me, it's not what he'd want to watch the legacies of real people be condensed down to this vaguely looks and sounds like them. So that's enough. Just so other people can churn out. Horrible TikTok, slop puppeteering them is maddening.
[00:34:52] . You're not making art, you're making disgusting overprocessed hot dogs out of the lives of human beings, out of the history of art and [00:35:00] music, and then shoving them down someone else's throat, hoping they'll give a little thumbs up and like it gross. . It's, it is an extreme, but I totally get it.
[00:35:10] Like you, you sympathize. Like I hadn't really thought about that. I was seeing like the celebrity, like Michael Jackson and like you start seeing all these people who are deceased and as someone who's not connected to them, it's just like, oh, that, that's a silly use of that. But then there's the human side of like, no, these people have kids.
[00:35:25] And now like they're, they have to, like, their parents are being brought back to life through these things and be, I don't know, it's like it, there's so much in society we have yet to face of where this technology takes us. And I think this was a very personal look at like what can actually start to happen as this just becomes spread and people don't think about the human side of it.
[00:35:48] Mike Kaput: I think the emotional side of this gets really messy. Really quick. I was talking with Claire on our team about the Zelda Williams article. Yeah. And we were like. Raising all these questions, like, who owns your [00:36:00] likeness when you die? Who, I mean, in the case of celebrity, there's an estate and stuff. What about like us, what about our parents?
[00:36:07] I'm sure there's gonna be battles between siblings at some point of like, Hey, should we create videos of mom and dad when they're gone? Right?
[00:36:16] Paul Roetzer: Yep. Yeah. And we've talked about that, you know, a while back, like, you know, my concerns around the more personal side of like people being, you know, recreated digitally in perpetuity and what that means, society and psychologically and yeah.
[00:36:30] I mean, there's just, there's endless paths to go to. An endless threads. And this is why we say with this show is like, our job is to present sort of this macro level of what's going on so that people who listen can be like, you know what, I'm, I'm really passionate about that thing. And then like, go and become like a, a subject matter expert on elements of ai.
[00:36:49] Like, that's the opportunity for a lot of people outta listeners is be the expert in your domain. Be the expert in your community. Be the expert in your family, like. Pick the threads that are interesting you and go deeper than we can [00:37:00] go, you know, on every thread in this show.
[00:37:03] Mike Kaput: Alright, let's dive into some rapid fire topics this week.
[00:37:05] Mike Kaput: So first up, Google has just launched Gemini Enterprise and they're calling Gemini Enterprise a comprehensive AI platform. That quote brings the best of Google AI to every employee through an intuitive chat interface that acts as a single front door for AI in the workplace. So this platform is powered by Google's Gemini AI models.
[00:37:26] Any user can use Gemini Enterprise to build custom AI agents with no coding required. And those agents can securely pull data from things like Google Workspace apps, Microsoft 365 apps and other tools like Salesforce and Box Gemini Enterprise also comes with pre-built agents for tasks like data science, software development, and customer engagement.
[00:37:49] And it comes with some new governance tools. There's one called, for instance, model Armor, which scans and filters prompts. To keep things secure and compliant across the [00:38:00] organization. Gemini Enterprise costs $30 per month per user and is rolling out now. So Paul, this seems like a pretty big move actually from Google.
[00:38:10] What does this mean for enterprises?
[00:38:13] Paul Roetzer: So we saw agents space demoed when we were at the Google Cloud event. I think it was in April this year. Yeah. And it's just incredible now. so like at a high level, this is the kind of stuff we've been talking about since spring of 2023. So like shortly after chat GT emerged, we started getting like previews of what Google and Microsoft planned to do to integrate AI technology into our productivity tools that we all use every day.
[00:38:39] This idea of being able to build agents with no code is incredible. So this is like very promising. Now, I will say as a Google Workspace customer, I have no idea. If we have this, if I Right. Have to go get this, I don't know if I have to change our plan, if I can only get it as Google Enterprise. 'cause we have [00:39:00] ai, we have Gemini in our Google workspace, we have a business standard account.
[00:39:04] I kind of tried to figure this out and I would consider myself relatively savvy on this stuff. I have no idea. And yeah, I spent like 20 minutes this morning before the podcast trying to solve this because there was one point where you could request access. And so I was like, all right, lemme try that.
[00:39:20] And so then it pops in, it's like, okay, you have access to Google Business. I was like, I already have Google Business. . Like, what's gonna happen when I, when I click the next thing? So then I went into the Gemini app and I was like, well, maybe I have to do this through Gemini. And I see something that's upgrade to Gemini AI Ultra, which goes from $20 a month to 200 a month per user.
[00:39:38] And I was like, what is that? Like, is is that different than, so I , I truly actually have no idea. Mike, if you and I can have access to this stuff at any point with our count, right? So to be continued, but just, that was one of my challenges. And I read the Sundar Post. I read the post from Thomas. I read, I read everything I could read, and I still don't actually know how to get access to this or [00:40:00] if we even can have access to this.
[00:40:02] All that being said, I will also say on the positive, I had, I wouldn't call it a life changing experience yesterday, but I had an incredible experience. So on Thursday, again, we're recording this on Friday, I am in crunch mode as everyone on our team is preparing for MAICON and I, the first day on the 14th, I have a three hour, yeah, I counsel workshop.
[00:40:22] And then I have a three hour, AI innovation workshop for the AI council workshop. We did a survey of AI council members and I , so I was set to go through dozens of, of, responses and hundreds of questions to summarize in preparation for that council meeting time, I don't have, I , I have to get the keynote done.
[00:40:45] And so I go into the Google form, which we used to complete the survey, and I see an option above the questions. And when I'm looking at the back end and it says summarize, and I was like, oh, there's a summarize button in here. I was gonna do this in Gemini, like copy, paste, copy. And [00:41:00] so I click the summarize button and in three seconds I have like five beautiful bullet points.
[00:41:06] I start scanning all the replies. It's like, oh my God, they nailed it. Like this is a perfect summary. So I did that for all nine sections of the survey and copy paste, copy paste, copy paste, putting it into the deck. And now we're just gonna talk through these as accounts. Yeah, so it's not, this isn't getting published.
[00:41:21] It's not like a final product, but that alone saved me at least two hours Thursday morning to just go through and be able to click the summary button. That's the promise of this. That ability where the Gemini capabilities are baked right into the applications, the software you're using every day. To the point where if I was a non AI literate person and I just saw a summarize and I just click a summarize button.
[00:41:45] I don't even have to know it's Gemini. I don't even have to know it was ai. Yeah. I just know that all of a sudden Google Forms just wrote this thing for me, and that's incredible. So that's the promise of where this kind of technology goes. again, I think just from a user perspective, a little more [00:42:00] clarity of do we have a, have this, can I get it?
[00:42:03] That, that would have been very helpful.
[00:42:06] Mike Kaput: Yeah. And for anyone listening who is not a heavy Gemini user, you might be sitting here thinking like, well, I've used Chad g pt, it doesn't like summarize it perfectly or whatever, go use Gemini. I'm not even just plugging 'em because, you know, we do some stuff with Google, but like Gemini is like quickly becoming my go-to model.
[00:42:23] It's so incredible. It's extremely good at not hallucinating things. It is extremely intelligent. It's. Breathtaking. So if you haven't used Gemini heavily, I'd highly recommend try it out. And we
[00:42:35] Paul Roetzer: do expect Gemini three within the next 30 days. Yes. Not because we work with Google and we know these things.
[00:42:40] This is like the public rumors is that Gemini three is imminent and likely, you know, certainly before Thanksgiving it sounds like maybe a lot sooner than that. And the other thing I'll say is if you're a Microsoft customer, this is the same kind of thing that they're doing there. Yeah, they just, like last week, they had an announcement around, integration into Excel.
[00:42:59] So you're [00:43:00] starting to see the AI assistance and agents become truly functional and valuable embedded into the productivity tools, regardless of what platform you're using.
[00:43:10] Mike Kaput: All right, next step. Google has also released something called Gemini for Home, which is an update that replaces the Google Assistant on your smart displays and speakers and upgrades, the intelligence that powers your smart devices in your home.
[00:43:24] So you can now just talk naturally instead of memorizing these preset commands. So something like, turn off all the lights except the office and the devices just work because they now have Gemini intelligence baked in. you could do things like ask for a half remembered song or tell a device to add ingredients for pad tie to my shopping list.
[00:43:43] And Gemini just figures all this out on its own. This upgrade also makes home cameras genuinely intelligent. So instead of generic motion alerts, Gemini now provides full AI written descriptions, things like, Hey, A-U-S-P-S driver just left a package. A new home brief [00:44:00] summarizes your day's footage. And you can search video history by simply asking something like, did I leave the car door open?
[00:44:07] for even deeper interaction. Gemini Live also now enables free flowing human-like conversations for things like brainstorming meals, parties, or routines in real time. The rollout of this begins this month, and the advanced features are bundled under a new Google Home Premium subscription starting at 10 bucks a month.
[00:44:24] Now, Paul, on the surface, this is a really cool addition to Google's AI capabilities. It certainly made me like start taking notes about like, maybe I should make a smart home with Google devices. This would be fun. But I think you'd also kind of flag this topic as one that has some like bigger picture lessons for where AI is going.
[00:44:42] Paul Roetzer: I do have nest cams. Yeah. At home and at the office. So I, you know, I have the, personal experience with the current generation. Yeah. the thing that I thought was interesting here, Mike, and just sort of the bigger picture is this thing I've, I've been sort of starting to call omni intelligence.
[00:44:57] This idea that AI is integrated into every part of our [00:45:00] personal and professional lives, but the key is it actually understands and can take action. So the idea with the Nest cam is not only can you talk to it, but it's able to actually go do things. it's able to change things on your behalf. This is the example I've used, with Teslas and Grok.
[00:45:17] So Teslas now have Grok X AI's AI assistant built in. It can't do anything that yet, though. It's like talking to ChatGPT in your car, but you can see where it goes. Like, so the best example I can give in Tesla is if I'm using the full self-driving in Tesla and it's, let's say I'm using it to drive, to my kids' school and I decide I wanna reroute it.
[00:45:42] I can't click a button on my car and say, take, take, take this route. Instead, I have to disengage this self-driving and I have to take the wheel and go the different route. You can't tell it to do something different. It can't take an action. It is very obvious that that [00:46:00] is what they're going to enable within the Tesla.
[00:46:02] That sometime probably the next three to six months. I will be able to talk to Grok and say, Grok reroute me through the valley. Grok do, do this. Or Grok don't change lanes. There's construction a half a mile ahead. I can see it. You can't see it yet. Mm, it doesn't do that yet. It doesn't take action based on our conversation.
[00:46:22] So what's gonna happen is you're now seeing it with Google. You're gonna see it with Apple and Apple intelligence within their home. You know, systems. You'll see it in cars where the AI now understands what you want and it can take actions through software and hardware as a result of it. So the idea of omni intelligence is that the AI is everywhere and in everything, but also the reason I call it omni intelligence, because that's what, when we got the four oh model from openAI's, that's what they called the o stood for, was omni meaning the model's ability to reason and eventually take action across all these modalities.
[00:46:59] Text, [00:47:00] audio, vision. So that's where I think we're going as society is this idea of omni intelligence, where the models are able to do things across all modalities, but they're also just embedded into everything we do and everywhere we are, and you're able to talk to them and they're able to do things on your behalf.
[00:47:15] So, yeah, I just think it's a, it's interesting to see this stuff starting to find its way into the hardware. I assume that's what Johnny Ive and and Sam are also working on is more of this omni intelligence kind of stuff that's always on, always listening, always there for you and can actually do things on your behalf.
[00:47:35] Mike Kaput: All right. So next up, more companies are relying on AI to screen resumes and job applications, but the New York Times reports that some candidates have started to hide secret instructions in their resumes and applications that tell AI tools to rate them as well qualified. Recruiters told the times that this trick has become surprisingly common.
[00:47:55] Greenhouse, a major hiring platform, estimates that 1% of all [00:48:00] resumes it processed this year included hidden AI prompts. Manpower Group, the largest US staffing firm detected concealed text in roughly 10% of applications scanned by its systems on social media. Users are trading tips for prompt hacking their way past automated filters.
[00:48:17] Some people claim it works. One recent grad said she went from a single interview to six after adding hidden prompts suggested by ChatGPT as one British recruiter put it. It's the wild, wild West right now. Now, Paul, it does sound like plenty of companies are now aware of the problems that AI can cause or the complexities it can introduce during the hiring process.
[00:48:39] But I can't shake the sense they might not be moving fast enough or thinking big enough when it comes to like re-architecting how their hiring processes work. What do you think? A
[00:48:49] Paul Roetzer: couple of levels on this one. One is just of the HR side, like, you know, I don't know that HR is moving fast enough for sure.
[00:48:56] Yeah. Like understanding this at a deeper level, understanding all the [00:49:00] nuances. Certainly a lot of larger enterprises are probably like very in tune to this and figuring it out, but my guess is a lot of SMBs and stuff have just no idea that this stuff's going on or how all this works. So that, that's certainly one.
[00:49:11] Item two is just, when new technology emerges, people find ways to take advantage of it. That a 16 z interview that I mentioned that, again, we'll throw in the show notes. Sam was asked about people trying to game the system to get their brands and information to show up within ChatGPT. Yeah. And he said honestly, like three, six months ago, it wasn't even something we were thinking about.
[00:49:34] And now an entire cottage industry is basically cropped up trying to, to game ChatGPT system just like happens in search where you're try and like find the hacks to get to the top of the search results. So. Human nature, people will always try to find shortcuts. They will always try to take advantage of systems, and the people with technological abilities and knowledge generally have an advantage [00:50:00] while everyone else is catching up.
[00:50:01] So I guess there's like a more of a macro level, moral of the story here in addition to the HR specific story.
[00:50:10] Mike Kaput: All right. Also related to careers and jobs, we're seeing a couple break big predictions about how AI is changing the nature of work and the skills that are needed to compete in the economy.
[00:50:20] So at the Masters of Scale Summit, LinkedIn's Chief Economic Opportunity Officer, Anish Raman, sent to the audience that the idea of fixed job titles and rigid hierarchies is fading away fast. He predicts companies will organize around projects, not departments. He says it's a work chart instead of an org chart where people shift sideways, up or down as tasks evolve.
[00:50:43] In this world, jobs become tasks and careers move fluidly between them. At the same event, Clara, she Salesforce's service cloud, CEO said she believes AI will collapse specialization, pushing Gen Z and Gen Alpha workers to become quote, professional generalists. [00:51:00] Rather than hundreds of narrowly defined roles, she expects most work to fall into just three categories, building products, selling products, and running the company.
[00:51:09] So both leaders though, see opportunity amidst this upheaval. With AI handling repetitive tasks and function barriers to launching and scaling businesses will drop sparking an explosion of entrepreneurship. So Paul, I thought this was important to highlight, given some of our recent conversations. It mirrors like a lot of what you've said about hiring and how job skills are changing.
[00:51:30] Thanks to a I 'd love to hear you unpack a little more this idea that we might all need to become professional generalists.
[00:51:38] Paul Roetzer: This is a, a great debate, you know, whether we specialize and go, you know, deep on specific topics and develop domain expertise in those areas, or whether generalists are the answer.
[00:51:48] I 've said before, like, I've always been sort of in the generalist camp, I've always, hired for generalists. I've always, you know, trained for generalists. I always wanted people with diverse knowledge sets that could, you know, connect [00:52:00] the dots between seemingly, unconnected things. Like you're, you're just always looking for those kind of people that have that forward looking mindset.
[00:52:07] I don't know. I think it's, it's fascinating to see these perspectives. I'd seen this tweet from Allie Miller, you know, when she shared it from being there live. And I was like, lot to unpack here. Yeah, I don't know, just like her tweet about it was there, there's just a lot of like big picture thinking.
[00:52:24] and then the quotes you had mentioned about the, you know, the mind shift that's going on and, you know, how we're looking more for more adaptable forward thinking, ready to learn, ready to embrace AI tools. I, you know, it's one of the big debates moving forward is, you know, really what, what are the most valuable skill sets?
[00:52:41] What human traits remain unique? What are the things that may make people, have tremendous opportunity in their career? And I, you know, I think we always come back to take whatever your interests are, whatever the domain is, and layer AI on top of it. That is the one thing we know is, yeah, no matter what you do, whether you're specialize in a specific area [00:53:00] or you are a generalist with abilities across domains, at minimum apply high levels of AI understanding and competency to that, and that's gonna help you the next few years.
[00:53:12] Long term, I don't, I don't know. The guy can't go back if mechanized as their way. None of it matters in seven years. I don't know, we probably shouldn't laugh at that, but it, I don't know what else to do at this point. so yeah, I 'm a big fan of generalists, but. With that being said, I don't know.
[00:53:30] And this whole idea, like the LinkedIn CEO saying that jobs are basically gonna become tasks, right. And they're not even gonna, titles won't even really be a thing. Like, that's a weird thing to wrap your head around.
[00:53:39] Mike Kaput: Yeah, yeah. Definitely. Seems like it could get very weird.
[00:53:43] Paul Roetzer: Yeah. I guess that that's, but the whole point, I guess, again, with this podcast is we're just trying to stay on top of what's coming.
[00:53:50] Because I think by talking about these things and by thinking about them, you have a headstart, you have a, you have a time span ahead of you to actually figure out what does [00:54:00] this mean to you and your career and your company. And that's the positive thing is, you know, all of us, you know, me and Mike doing this show, anyone who's in our audience that listens to this show, you're in sort of that top percentage of people who are thinking deeply about these issues and trying to solve for it.
[00:54:16] And I guess take some, solace in the fact that you're ahead of the curve. Like you, you have time to figure this out and then help bring other people along.
[00:54:25] Mike Kaput: Yeah. I also find these predictions really helpful, whether or not they come true to really help you figure out like what questions should you be asking and answering.
[00:54:34] Like, I have no idea if we're all gonna become professional generalists, but why would someone have that prediction? You would have that prediction because you'd start to think, well, AI can make me an expert at many things. Okay. Like that's maybe the question to ask, what happens when AI can make you an expert at product, at marketing, at service?
[00:54:52] What does your work look like? That's where I get to when I look at these. I don't care if we never become professional generalists, who cares? [00:55:00] But that question is a question we have to answer regardless.
[00:55:03] Paul Roetzer: Yeah. And I actually was personally thinking about this stuff last couple days because, you know, once I get through MAICON, I'm sort of diving back into being like the ceo, right?
[00:55:12] And president and CRO and like every other title I hold within our company at the moment. And I'm, I was starting to think about like, how do I restructure my own professional life? And given that I have an understanding of what these advanced AI are capable of, what does our senior level hiring really need to be in the future?
[00:55:34] If you take a collection of people who are very talented generalists, who know how to work with advanced reasoning models, how much can they accomplish without necessarily having to have traditional experts in these different departments and fields? Like, can I as a CEO function almost like an entire CC, C-suite by having AI [00:56:00] assistants that are trained to function like A CFO and a chief HR officer.
[00:56:04] Like, that's the kind of thinking I'm doing right now is like, well, maybe, maybe our company doesn't have to look like a traditional company. Maybe the customer success team doesn't look like a traditional customer success team. maybe the sales team doesn't look like a traditional sales team, or the comp models aren't even, don't look like that.
[00:56:20] Like maybe it doesn't look like any of that. And we're at the point where we have that luxury to explore that. And I'm very anxious to get into this fall and have brain cycles to think about it that way. Right. But I do think deeply about that. I don't think org charts look anything like they do. And I think there's a chance that generalist plus AI assistance is what happens, but I don't know.
[00:56:47] Mike Kaput: Alright Paul, so I'm gonna end up here with going through some AI product and funding updates to kind of close out the episode here. Sounds good. Alright. So first up, Google has unveiled the Gemini 2.5 computer use model, which is a system [00:57:00] that can literally use a computer like a person. This is built on Gemini 2.5 Pro.
[00:57:05] It lets AI agents click type scroll and fill out forms across real websites and apps, not just through APIs. this model works kind of in a loop. It views a screenshot, reasons what to do next about it and issues commands like click or. And then each result is fed back in letting it complete complex, multi-step tasks, even those behind logins or with dropdown menus.
[00:57:27] Next up, this one has been a long time coming. Paul, Google is adding a sharing feature to gems, which are the customizable versions of Gemini. So you can create specialized gems, very much like AGI, PT, such as like a coding assistant, a trip planner, whatever you kind of want to customize it for, and now you're going to be able to share it with others via a public link, which you could not do before, and was a huge limitation on gems.
[00:57:53] So creators can actually start sharing these on a public profile page as well and track how many people are using them. The [00:58:00] features rolling out first to Gemini, advanced subscribers.
[00:58:03] Paul Roetzer: Does the, I'm looking at that post right now, Mike. It's only three paragraphs. There's not much information here. Do you know There's like zero
[00:58:08] Mike Kaput: information to.
[00:58:09] Paul Roetzer: Yes. This doesn't even look like this applies to like Google Workspace people, so
[00:58:12] Mike Kaput: I don't think it does yet. I have it enabled. I have it enabled in my personal account and tested it and it worked fine, but I think workspace is coming later, but that's pretty, that's Google's mo unfortunately
[00:58:25] Paul Roetzer: killing me.
[00:58:27] Yeah, we don't, I'd be so all in on gems if we could share them even with our team. I know
[00:58:31] Mike Kaput: gems are really incredible and it's heartbreaking that it's, it's so hard to share them now.
[00:58:37] Paul Roetzer: Well, all right. It's a start.
[00:58:38] Mike Kaput: It's a start. Next up, Elon Musk's AI startup X AI is in talks to raise up to $20 billion in a new funding round that could value them at over $220 billion.
[00:58:50] The funding is reportedly tied to a deal for XAI to secure a massive supply of NVIDIA's next generation Blackwell. GPUs. The [00:59:00] robotic startup figure has unveiled its next generation humanoid robot, the figure o3. The new model is faster, stronger, more dexterous than its predecessors, featuring a new hand design for better manipulation of tools and objects.
[00:59:12] Figure o3 is designed for autonomous work and logistics, warehousing and manufacturing to help address labor shortages.
[00:59:20] Paul Roetzer: I'll just say again, robotics is early, but it, the advancements are coming very quickly and I just would not sleep on humanoid robots. Yes, like it's it, the way I think about this is like if we had this podcast back in 2017 and I was like, Hey, yeah, this thing called the Transformer was just invented by the Google Brain team.
[00:59:41] Sounds really important. Like, it could be a little while, but like it's probably gonna like matter a lot. I feel like we're around that time in humanoid robots. Yeah. Like, it's still probably gonna be three to five plus years before like all a sudden you're just seeing robots everywhere. But it's coming like they, they have.
[00:59:58] They've largely [01:00:00] solved the major issues to humanoid robots becoming impactful. And I would just, again, if it's an area of interest for you, I would start paying much closer attention to the progress being made on humanoid robots.
[01:00:13] Mike Kaput: I hope we're still doing this podcast when we can share, we each got our first humanoid robots or house or something.
[01:00:19] I'll go, I'll,
[01:00:19] Paul Roetzer: they'll be available for lease for like 200 bucks a month. I would totally get that just to play around with it. I, Hey, I blew like 3,500 on a vision pro. Like why not
[01:00:29] Mike Kaput: try a robot? Right. Alright. Last up. Andreesen Orens has announced a series A investment in further ai, which is a startup aiming to drag the trillion dollar insurance industry out of its PDF and Excel era
[01:00:42] insurance remains one of the most paperwork heavy sectors in business.
[01:00:46] It's got a ton of manual data entry document comparisons, compliance checks. So further AI's technology applies generative AI directly to these workflows. That's tailored for carriers and brokers. So the result is automation that actually [01:01:00] understands insurance. Alright, Paul, that wraps up a busy week in AI right before MAICON here.
[01:01:06] So thank you again for demystifying everything for us and unpacking what's going on.
[01:01:11] Paul Roetzer: Yeah. Again, next time you hear from us, we'll be coming out of MAICON. So, if you're listening to this and it's MAICON week and you're at MAICON, make sure to come say hi to Mike and I We are doing a live podcast. Yes.
[01:01:23] Well, I guess it won't be live, it won't be streamed live, but we're recording a podcast live during MAICON in the exhibit hall. So if you're on site for MAICON, make sure to come by and say hello. Otherwise, we will talk with everyone, next week. Thanks for listening to the Artificial Intelligence show.
[01:01:41] Visit SmarterX.AI to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, and earn professional certificates from our AI Academy, [01:02:00] and engaged in the Marketing AI Institute Slack community.
[01:02:03] Until next time, stay curious and explore ai.