GPT-5.5 marks OpenAI's first fully retrained base model since GPT-4.5, and it's a strong signal that the company is leaning hard into knowledge work, not just developer tools.
That shift doesn't stop at GPT-5.5: Workspace Agents bring agent-building to non-technical teams, Google's Gemini Enterprise Agent Platform is aiming at the same audience, and Microsoft is pushing Copilot deeper into agentic workflows across Office. Meanwhile, Meta found itself in the spotlight for alleged employee surveillance tied to AI training data.
Also: The first joint interview with Sam Altman and Greg Brockman, Jeff Dean on what AGI still needs, the SmarterX State of AI for Business report completed in a day, AI Academy's HR spotlight, and a full rapid-fire round.
Listen or watch below and see the show notes and transcript that follow.
This Week's AI Pulse
Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI.
If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.
Click here to take this week's AI Pulse.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
00:06:45 — GPT-5.5 Launches
- Introducing GPT-5.5
- Introducing GPT-5.5 - OpenAI
- X Post from OpenAI: "A new class of intelligence for real work and powering agents"
- X Post from Sam Altman: "We believe in iterative deployment; although GPT-5.5 is already a smart model, we expect rapid improvements."
- Sign of the future: GPT-5.5 - One Useful Thing
- X Post from @levie: "GPT-5.5 is live. We've been testing the model over the last couple of weeks at Box on our most complex knowledge work..."
- X Post from @lovable: "We have been testing GPT-5.5 in early access. Our evals show it's the most capable model for people taking on complex b..."
- The "Great Reset" at OpenAI
00:17:28 — Workspace Agents in ChatGPT
- Introducing Workspace Agents in ChatGPT - OpenAI
- X Post from OpenAI: "Introducing workspace agents in ChatGPT—shared agents that can handle complex tasks and long-running workflows..."
00:27:13 — Agent Usage: Separating Fact from Fiction
- Jason Lemkin: Specialized Agents Beat All-in-One
- Microsoft 365 Copilot Agents Go GA
00:46:31 — Google Cloud Next '26
- Cloud Next '26: Momentum and innovation at Google scale - Google Blog
- Introducing Gemini Enterprise Agent Platform - Google Cloud Blog
- Workspace Intelligence: Contextual AI for the Enterprise - Google
- X Post from @ChanduThota: "At #googlecloudnext today, we are introducing Workspace Intelligence..."
00:55:07 — Meta's AI Employee Surveillance + Layoffs
- Read the full memo behind Meta's AI employee tracking rollout - Business Insider
- Meta to start capturing employee mouse movements and keystrokes as AI training data - Reuters
- X Post from @Jason: "Studying of teams with AI is the trend of 2026..."
- Meta Layoffs - The New York Times
01:03:46 — Apple Leadership Transition
- Apple Bets New CEO John Ternus Will Bring Back Jobs-Era Decisiveness - Bloomberg
- Read Memos From Tim Cook and John Ternus on Apple CEO Transition - Bloomberg
- Apple's Cook Says He's Healthy, Will Be Chairman for Long Time - Bloomberg
- Apple to Focus Hardware Team on Five Areas Under Johny Srouji - Bloomberg
- Apple's Next CEO - Bloomberg
01:09:59 — AI Use Case Spotlight
01:16:28 — AI Academy Spotlight
01:21:41 — AI Product and Funding Updates
- ChatGPT Images 2.0
- Introducing ChatGPT Images 2.0 - OpenAI
- X Post from OpenAI: "Made with ChatGPT Images 2.0"
- X Post from @arena: "Exciting news - GPT-Image-2 by @OpenAI has claimed the #1 spot across all Image Arena leaderboards! A clean sweep with..."
- OpenAI Takes Aim at Google with New Image Model - The Information
- Gemini Deep Research Max
- Kimi K2.6 Open-Source Coding Model
- Tencent & Alibaba Eye DeepSeek at $20B+ Valuation
- Microsoft 365 Copilot Agents Go GA
- Adobe Unveils Business Agents
- Claude Managed Agents Get Built-In Memory
- SpaceX x Cursor Deal
- X Post from SpaceX: "SpaceXAI and @cursor_ai are now working closely together to create the world's best coding and knowledge work AI."
- Cursor partners with SpaceX on model training - Cursor
- SpaceX–Cursor Deal - The New York Times
- SpaceX and Cursor explored team-up with Mistral to take on AI rivals - Business Insider
- Amazon x Anthropic Expand Compute Deal
- Anthropic and Amazon expand collaboration for up to 5 gigawatts of new compute - Anthropic
- Anthropic takes $5B from Amazon and pledges $100B in cloud spending in return - TechCrunch
- X Post from @amazonnews
- Google x Thinking Machines Lab
- Exclusive: Google deepens Thinking Machines Lab ties with new multi-billion-dollar deal - TechCrunch
- Anthropic's Live-Fire Pricing Experiment
- ChatGPT Apps for Spreadsheets
- OpenAI Scales Codex to the Enterprise
- Zapier Benchmarks for Real Work
This week’s episode is brought to you by MAICON, our 7th annual Marketing AI Conference, happening in Cleveland, Oct. 13-15. The code POD100 saves $100 on all pass types.
For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Mike Kaput: It's so hard to predict what is worth investing time into anyway in AI, because a year ago someone would've been like, go build all your own agents, and you might've done really well with that, but then OpenAI comes out with this and you're like, why did I waste any of this time?
[00:00:13] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.
[00:00:21] My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all.
[00:00:49] Welcome to episode 211 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording at an unusual time this week. It is Friday, [00:01:00] April 24th, two o'clock Eastern time. We normally record on Mondays. I feel like I went through this already this week, explaining a weird time, which I probably did, that was probably this Monday.
[00:01:10] So, normally we record on Mondays, but Mike and I are both traveling on Monday, the 27th I guess that would be.
[00:01:18] Mike Kaput: Yes.
[00:01:19] Paul Roetzer: And despite our best efforts to coordinate schedules to do this on our usual time, it was not happening. So here we are on a Friday afternoon. Bear with us because I think both Mike and I have had a week.
[00:01:31] Like it's just, we were just saying before we jumped on, like, I don't know about you, man, but I'm just mentally fried right now.
[00:01:38] Mike Kaput: A hundred percent.
[00:01:39] Paul Roetzer: And it doesn't help that we get new models, agents everywhere, like a lot going on. So we certainly weren't gonna skip this week. There was way too much happening to not do it.
[00:01:48] But we have a lot to talk about, with a new model from OpenAI, a new DeepSeek model. Everybody's rolling out something to do with agents this week, so we will do [00:02:00] our best, as always, to cover it and give you the best analysis we can, to make it make sense and actionable for you. So today's episode is brought to us by MAICON, the Marketing AI Conference, now in its seventh year, which, Mike, is hard to believe.
[00:02:14] We launched this conference back in 2019, believe it or not. So this is our seventh year. It's gonna be October 13th to the 15th in Cleveland, Ohio. That is our home. That's why we've always held it in Cleveland. It's an amazing place to run an event, but it is our home base. And that's why, you know, I do get asked sometimes, why is MAICON in Cleveland?
[00:02:32] That's why, it's our hometown, and we wanted to build something that meant something to our local community and economy. And so we thought, if we could build an event that would draw thousands of people, why not do it somewhere that mattered to us? So that's why it's in Cleveland.
[00:02:46] In case you were ever curious. The conference is bringing together more than 2,500 marketers and business leaders focused on one thing: how to actually make AI work inside your organization. We've already announced two keynotes worth the trip [00:03:00] alone just this week. I'm extremely excited about both of these.
[00:03:03] Karen Hao, the author of Empire of AI, is back. She was actually our very first keynote in 2019, and she's returning with a deeper story: how ideology, money and power shaped OpenAI, and why it matters to every business leader right now. Funny quick backstory, Mike, you'll probably remember this, but when I did the MAICON in 2019, I was trying to create the agenda for it.
[00:03:24] I had read an article by Karen, at the time she was working at MIT Tech Review, and she'd written an article called What is AI? And it was this super simple, beautiful visualization of, like, what is and is not AI. And I reached out to her at the time and I said, Karen, have you ever done this as a talk?
[00:03:39] Because I need this talk at MAICON, it's like a great introduction. And she had not, but she turned it into a talk for us. And so back in 2019, before Karen, you know, blew up and became this bestselling author, and yeah, I think she went to the Wall Street Journal, you know, at the time, and just an amazing person, amazing author, amazing [00:04:00] researcher.
[00:04:00] And so she came and did that talk then, and then she led a panel for us on ethics, actually on AI and ethics, back at the time. And so I've been trying to get her to come back ever since. And the stars aligned this year where she was actually gonna be in the country for a few-week period, and we were able to get her to agree to come back.
[00:04:16] So I'm, I'm extremely excited about that one. And then Dan Slagen also will return. Dan was with us in 2024. He was on the main stage, at the time he was the chief marketing officer of tomorrow.io, and put on an amazing talk. He's now senior vice president of marketing at Zapier. So he's gonna be back with an extremely practical, grounded view on what's going on.
[00:04:38] We've talked a little bit recently about some of the things Zapier is doing, especially on their, like, AI literacy and, you know, how they're infusing it into their own employees and workforce. So Dan's gonna have a great story to tell. I think we're still trying to figure out, like, which story to tell, you know, 'cause there's so many angles he could go with.
[00:04:53] So Dan will be back, and new speakers get added every week. We have a couple other really big [00:05:00] keynotes we're working on right now, so stay tuned. But MAICON, it's MAICON.ai, and you can use POD100 to save $100 off current rates. I think the rates go up every 30 days or so. So, you know, get in early, get your tickets early, and you can save hundreds of dollars, and then use that POD100.
[00:05:19] So again, it's MAICON.ai. All right, Mike, AI Pulse survey. So if you're new to the podcast, every week we put up a pulse survey, and our listeners can go through and answer two quick questions. It takes about 30 seconds. So it's SmarterX.ai/pulse. We'll tell you this week's pulse questions at the end of the episode today.
[00:05:42] But on last week's episode, on episode 210, we asked: is AI-driven search, ChatGPT, Claude, Google AI Mode, starting to affect your website's traffic yet? 43% said don't track it. 26% said not yet, but watching. 23% said some impact. [00:06:00] And then major impact or clear decline was a small percentage, Mike? Yeah. I dunno what that is.
[00:06:05] Less than 10%.
[00:06:06] Mike Kaput: Yeah.
[00:06:06] Paul Roetzer: And then the second question was, are AI agents generally starting to change how your team works, or is it still mostly chat-based AI? So by far the biggest percentage, 53%, said still mostly chat. 30% said early experiments. Only 13% said agents are real for us. And then no AI yet is a very small sliver.
[00:06:31] Yeah, that one's gonna become more relevant to today's conversation, Mike, because today is all about agents. All right. So let's get it kicked off though, 'cause we did have a new major model release from OpenAI.
[00:06:45] GPT-5.5 Launches
[00:06:45] Mike Kaput: Yes, Paul. So OpenAI launched GPT-5.5 this past week. They call it, quote, a new class of intelligence for real work and powering agents, built to understand complex tasks, use tools, check its work, and [00:07:00] carry more tasks through to completion.
[00:07:02] It is OpenAI's first fully retrained base model since GPT-4.5, and the first API model from the company to ship with a 1 million token context window. So pricing comes in at $5 per 1 million input tokens, 30 bucks per 1 million output tokens, roughly double GPT-5.4's pricing. There's a GPT-5.5 Pro variant at $30 per 1 million
[00:07:26] input, $180 per 1 million output. On a bunch of benchmarks, GPT-5.5 took the top spot on the Artificial Analysis Intelligence Index with a score of 60, which is three points ahead of Claude Opus 4.7 and Gemini 3.1 Pro Preview. It leads the BrowseComp benchmark at 90.1%, FrontierMath tiers one through three at 52.4%, and it also posts an 84.9% on their GDPval benchmark, which is measuring how good AI is [00:08:00] at doing real work.
[00:08:02] Sam Altman framed this release as saying, hey, we believe in iterative deployment. Although GPT-5.5 is already a smart model, we expect rapid improvements. A couple people also reported, after having early access, some of the results they were getting. So Aaron Levie, who we talk about a bunch, CEO of Box, said the model saw a 10 percentage point jump in accuracy on their most complex knowledge work evals.
[00:08:26] The Lovable team, the vibe coding tool Lovable, they reported a 23% reduction in tool calls per request. They called it the most capable model for people taking on complex technical builds. So Paul, a lot of stuff we can kind of unpack here. Just kind of curious about your broader thoughts here.
[00:08:46] I mean, just again, another new model, but there was a big emphasis, OpenAI stated just outright, on agentic coding, computer use, knowledge work and early scientific research. They said those were areas where the gains of the model were [00:09:00] especially strong. And I don't know if you could more succinctly put a series of trends of, like, exactly where AI seems to be going.
[00:09:08] Paul Roetzer: We've talked a lot recently about OpenAI refocusing, you know, cutting the Sora app. They're thinking about robotics, but not heavily invested in it quite yet. They dropped the idea of having, like, a social network. So they're doing their best to try and refocus, I think in large part due to the success of Claude. You know, if we go back to the start of the year, not only did Claude all of a sudden start getting a lot of headlines and a lot of attention for the quality of its work,
[00:09:37] not only in coding though, but in knowledge work. Like, and we talked about it so much on the show, Mike, the ways we've been using Claude, and it just seems to have been post-trained really well to do knowledge work, to do strategy documents and research papers. And so OpenAI has been watching Anthropic making gains and seeing their revenue skyrocketing.
[00:09:58] And then a lot of it's coming from their work [00:10:00] with enterprises. And I'll share a little bit more about, you know, my last couple weeks, but you know, I was at the Google Next event this week, and every person I talked to was using Claude. Yeah. I mean, they have Copilot licenses, they have Gemini licenses, but I didn't talk to anybody that wasn't at least experimenting with Claude as well.
[00:10:18] And in some of the cases I was talking to massive, like, Fortune 50 enterprise leaders, in some cases who are in charge of AI within their organizations, and they're giving people Claude access on top of everything else. Yeah. So, like, OpenAI is seeing this, they're hearing this. It's why they're having to not only, like, do all these deals with the consulting firms, but they have to focus on the real work.
[00:10:40] And so when you read the post that they put out about this release, it's very obvious, as you said, where they're going. So yeah, it said: we're releasing GPT-5.5, our smartest and most intuitive to use model yet, and the next step toward a new way of getting work done on a computer. GPT-5.5 understands what you're trying to do faster and can carry more of the [00:11:00] work itself.
[00:11:01] It excels at writing and debugging code, researching online, analyzing data, creating documents and spreadsheets, operating software, and moving across tools until a task is finished. Instead of carefully managing every step, you give GPT-5.5 a messy multi-part task and trust it to plan, use tools, check its work, navigate through ambiguity, and keep going.
[00:11:22] Now, we're gonna talk a lot about agents on this episode, but this is the kind of stuff people have been using Codex and Claude Code for, and things like that, and Gemini. But what they're saying is, the average knowledge worker wasn't seeing those same capabilities, right? You had to be a developer, you had to be a technical person to get those capabilities, which is what we've been stressing on the show, is that these, like Claude Cowork, OpenAI Codex, these things, they're great for developers.
[00:11:49] Like, you have to be technically minded. We're trying to talk to the people who are outside of that world, who are trying to just go in and build an agent, and then they get into, like, an Agentspace, like, what the hell do I do with this? It's not intuitive. So where [00:12:00] OpenAI is obviously going here is moving in that direction of bringing those coding capabilities, in a more reliable, secure way, right into the platform that the average knowledge worker would use.
[00:12:14] So they continued, they said: the gains are especially strong in coding, computer use, knowledge work, and early scientific research. Because the model is better at understanding intent, it can move more naturally through the full loop of knowledge work: finding information, understanding what matters, using tools, checking the output, and turning raw material into something useful.
[00:12:33] And then just some quick context here, Mike. I listened to this Core Memory podcast with Ashlee Vance, which I think is a new podcast. And if I'm not mistaken, it was a gated podcast, like you couldn't get it. And then someone had proposed, like, well, why don't you raise money or something and make it open.
[00:12:52] And someone paid a hundred thousand dollars to unlock this podcast, and so, just this episode. So it was with Sam Altman and Greg [00:13:00] Brockman. So Ashlee Vance sat down with the two of them, and I think it was the first time they've ever actually done an interview together. So on my flight back from Vegas on Wednesday, I listened to this, and I'll just highlight a couple things, because this came out as a prelude to 5.5, but Sam and Greg were obviously talking about some of the things they were doing.
[00:13:19] So Sam talked a lot about the tech, but said they haven't connected the dots enough on what the abundant future will look like. I thought this was fascinating, because an episode or two ago I was saying how there was a PR problem in the industry. Yes. Yeah. And how they were all talking about this abundance, and yet no one understood what that meant.
[00:13:36] So I was fascinated to hear Sam basically echo exactly what I was saying, and he was like, we're not doing a good enough job as an industry of making it tangible for people what this amazing future is that we're envisioning. He also said they're not far away from a model that knows the complete complexity and context of your life.
[00:13:57] And this is the memory component. And I think [00:14:00] this is a really important thing for people to understand. And so when you're using 5.5, they're obviously starting to rely more on memory, but they're also relying more on the fact that the memory's just gonna get better. And so when you have models like 5.5, and eventually 6, that have full context through memory, and they also, like, are able to continually learn, which I'll talk a little bit more about in a minute, the need for prompting in the ways we've become
[00:14:30] adept at prompting goes out the window. Yeah. Like, you don't need to do context, and interview me, and all these things that have become standard ways of prompting, because it knows everything already. And so prompting literally just becomes, hey, do that report for me that, you know, I have to do on Sunday nights.
[00:14:46] And it's like, okay, it just goes and does it. And then Greg, along those lines, talked about personal AGI, which is the first time I think I've heard him talk about it in these terms. So what they're saying is, rather than, like, a universal AGI [00:15:00] as this model, and then, you know, the next generation models come out, it starts to know you so well that it feels like general intelligence to you, because it does have this full context and memory and ability to learn from what you're doing.
[00:15:13] And so, in that vein, they talked a lot about this idea of still this jagged intelligence, that we still are in this age where sometimes these things feel superhuman, and then, like, it gets hung up on a stupid thing, and you're like, oh, it's no smarter than a preschooler when it comes to this thing.
[00:15:29] It's superhuman at this other stuff. And then they just really talked a ton about agents. So Greg said at the moment they're at the transition to agents; agents are gonna do all the work. They specifically highlighted context, computer use, and memory as the core components. They wanna bring Codex, the coding capabilities of Codex, to everyone.
[00:15:46] And that's what I think we're gonna start to see. We'll talk about the agents, specifically these new workspace agents, in a moment. They want personal AI that not only feels like AGI, but is proactive. It actually anticipates what your needs are going [00:16:00] to be, and it does things in the background for you and surfaces
[00:16:04] things like, hey, you asked for this last week, I went ahead and ran this for you. Like, that kind of stuff. So the interview's worth listening to. It's nothing groundbreaking, like I was expecting; with the two of them together, I thought they were gonna talk about a whole bunch of things they'd never talked about. But they did get into sort of the evolution of the relationship, the evolution of Greg's role, and what he's doing moving forward.
[00:16:25] And then they did talk a little bit about the Elon Musk lawsuit, and how painful it was for both of them personally, and one, because Greg's personal journals got, like, yeah, you know, put in as evidence. So, like, real personal stuff was out there. But Sam did say at the end that his biggest fear right now is that Elon's gonna drop the lawsuit like the day before it starts, because Sam's like, we went through hell basically for this.
[00:16:50] Like, I want it all out there now. Like, all of our lives have been put out for everybody. Yeah, let's have this trial and let everybody hear what really happened. So [00:17:00] I could totally see Elon dropping the lawsuit, just to mess with them enough to, like, make their lives miserable, and then be like, ah, screw it.
[00:17:06] but if this goes to trial, man, it's gonna get, it's gonna get messy and make for some pretty interesting conversations.
[00:17:14] Mike Kaput: Yeah. I bet Greg Brockman is regretting keeping a journal at this point.
[00:17:18] Paul Roetzer: Yeah, he kind of glossed over it. It's like, it is what it is, but, I mean, no one wants their personal thoughts out there in the world.
[00:17:25] Like, that's, no, no.
[00:17:28] Workspace Agents in ChatGPT
[00:17:28] Mike Kaput: Alright, so our next big topic this week: OpenAI has launched Workspace Agents in ChatGPT this past week. So the company kind of calls these an evolution of custom GPTs and positions them as shared agents that can handle complex tasks and long-running workflows across tools and teams.
[00:17:46] So teams build an agent once, essentially, and use it together inside ChatGPT, or Slack at the moment, with the agent improving over time. Now, agents are being powered here by Codex running in the cloud, so they keep working [00:18:00] even when the user is offline. They can run on a schedule, or they can be deployed directly into Slack channels to pick up requests as they come in.
[00:18:08] OpenAI is shipping prebuilt templates here for finance, sales, and marketing agents, with out-of-the-box connections to things like Slack, Google Drive, Microsoft apps, Salesforce, and more. The availability of these is a research preview right now for ChatGPT Business, Enterprise, Edu, and Team plans, with a gradual rollout across Business and Enterprise over the next several weeks.
[00:18:30] The feature is off by default for enterprise workspaces pending admin enablement, and the pricing appears to be free for the next couple weeks, after which they shift to kind of a credit-based model, but they still have not disclosed kind of the rates and things here. On the governance side, OpenAI is shipping with these role-based admin controls over who can create and share the agents.
[00:18:53] There's required human approval for sensitive actions, like sending communications or modifying records, and a [00:19:00] compliance API that exposes every agent's configuration and runs, and safeguards as well against prompt injection attacks. So Paul, I know this is something you and I have been talking about quite a bit this week.
[00:19:12] You've done a little initial experimentation with this. Any thoughts? Like, how big a deal is this?
[00:19:19] Paul Roetzer: So this is one of those things where you initially look at it like, this might be a really big deal. Yeah. And I'll give some brief context. So, as I mentioned, I was at Google Next this week, and it was all about agents.
[00:19:32] Like, literally everything, every talk from the leaders of Google, about agents. Yeah. And one of the things they previewed was this Agent Designer. And then I actually sat in a masterclass where you could build agents with this Agent Designer. And I was like, this is slick, like this is a really cool direction.
[00:19:47] Unfortunately, it's not available. Like, I don't know when it's coming, but sometime later. I think it's in some sort of a research preview mode. So almost everything that Google showed was for developers. So it's like Vertex AI, [00:20:00] Agentspace, things like that. And you need some elements of technical capability, and you probably need IT involved.
[00:20:06] So I was like, oh, like, just that, this is cool. Oh wait, I'm disappointed again. And then that same day, OpenAI announces these agents, as does Microsoft announce theirs. So we'll get to that in a minute. So I see the ChatGPT one, and I'm like, oh my gosh, that's amazing. Is that actually available?
[00:20:25] Like, can we get this? So I go into my ChatGPT account, and sure enough, there it is. And I was like, awesome. And so, again, I'm in our team account for ChatGPT, and I just click on agents, it's in the left column, and then I can click browse agents, and I can do browse templates.
[00:20:43] And you immediately get a sense of what's possible now. Yeah. It shows, there's also recent uses, so you can look and see that. You can see built-by-me agents and the SmarterX directory, in our case. So if you've ever gone into the custom GPTs area, it's kind of like [00:21:00] that, but for agents, I would say. Like, it's the easiest way to kind of envision how this works.
[00:21:04] But the beauty here is they have these prebuilt templates, and I'll just read three of them quickly to you, because it gives you a sense of what's gonna be possible. So they have templates, and you can start with a template, or you can create your own by just using words, like, hey, I want a keynote abstract writer.
[00:21:19] So they have a chief of staff, and this is how they describe it: prepare a high-signal operating brief from schedule, inbox, and team chat context. Great for users who want sharper priorities, meeting prep, to-do capture, source-linked follow-up guidance, and requested email or chat follow-through in one concise daily artifact. And then you can connect it to Google and Microsoft calendars, Microsoft email, and Teams and Slack.
[00:21:41] They have a data analysis one, that's again a custom or a template agent: a data analysis agent arranged around the life of an analyst rather than a tool checklist. Use it to sharpen the question, write and improve SQL, inspect the shape of a data set, build clear visuals, prototype dashboards, and run a final quality pass.
So basically just [00:22:00] teach it skills that are specific to what that person, in this case the agent, would do. Yeah. And then one other one: sales assistant agent. Uses generalized sales workflows for account intelligence, competitive research, value engineering, meeting prep, follow-up, pipeline planning, seller coaching.
[00:22:16] Great for teams who want stronger prep, clear strategy, and better execution across the deal cycle. And then it shows you a bunch of, bunch of capabilities. And I'll actually do one more: customer support agent. So this is a generalized customer support workflow for ticket triage, case investigation, response drafting, escalations, customer research, and knowledge creation.
[00:22:35] So now with each of these, you can connect it to things. So I just picked these because every one of those, if we connect it to HubSpot, completely changes our workflows and potentially our staffing plans.
[00:22:47] Mike Kaput: Yeah.
[00:22:47] Paul Roetzer: So if these things actually work, in a reliable environment that I, as a CEO, am okay with us experimenting with, it completely evolves the way I think about how we're gonna do our hiring this [00:23:00] year and how we're gonna analyze it.
[00:23:01] And the thing I keep coming back to is this need to somewhat centralize, and we'll talk a little bit more about this in the next topic too, with this agent usage, but this idea of, like, centralizing the building of these things. Yeah. And so what I did is, on the flight back, I messaged Mike and Jeremy on our team, and I put a calendar invite in for next week, and I was like, we're just gonna run a lab on this, kind of like a hackathon lab, and like, let's just take an hour together and figure out what these things can do.
[00:23:26] Mike Kaput: Yeah.
[00:23:26] Paul Roetzer: And so Jeremy on our team's looking into the connectors and trying to make sure we're, you know, good from, like, a safety perspective to do these things. And then we'll actually do this. Like, we'll spend an hour next Friday, like, hacking together, and like, let's pick a couple of these agents.
[00:23:40] Let's build something and see what happens. And again, like, I don't wanna overstate this, but if these things actually work, this goes back to when we first got some form of Workspace Studio agents in Google, and it's like, they're fine. This is a few months back.
[00:23:58] They're for like automating [00:24:00] email stuff and some calendar things, and that's okay. They're just rules-based things, though, nothing too crazy. This is a different level. Yeah. This is truly doing the work, and, you know, the ability to build agents for each role in the company really just starts to change how I think about this, because it's so easy to do.
[00:24:20] Like, you could literally train anybody to do this. Even somebody who's been hesitant to do anything with AI.
[00:24:26] Mike Kaput: Yeah,
[00:24:26] Paul Roetzer: we could run an intro to AI 30-minute class. Here's what it is, here's how it works, here's what agents do, and like, let's build an agent for you in real time, and you can just
[00:24:34] do these things in these lab environments. So I don't know, like, until we actually do this next Friday, Mike, and until we have time to play around, I don't wanna say this is transformative per se, but it has all the signs of being a very important thing. And then Microsoft did the same thing.
[00:24:54] Google with this agent designer is going to do the same thing. Like, it's pretty clear that by fall of [00:25:00] this year, if not sooner, depending on which platform you're on, they're all going to enable a knowledge worker, a non-technical knowledge worker to build agents and run them.
[00:25:10] Mike Kaput: Yeah. Okay. You know, it's really interesting to read through the announcement about these and start playing with them, because what really occurred to me is a subtly important point: it's powered by Codex. Huge.
[00:25:24] Because if you're one of these more non-technical users, of which I am one, and you haven't used Codex or Claude Code, this is why people are freaking out about those tools. Because it's a preview, essentially. It's a different modality, and not exactly the same as these agents, but they basically do the same types of things for non-coding tasks.
[00:25:46] They do agentic work, using files, code tools, memory, and skills to do way more than you can do with a prompt or just a chat. So I think people are about to wake up to what's [00:26:00] possible here. And just to kind of connect the dots, this is why we keep harping on about these tools: because the game changes when you go beyond just chat, I think.
[00:26:09] Paul Roetzer: Yeah, and it changes many things in organizational design, like I said. Yeah. Again, I don't wanna oversell it, but I said, you know, if you go back to episode 141, and even go back to episode 87 prior to that, mm-hmm, my projection was that the AI agent explosion would happen, that 2025, 2026 would be the starting point of it.
[00:26:33] And then that would continue on, and by 2027 we would completely transform work with agents. So this is something we've known was coming for multiple years; we've been talking about this. And I feel like we are clearly in the very early stages of not just the agent capabilities for the technical people and for development work, but now bringing that to knowledge work, making it as simple as building a [00:27:00] GPT. Which leads me to the usage and stuff, because yeah, there's so many people who have never built a GPT.
[00:27:06] So even that is advanced for most average users of this technology.
[00:27:13] Agent Usage: Separating Fact from Fiction
[00:27:13] Mike Kaput: I want everyone to keep this discussion in mind as we get into this next topic, because, you know, Paul, you and I have been talking quite a bit informally this week about agents at large and how you actually deploy them inside a real business today.
[00:27:29] So, a couple updates that came out, and then we're gonna get into what this discussion about agents has looked like for us personally over the last couple weeks. First up, some things that spurred this discussion. We saw Jason Lemkin, who owns and runs SaaStr, post a pretty widely shared take this past week about their use of agents in how they run that event.
[00:27:52] And some really interesting stuff on podcasts and in posts online where he was basically talking about using all these specialized AI agents to [00:28:00] essentially run different parts of the company. They use Artisan for outbound, Qualified for inbound, Agentforce for reactivation. They use agents for new customer acquisition.
[00:28:11] At the same time, Microsoft also, like you had mentioned, made Copilot's agentic capabilities generally available across apps in Copilot. And also, we talked about this, how OpenAI is rolling out workspace agents. Google is hyping up agents at Google Next, which we'll talk about. But these land in the middle of this bigger conversation you and I have been having,
[00:28:33] about kind of where we're at on all this and the open questions around AI agents, because there's no shortage of voices out there asking some version of the question: why aren't you all in on agents? Like, why aren't you doing every possible thing you can with agents right now?
[00:28:50] And Paul, I don't know, correct me if I'm wrong, right, we are not anti-agent. I feel like they're a hundred percent the future, and we're actively experimenting with general-purpose [00:29:00] agents like Claude Code and Codex. We have not gone all in yet on things like OpenClaw, but there are always really important open questions and nuance
[00:29:08] that I feel like people are just shoving under the rug here: what does actual production usage look like? What about security? What are the specific use cases that actually matter for a business? And the usage question you just alluded to, like, how do we price the usage of these things? So Paul, let's just get into this. Where do you wanna start?
[00:29:31] Paul Roetzer: Yeah. So I mean, really what happened is I got back late Wednesday night, you know, I'd already put this lab meeting on the calendar for Mike and Jeremy and I, and I hadn't had a chance to actually play with the agents yet. Yeah. And so I got in the office Thursday morning and I was like, all right, lemme just jump into ChatGPT real quick.
[00:29:49] So I jump in and I'm browsing these templates and looking at the connections, and it's like, oh my God, this might be it. This might be what we've been waiting for. Mm-hmm. And then Mike came in the office and I was like, dude, look at this. I'm [00:30:00] showing him these sample agents and these templates.
[00:30:03] And so again, coming fresh off of Google Next, all of this is fresh in my mind, because I met with some really interesting people. And it's just that random thing, you know, sitting next to somebody at lunch, or the person you're sitting next to at the keynote, and you just have these conversations. You might randomly run into a person who's heading up gen AI adoption and managing token budgets at one of these major companies.
[00:30:28] Mike Kaput: Yeah.
[00:30:28] Paul Roetzer: And I'm asking her, well, what are you doing? Like, what's happening at this company? What's going on with your developers? What's going on with marketing, sales, and customer success? Like, how real is this stuff within enterprises? So these are the kinds of conversations I'll allude to often on the podcast.
[00:30:41] Like, we're talking to the real people. And there's this balance between developers, who are hardcore pushing the frontiers of everything that's possible, seeing into a future that no enterprise is gonna touch for a while. Mm-hmm. Like, they aren't going [00:31:00] to do those things. And so when we're talking about this stuff on the show, we're trying to talk to the
[00:31:06] practitioners and the business leaders, who are often the non-technical people who have to actually figure out what this really means. They're trying to solve for, what are the token budgets we're giving our developers? And, you know, some people are like, oh, let's just do token maxing, burn all the tokens you want. And then I talk to somebody who's in charge of tokens, and they're burning through their whole monthly budget in two days.
[00:31:29] Like, I know, how are we supposed to budget for that? And they're going back to these vendors being like, we can't do this, this isn't a sustainable way to handle this. Then there's the vendor selection. Do we go all in with Anthropic? Or, well, there's GPT-5.5, like, is that a good model?
[00:31:44] Should we be using that? Or is this new agent designer from Google gonna be the thing, and should we just put all of our eggs in one basket with Google? So these are tough choices. The pricing models, getting back to the token budget, like, I've been transparent before about this: I just went into HubSpot today, and we're already outta credits.
[00:31:59] [00:32:00] I'm like, how the hell did you run outta credits already? The billing cycle is like three days old. Oh yeah. Like, what did we do to run outta credits? And I actually went in and I'm trying to audit: where did the credits go? What are they being used for? And it makes no sense.
[00:32:14] And so I'm just like, God, this is so frustrating. And then you mentioned risks. The other thing we'll hear about, and the SaaStr episodes are amazing by the way. Yeah, we'll link to them. They're just like, here's what we're doing, we're using 20 agents for this, we have these for that. And you start to realize that when you actually are on the frontier trying to innovate with these agents within a real business,
[00:32:35] how the hell do you govern them? Like, okay, now there's 20 agents running loose that have access to all these different connectors, and these people have the freedom to just go get more whenever they want. Mike can go get this subscription, Jeremy can get that one. And so now you actually have to manage these things. And these agents, they function off of knowledge bases, they function off of skills.
[00:32:57] Those things get outdated, right? Like, how are you managing [00:33:00] those and updating them? Is that in a Google Sheet? Like, where are we doing all this stuff? And then at Google Next, I watched a demo from the co-founder of Wiz, a recent acquisition by Google Cloud. And he was showing how they're actually managing the risk of these agents.
[00:33:16] And it was beautiful. Like, it was incredible to watch, but it also makes you aware of how unprepared most people are for everything that goes into running and governing these agents, right? As they get access to more and more data. So yeah, I don't know. I just keep coming back to, like, I love these practical use cases like SaaStr is doing.
[00:33:39] It's inspiring stuff. Like, it's really cool to hear these stories, and it's a real business that's like our business. I mean, they run events, so it's close to home for me. And I listen to what they're doing, and it's like, oh, that's a pretty cool idea. But you also listen to 'em and they're being totally transparent about the fact that they're just figuring this all out, because they'll build something, launch it, and then it breaks, and they're like, [00:34:00]
[00:34:00] what do we do now? Like, how do we fix this? We have no idea what's even happening. And then they're going and talking to Claude, being like, what broke? Because they're not the people who would usually take those things to production. And that's another element of this agent stuff.
[00:34:14] It's like we're being empowered to build these things, but like, I don't know how to take things to production and I don't know, like how to deal with it if something breaks. So I don't know. Yeah. It's like we could literally go any direction with this, but those are just some of my thoughts for the week.
[00:34:31] Having spent the week seeing agents being debuted, hearing them talked about, and then talking with real leaders at massive enterprises who are nowhere near prepared to do this stuff with agents, outside of their developers. And even then, it's like a free-for-all, and they have no idea how to manage the tokens and which vendors to use.
[00:34:55] And so, yeah, I don't know. It is the wild west right now, [00:35:00] but the people who are figuring it out are getting a really fast competitive advantage.
[00:35:04] Mike Kaput: Yeah. And you know, I wrote down kind of as we were preparing just a few big unanswered questions I have about agents, or let's call 'em at least not sufficiently answered.
[00:35:15] I'm gonna share them really quick just in case they're, yeah, go for it, helpful to people. But first is really: how can I more clearly think about different, let's call them, types of agents? Because in a practical sense, the more I learn, the more there's not just one type of agent. Like, Claude Code runs agents to do things in real time with periodic guidance, partnership, and handholding from humans.
[00:35:41] But that's materially different in practice from something like OpenClaw, which can do similar stuff but does so persistently and autonomously. And I don't necessarily think one is better or worse. It's just that when I think about this, there's already nuance that people aren't addressing, where I'm like, no, it's not just "AI agents."
[00:35:59] It's [00:36:00] like these are two, at least, very distinct paths to me. Yeah, and I'm sure there's others I'm missing, but I think there's more nuance. Like, just because I'm not using OpenClaw yet, or a 24/7 persistent agent, doesn't necessarily mean you're at a disadvantage. It just totally depends on the use case, right?
[00:36:17] So I think about that a lot, and I'm still kind of trying to work that out on my own. I often also am thinking, what are the actual use cases for always-on agents like OpenClaw? That sounds really obvious to say, like I could rattle off 20 different ways you'd use these. And keep in mind, again, reference the previous segment.
[00:36:34] I am not bearish on these. I think this is the future. But there's the real consideration: if I have to worry about this thing all the time, if I have to manage it all the time or try to troubleshoot it, if it breaks regularly, how is it remotely worth it for me to spend time on this? Versus, shouldn't I just be building out even better skills for Claude Code, or building the workspace agents in [00:37:00] ChatGPT?
[00:37:00] I don't know the answer here, but that's a real consideration for me. And then finally, you just hit on this: how in the hell do you pay for 24/7 persistent agents? I feel like there was this honeymoon period, because I think until really recently you could just plug OpenClaw or something into your Claude Max account, right, and use it that way.
[00:37:22] So you didn't have to just pay via API, I don't think, and you can't do that anymore. They turned it off. So, honestly, am I gonna spin up like a $500-a-month agent to do my grocery list? I don't think it would cost that much, but I have no idea. That's the point. It could cost 5 cents, it could cost $5,000 a month.
[00:37:41] I genuinely have no idea how to gauge this, and that's just as personal experimentation. Like, how in the heck do you figure this out as a business? How would you? I mean, that's what you're getting at, right? There's no predictability here. You can't budget for this.
[00:37:54] Paul Roetzer: Right. They've already shown in the last six months they're gonna keep changing the pricing models.
[00:37:58] So then, you [00:38:00] know, and I'm not saying they're gonna do this in a deceitful way, but the way this traditionally works in business is you get somebody hooked and then you jack up the price.
[00:38:06] Mike Kaput: Yes.
[00:38:06] Paul Roetzer: So, you know, let's say for us, we go next week on Friday, we're like, oh my God, these agents are incredible. And then we build a team internally that basically goes department by department and looks at workflows and problems and goals and rocks and says, okay.
[00:38:20] We're gonna centralize the building of agents, because it's gonna be too complicated if we have everybody doing their own thing. And let's get this small team together. We go through, we prioritize these things, we start tackling a couple workflows, a couple problems at a time. You build a bunch of agents, they're crushing it, they're part of our $20-a-month-per-person plan, and then all of a sudden they're not. Like right now with HubSpot's model, where we're burning credits and I have no idea where the credits are going.
[00:38:45] Yeah. And to your point, maybe it's now 5,000 a month instead of 300 a month, but now I'm hooked. Now these things are built into our workflows. And maybe they don't change it in two months, maybe it's in a year when they figure this out. [00:39:00] And it goes back to that pricing. And you know, I'd said this to you yesterday morning, Mike.
[00:39:04] I'm like, I don't get how this isn't eventually a human-replacement-cost thing. It just seems like, if there was a simple way for the labs to calculate the value of their own technology, which I don't think they're currently capable of doing, they would just charge more for it. So, for example, if I go into these agents next week and I figure out, like, wow, we can actually build a customer success assistant that's gonna do these things each week, each month.
[00:39:38] And if I had to hire someone to do that, that would be like a hundred hours of work, that's a full-time hire, and this agent's basically going to do that work. And now let's go do the same thing for sales. We'll build an SDR agent, and it's just going to basically do what an SDR would've done. Or an event marketer or whatever.
[00:39:55] Like, yeah, if we figure out a way to actually do it, [00:40:00] then I would happily pay. Like, if I knew as the CEO that that agent I just built, or a collection of agents working together, is doing the work of three people, and OpenAI came to me and said, hey, you built these agents, the value of that would be 300,000 a year,
[00:40:18] we're gonna charge you 3,000 a month instead of 20 bucks a month, I'd be like, yeah, alright, let's go. Okay. And so I feel like, for finance to truly get involved and manage this process as these agents become more prevalent within organizations, I can't imagine how a token- or credit-based budget, where you're constantly running into a limit, is at all possible or scalable for anybody.
[00:40:47] And I keep coming back to: it has to be simple, it has to be clear, it has to be understandable. I'm paying X a month, and I'm getting use of these things. [00:41:00] And I don't know, it's just like, you know, these models get 10x cheaper each year. Yeah. Yeah. So maybe at some point it's solved
[00:41:07] Mike Kaput: over time.
[00:41:08] Maybe.
[00:41:08] Paul Roetzer: Well, yeah, maybe at some point you're just like, 5.5's good enough, these agents crush it, I don't need GPT-7, and I know that it's gonna cost you, the lab, 10x less to serve me this model in 12 months. So yeah, just let me stay on the old model. I don't know. Or maybe that's where the open source stuff comes in.
[00:41:28] It's like, once we have an open source model that's good enough, like DeepSeek, and the numbers on DeepSeek are that it's basically on par with some of these frontier models.
[00:41:35] Mike Kaput: Right.
[00:41:36] Paul Roetzer: And so does it go back to open source? Does it swing back to where you're like, yeah, I'm happy with fifth-generation models?
[00:41:43] Like, I don't need... I don't know, and I truly don't think the labs know, because they've focused so much on building for developers who are cool with the token-maxing model, the we're-just-gonna-pay-for-our-tokens approach, 'cause they're used to that. And I don't think they've yet solved [00:42:00] for how to charge the way SaaS traditionally would have.
[00:42:03] Like, what is the evolution of a seat-based license? Yeah. And then, yeah, then you're like HubSpot, and you're like, okay, I'm just gonna build these agents and connect them over to HubSpot. So actually, I'm gonna get rid of a bunch of my seats, because I don't need 'em anymore and I can just access it through ChatGPT.
[00:42:19] Mike Kaput: Yeah. There's a lot more nuance to it than just go use agents.
[00:42:24] Paul Roetzer: Yeah. Yeah. And I think sometimes you get pushback, you know, that we're trying to belittle the capabilities of these agents or not give 'em enough significance. I just think sometimes people don't have the nuance of what really happens in an enterprise.
[00:42:39] Mm-hmm. And like how complex this is. And we spend our time talking to these companies all the time who can't even get Copilot rolled out, or where nobody's ever been trained how to even build a GPT or analyze a workflow and figure out where AI can fit into it. It's so messy when you actually get into the real stories of adoption. It's easy to [00:43:00] just see the technology and think, oh my God, everybody should be doing this.
[00:43:03] And it's like, no, they shouldn't. It's not ready for prime time yet. But if you can embed Codex and, you know, Claude Code right into a user interface so that the average knowledge worker can use them, it changes everything.
[00:43:19] Mike Kaput: Well, yeah, and to your point, you mentioned to me in the office, it's like even on the GPT front, it's like very few enterprises have fully explored what is possible simply with GPTs or simply even with connecting standard chat to valid useful data sources.
[00:43:35] Yeah, right. So there's so much value to be accrued and created there. I'm not saying you don't need agents, and that's for sure where we're going, but why does it just have to be that? This is also a path that I think gets overlooked, because we're all, you know, in the Twitter or X hype bubble,
[00:43:54] Paul Roetzer: right?
[00:43:54] Mike Kaput: Yeah. Where everyone's like, oh my gosh, I'm running my entire company with agents, which is amazing. Like, I'm sure [00:44:00] some people are doing that, but the vast majority of people are not remotely close to that.
[00:44:03] Paul Roetzer: Yeah. If you're an AI native company and you can do that from the ground up, you can take those risks.
[00:44:08] You go for it.
[00:44:09] Mike Kaput: Yeah, for
[00:44:09] Paul Roetzer: sure. That's not the reality for the vast majority of companies. These ones that we call AI-emergent, they're trying to figure out how to work within legacy systems, legacy talent, legacy governance structures, highly regulated industries. Like, it's not reality.
[00:44:25] Mike Kaput: Well, yeah. I mean, and I won't harp on this, but just one more consideration. It's so hard to predict what is worth investing time into anyway in AI, because a year ago someone would've been like, go build all your own agents, and you might've done really well with that. But then OpenAI comes out with this and you're like, why did I waste any of this time?
[00:44:42] Also, the architecture behind some of this, like RAG and things like that. Yes, and I don't wanna get over my skis on the technical stuff, but some of these methods are totally out of date now. So did I need to spend six months figuring this out, when I should have really just been building GPTs or skills or something, and then they flip the [00:45:00] switch and I can just click a few buttons and make an agent in ChatGPT?
[00:45:03] I'm not saying that's the right path, but it's really hard to predict. Like, should you just actually wait until it becomes a little easier to do? Right. Right. Alright, Paul. So before we get into rapid fire, one more announcement for this week. This week's episode is also brought to us by AI Academy by SmarterX, which helps individuals and businesses accelerate their AI literacy and transformation through personalized learning journeys and our AI-powered learning platform.
[00:45:30] We add new educational content weekly, so you will always stay up to date with the latest AI trends and technologies. And we wanted to spotlight this week our AI for departments collection, which right now features six core series and certificates designed to jumpstart AI understanding and adoption across departments.
[00:45:50] Right now we've got marketing, sales, customer success, HR, finance, and operations. I actually just wrapped up, Paul, AI for Legal this past [00:46:00] week, so I believe that would be coming out next week. Don't quote me on that, but very soon we'll have that done. These are the ideal launchpad for organizations that wanna level up their teams and accelerate AI adoption and impact.
[00:46:13] I'm actually gonna share a little later in the episode a few quick insights from the AI for HR series, which I taught. Just as a note, individual and business account plans are available now, and you can also buy single courses and series for one-time fees. So go to academy.smarterx.ai to learn more.
[00:46:31] Google Cloud Next '26
[00:46:31] Mike Kaput: Okay, Paul, first rapid fire. This past week, you were at Google Cloud Next '26 in Las Vegas. That event wrapped up, and their headline announcement was the Gemini Enterprise Agent Platform, this kind of full enterprise stack for building, scaling, governing, and optimizing AI agents that basically absorbs and replaces Vertex AI.
[00:46:52] Going forward, this bundles a few things, like a [00:47:00] low-code Agent Studio, an upgraded agent development kit, an agent runtime, a persistent memory bank, and some governance tools. It also has access to 200-plus models, including Gemini 3.1, Gemma 4, and also outside models.
[00:47:13] Paul Roetzer: Claude's in there, I think.
[00:47:13] Mike Kaput: What's that?
[00:47:14] Paul Roetzer: I think you get Claude in there too.
[00:47:15] Mike Kaput: Yes, Claude is in there as well. And then TechCrunch actually framed the platform as Google's response to things like Amazon's Bedrock AgentCore and Microsoft Foundry. They have a bunch of launch customers using this. They paired the platform with a refreshed Gemini Enterprise app.
[00:47:34] They made a $750 million commitment to their 120,000-partner network to accelerate agentic AI deployments. There's also a new agent marketplace. So Paul, you were at the event. What was your read on what Google announced? I mean, I think it's safe to say agent is probably the word of the year at this point,
[00:47:55] Paul Roetzer: that's for sure.
[00:47:56] Yeah, so I'm part of the Google Cloud Leader Circle, [00:48:00] it's like an invite-only thing, and so I get a day with Google's leaders, and that was Tuesday. I got to sit through some pretty amazing talks, including the opening talk from Thomas Kurian, the CEO of Google Cloud. And it was very apparent from the jump that, for them, everything's agents.
[00:48:18] So he said the goal is to make Gemini Enterprise the best place to run and manage agents. And then in his opening keynote at Google Next, he said bringing AI to every employee and every workflow was the goal they were focused on. Now, the thing you always have to differentiate with
[00:48:33] Google Cloud is, again, when they're talking to developers versus when they're talking to non-technical users. And a lot of the things that they traditionally would announce are focused more on that developer audience. A lot of things they've built, like Vertex and Agentspace, are not for your average user.
[00:48:48] You really need technical capabilities to get into them. One of my favorite sessions at the Leader Circle event was on AI at Google. They basically had some of their key people who are working [00:49:00] on AI transformation and AI strategy at Google talking about what they're doing. So I'll highlight a couple of those real quick.
[00:49:06] So Ryan Vach, who's the VP of AI transformation, talked about these lighthouse workflows. He was saying they're basically trying to focus on moving from just tasks into the actual workflows, and they want each business unit focused on two workflows. So they're all about prioritizing where the impact could be.
[00:49:23] And I really like this concept. It's something we've talked a little bit about ourselves internally. They wanna get past the cost savings, focus on growth and innovation, which, you know, I obviously love that thinking.
[00:49:33] Mike Kaput: Mm-hmm.
[00:49:33] Paul Roetzer: He talked about this analogy of going from fishing, where you're throwing lots of lines in the water, trying lots of use cases, to farming, where you're getting very strategic and deliberate.
[00:49:43] And then, one of my favorite things, which echoes what we say all the time, is this idea of reimagining work. They're seeing significant changes in how teams work together, and they're starting to field experiments within AI-native work labs, which might've stuck with me when I was thinking about doing this lab.
[00:49:57] I don't know. Yeah. But he also talked [00:50:00] about how it's so difficult right now to predict change, and that the lines between roles are starting to blur. We've talked a lot about that on the podcast, how, like, as the CEO, I all of a sudden have the ability to do people's jobs because I can just go in and, like,
[00:50:15] Mike Kaput: yeah,
[00:50:15] Paul Roetzer: use Claude.
[00:50:16] It's like, I'm getting annoyed, something's not ready, like, oh, I'll just do it tonight, I'll do it myself. And so they're seeing smaller teams emerging where these blurred roles are sort of allowed to blossom, I guess. It's cool that everybody can kind of do each thing. There was also Josh Spanier, who's the VP of AI and Marketing Strategy at Google. He said that even within Google, they were struggling to get everyone internally to use the technology, which again is kind of counterintuitive to a lot of people.
[00:50:43] But it's not, if you've spent time with these labs themselves. They're just like us. They have marketers and salespeople and CS people, and it doesn't mean just 'cause you work at Google that you're necessarily AI-forward, right? You're there to do your job. So, you know, he talked a little bit about [00:51:00] that, and how they started a dedicated AI team that was actually in charge of the contracts, the data sets, the tools, the systems.
[00:51:06] And so that team builds out a suite of tools that then is shared with teams to use. So it goes back to that idea of ChatGPT agents, like maybe we just build agents and we say, hey, sales team, here's your three agents, and here's yours, CS team. And that's a big question for me moving forward, and I think for all of our listeners, one that we often think about: are we centralizing the building of AI capabilities and then distributing them to teams?
[00:51:28] Or are we allowing everybody to just sort of do their own thing? He said they relied less on individuals to figure things out. They made a big investment in ad creative development and testing, and they're seeing a massive impact on cost and performance. And then he said something I thought was really interesting: no one joins Google, and I wrote "or any company" myself, to just be efficient.
[00:51:47] Like no one's goal in their job is to be as efficient as possible,
[00:51:51] Mike Kaput: right?
[00:51:51] Paul Roetzer: So that's why they focus on trying to bring the creativity and innovation. And then the one other note I'll share, I was really excited about this one. So [00:52:00] Jeff Dean gave the closing talk at the Leader Circle on the first day. And if you don't know Jeff Dean, we've talked about him on the show many times.
[00:52:08] He was the 30th employee at Google. He's actually the one who named Gemini, and the name came from the merging of the Google Brain team and DeepMind. So it was like the twins, like Gemini. And he said, again going back to this question of how mature agents are, that, in his words, we're starting to see glimpses of the agent economy, meaning we are still early.
[00:52:31] He specifically highlighted the lack of reliability and trust we should all have in agents right now when giving them access to credit card information, file systems, all of these things. So again, we say this on the show, but this is Jeff Dean, an authority on the topic, saying agents are early.
[00:52:49] You have to be very cautious with them. You have to be conscious of what you're giving them access to. But they're getting really good, and we're seeing glimpses of them making an impact. And then on [00:53:00] breakthroughs for AGI, like, how close are we? Again, there's very few people in the world more qualified to actually talk intelligently about this topic.
[00:53:07] He said he thinks we're still one to two major breakthroughs away, which echoes what Demis says. And when talking about what he thinks that key is, he alluded to the idea that continual learning was likely one of them. Now, having listened to Jeff and others for the last 15 years I've been studying AI, usually if they pick something, it means it's something they've been working on and made advancements on.
[00:53:34] And continual learning, to me, I've said this many times in the last 12 months, I think that is the unlock. Most of these labs think that if they can solve for continual learning, the model doesn't stop once it comes out of its training. It actually learns like humans do, from experience and inputs and outputs.
[00:53:53] It constantly changes its own weights and gets smarter and more capable. That might be the final [00:54:00] unlock. Mm-hmm. And my guess is DeepMind has made progress on this, and I would imagine the other labs have as well. It's also a very complex thing to put into the world because it can lead to the fast takeoff concept that we're probably not prepared for.
[00:54:17] So really cool stuff. They do an amazing job at those events. I mean, Google's just incredible, and the Leader Circle that Google Cloud puts on is great. And then the event itself, I was only able to stay for the first half of the first day of the actual Next conference. Even that was, you know, awesome.
[00:54:33] And then a final note. So Sarah Kennedy, who's a good friend of mine, she led a panel with Shaun White and Bryson DeChambeau, Shaun White the Olympian and Bryson the golfer. It was awesome hearing those two guys talk about what they're doing with AI in their sports, but just seeing them and their personalities was really cool.
[00:54:53] Like Bryson's kind of one of those people, like, a lot of people like to not like Bryson.
[00:54:57] Mike Kaput: Mm.
[00:54:58] Paul Roetzer: But when you sit there and listen, like, I don't know how you [00:55:00] couldn't like the guy. I mean, it was a really cool story and I was excited to kinda get to experience that.
[00:55:06] Mike Kaput: That's awesome.
[00:55:06] Paul Roetzer: Yeah.
[00:55:07] Meta's AI Employee Surveillance + Layoffs
[00:55:07] Mike Kaput: Alright, next up in less positive news, a leaked internal memo this past week revealed that Meta is trying to basically install tracking software on their US employees' computers to capture mouse movements, clicks, keystrokes, and occasional screenshots across a designated list of work apps and sites.
[00:55:28] The memo frames the rollout as a way to teach AI models to use computers by giving them real examples of how people actually use them. And CTO Andrew Bosworth described the end state as one where our agents primarily do the work and our role is to direct, review, and help them improve. The memo assures staff the tool will not read files or attachments, will not be used for performance evaluation, and will not learn incidental personal information picked up from the screen.
[00:55:58] There are reports that [00:56:00] Meta staff are protesting the rollout internally. I wonder why. Separately, Meta also, well, it leaked and then Meta had to announce it, I believe, is going to cut roughly 10% of its workforce, with layoffs beginning May 20th. There are additional cuts expected in the second half of 2026.
[00:56:18] A big part of this is the cuts are part of their effort to run the company more efficiently. And the chief people officer, Janelle Gale, told people that it was to allow us to offset the other investments we're making. I would just like to, without over speculating, point to what other investments Meta is making.
[00:56:37] There's exactly one that is quite large, and that is its CapEx guidance of 115 billion to 135 billion. That is spending basically on AI infrastructure, and it is nearly double what it spent in 2025. So AI, somewhat adjacently, is probably responsible for some of this. So Paul, the kind [00:57:00] of reason we're talking about this, I'd be curious about your thoughts first on: are they just basically trying to train agents to do what the humans are doing and then get rid of the humans?
[00:57:09] Also, like what do you think of the cuts and the layoffs due to the CapEx and investments they have to make to stay current?
[00:57:18] Paul Roetzer: Cuts and layoffs: expected. I would expect more, and not just from them. That's obvious, and that's gonna continue. The monitoring of employees, I mean, it's not uncommon. This isn't new.
[00:57:35] I'll say that. Yeah. So there's another social media company. I did a talk two years ago, and after I explained computer vision and the ability for things to be recorded and then analyzed and used as training data, I actually had an employee from a different social network company [00:58:00] come up and be like, is that why they've been recording everything I've been doing on my computer?
[00:58:04] And then she explained to me how they were using the data, she thought, but she wasn't aware that this was even possible. So this isn't new. It's not surprising at all. I think that at some point, you know, you have to think about the kind of organization you wanna run and the kind of talent you wanna recruit and retain.
[00:58:31] Yeah. And at some point, the best talent is gonna have choices to make about where they go to work. And if you're cool going to work for a company that, you know, is tracking literally everything you do and likely using it to train your replacement, is that motivating?
[00:58:52] Like, it goes back to that thing I just said about Google, saying, hey, listen, nobody comes to Google to work to be efficient. Like, it's [00:59:00] not like anybody wants to go to work to, like, watch an agent do their job. I'm picturing, like, an assembly line, and I'm just sitting here, like,
[00:59:07] just eight hours a day, I'm just watching it click around and do things. And that doesn't sound like a fun career. So, I don't know. Like, I get what they're doing. I understand. I mean, it's Meta. They're gonna be on the edge of this, and they're gonna do things that a lot of people are gonna hate, and they're gonna get bad PR about it, and they're gonna have pissed off employees.
[00:59:26] And that's the story of their history. Like they've always,
[00:59:30] Mike Kaput: yeah.
[00:59:31] Paul Roetzer: Done things that were, people felt were beyond the line of acceptability and they seem comfortable with that and it's just part of who they are. but I think every other company is gonna have to make these same choices because what they're doing is possible.
[00:59:46] Like, if you run a consulting firm or an agency, or you wanna pick operations or HR or finance in your own company, this tech exists. You can train them up and you can build agents based on what people do. [01:00:00] There's a startup from last year, I forget the name of it, and this is what they did. They sold this technology to enable you to do this.
[01:00:06] Yeah. So, yeah, if this is new to you, sorry. Like, this has been going on for a couple years, and it's gonna get tons of funding from VCs to do this. It's gonna get tons of payments to consulting companies to implement this, and they will absolutely use it to reduce their workforces. Like, there's no other reason to do it.
[01:00:27] So, yeah. I mean, and I'm not even trying to be hard on Meta here. It's just the reality. And so much of the time when we're doing things like this, or having these harder conversations about the reality, we're just trying to share with people what the reality is. And if you're working for a company that's doing this, there is no other reason either.
[01:00:48] It's either performance tracking or to train on what you do for your job.
[01:00:52] Mike Kaput: Right?
[01:00:53] Paul Roetzer: I can't think of a third reason why else you would do it. [01:01:00] So I think it's just, I guess, an awareness thing. And you gotta know the kind of company you're working for and what their intentions are with AI.
[01:01:08] And ideally you want to understand their responsible AI principles and whether or not they're a human-centered company. That's why I think it's important just for people to have levels of awareness and then educate other people about these things. 'Cause, you know, listeners to our show are more likely to get this stuff.
[01:01:24] They already knew some of this.
[01:01:25] Mike Kaput: Yeah.
[01:01:26] Paul Roetzer: But all your peers, your family, your friends, they don't know this stuff. And so sometimes it's just us trying to do our part to share it so that other people can go and educate people about it.
[01:01:36] Mike Kaput: Yeah. And to be clear, the intention behind this segment is not to pile on Meta specifically, because it's not anything new that companies monitor what their employees do on their work machines.
[01:01:50] Often that has happened well before AI. I think what is just really fascinating to me is, like, oh, this isn't just for security purposes anymore. They're just coming out and saying [01:02:00] there's, to your point, one of two reasons: either employees following guidelines or performing, i.e., are you doing work on your computer?
[01:02:07] Are you doing anything wrong on your computer? Which has existed for a decade now at major enterprises. But there's this new lane where it's like, oh, okay, this is training data.
[01:02:17] Paul Roetzer: Yes.
[01:02:17] Mike Kaput: Exactly, for computer-use agents. That gets really murky really quick.
[01:02:22] Paul Roetzer: And if I'm not mistaken, I don't think we covered this, but I'm pretty sure two or three weeks ago, Elon Musk, like, changed the terms for employees at xAI, and they had to agree to have everything.
[01:02:32] Mike Kaput: Oh really?
[01:02:33] Paul Roetzer: Yeah. So like, yeah, this is for this purpose. Yeah. Like, it's all about training data for Grok. And that's the thing: they're not even necessarily using this just for their own purposes. They're using this to train their models. Yes. So the work they do. So imagine if you can collect every interaction that your marketing team, your sales team, your CS team, whatever.
[01:02:54] And you also happen to be a company that trains AI models. You don't have to go license that data, because what's happening in [01:03:00] other instances is that the training labs, like a Scale AI, are paying lawyers and consultants, not for a company they work for, but saying, hey, we'll pay you $500 an hour to track everything you do on your computer for a few days.
[01:03:17] And then they're taking that to train the models to do the job of those people. So yes, that is the new thing. To your point, Mike, performance tracking and monitoring usage, that's not new. Using it as training data, and data to then replace those people, is new.
[01:03:33] Mike Kaput: Yeah, and I didn't even connect the dots until you just said it. This has to have Alexandr Wang's fingerprints all over it.
[01:03:39] They've totally been doing this. This is exactly what they were doing
[01:03:42] Paul Roetzer: at scale. Yeah,
[01:03:43] Mike Kaput: you're right. Yeah. okay.
[01:03:46] Apple Leadership Transition
[01:03:46] Mike Kaput: Well, our next topic this week, we actually, well, I guess it was technically this week 'cause we recorded on Monday, we covered Apple's CEO transition at the end of the last episode,
[01:03:57] 'cause that had [01:04:00] broken right before we started recording. Yeah. That John Ternus was going to succeed Tim Cook on September 1st of this year. In the days since, a bit more has come out, especially on that AI angle of what Apple's doing. So Cook and Ternus had an all-hands at the Steve Jobs Theater. Cook interestingly addressed some health rumors head on.
[01:04:19] He told employees, hey, I'm healthy, energy's high, and I plan to be in the role for a long time. Ternus teased an incredible roadmap ahead. He said AI is going to create almost unlimited potential for the company. According to Bloomberg, Ternus has already overhauled the hardware engineering organization around what he calls a new AI platform designed to speed up product development and improve device quality.
[01:04:42] On the same day as the CEO announcement, Apple promoted Johnny Srouji to a newly created chief hardware officer role, combining hardware engineering and hardware technologies into one organization. CNBC read this reshuffle as kind of a sprint to build in-house [01:05:00] chips for devices, with Apple doubling down on silicon for on-device AI.
[01:05:04] Obviously we've talked a bunch about Apple's new and improved AI-powered Siri, which has been delayed a couple times and is now expected to debut at WWDC in June of this year. They have a multi-year deal now with Google, reportedly worth around a billion a year, to power the new Siri on Gemini. So CNBC is kind of framing this transition as, you know, Ternus facing this defining challenge, which is, obviously Apple does more than just AI, but
[01:05:33] his job is kind of to fix the company's AI strategy, it sounds like. And Paul, obviously it's so early here.
[01:05:38] Paul Roetzer: Yeah.
[01:05:39] Mike Kaput: But given the new details, what is your initial read? Do you think he's the right guy for the job to fix Apple's AI problems? Where do you see this going?
[01:05:48] Paul Roetzer: Yeah, I mean, time will tell, but everything I've heard about him from people online is just extremely positive.
[01:05:54] Sounds like everybody's known he was gonna be the guy. Everybody's saying he's the right guy. I watched a [01:06:00] crazy clip where he was doing an interview about the Cinema Display, you know, his first major project there. And he was talking about when he was at, I think it was the manufacturer or whatever, and they were piecing it together, and they had designed the screws in the back of the display, that no one is ever gonna see, to have like 21 grooves in them.
[01:06:18] It was a very specific number, and he actually took a screw out, took a magnifying glass, and found that they had 30 grooves instead of 21 and made 'em redo it. They were trying to stress what a perfectionist, like a Steve Jobs type product guy, he is. So it seems like that's what they're getting.
[01:06:37] And like I said, probably in the last episode, I think if they weren't comfortable with the roadmap they have to execute, it wouldn't be the time. So they're obviously very comfortable here. Interestingly, at Google Next, Thomas Kurian, when he was doing his opening keynote on the actual first day of the conference, did mention Apple.
[01:06:55] They just put the Apple logo up and everybody's cheering. And then he just said [01:07:00] something about them being a preferred provider for their models, and that was it. Yeah, like, there's no big thing. He didn't go into a ton of detail. He talked a little bit about Siri, but it was basically that partnership that we've talked about previously on the show.
[01:07:11] So, I don't know. Again, as a long-time Apple user and fan, I'm excited. It seems like Wall Street's liked it so far. I mean, I think their stock's been doing pretty well since the transition, which isn't always a given when you have a CEO change.
[01:07:25] Mike Kaput: Right.
[01:07:26] Paul Roetzer: So, yeah, I don't know. Everything seems positive. And I've said many, many times, I just love a working Siri.
[01:07:32] I'd love Apple Intelligence to really be intelligent. I think it's billions of users that would get to experience AI in an entirely new way, and I think a very positive and exciting way, if Apple solves how to do it the right way on the iPhones and all their devices, [01:07:51] you know, AirPods and watches and glasses and everything else they've got.
[01:07:52] Mike Kaput: Yeah, I was gonna say, very longer term, but people, we included, have talked about Apple's fall from grace in AI, but [01:08:00] they're also half a chance away from cracking the code on AI wearables. They're, like, the best people to do it. And if they do that, it's, like, game over.
[01:08:08] Like, it's a whole different ball game. Right.
[01:08:10] Paul Roetzer: Dude, the data they have is insane. There's so many things Apple does that you don't know about 'cause they don't feature them. You have to, like, find these things. And I was analyzing steps the other day. Like, I love the Health app in Apple. It's incredible.
[01:08:26] And I track everything. I've shared my personal story about my heart and how it, you know, kind of found something with that. But they track things like distance between steps. And it's all either my watch or my phone that they're getting the data from, and the fact that it has this kind of data, you just realize the depth of data they can capture from these wearables, or from the phone in your pocket, whatever it may be.
[01:08:55] And then you start to imagine like, my goodness, like what could they do with that [01:09:00] data?
[01:09:00] Mike Kaput: Yeah.
[01:09:00] Paul Roetzer: If they have the intelligence baked in. So, I'm serious, if you've never done it before, go into the Health app and just click, like, show all data, and just look at the metrics they have on you. It's wild.
[01:09:12] Mike Kaput: Then, an experiment I did, which worked somewhat well, is to then have Claude Code go build some things to connect to that data and then tell you some stuff about it, which is interesting.
[01:09:22] Paul Roetzer: That's fascinating.
[01:09:23] Mike Kaput: There was a lot of trial and error involved. Not perfect. You probably just get the same thing you have in the Apple Watch, but it was a fascinating experiment.
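An aside for readers who want to try something like Mike's experiment: the Health app can export all of its data (profile icon, then Export All Health Data), which produces a zip containing an export.xml full of Record entries. Below is a minimal, hypothetical sketch, not the code Mike had Claude Code build, that tallies daily step counts from that format. The inline sample XML stands in for a real export so the snippet runs on its own; `HKQuantityTypeIdentifierStepCount` is the actual HealthKit record type for steps.

```python
# Minimal sketch: summarize daily step counts from an Apple Health export.
# Assumes the export.xml produced by the Health app's "Export All Health Data".
import xml.etree.ElementTree as ET
from collections import defaultdict
from io import StringIO

def daily_steps(xml_source):
    """Sum step-count Record values per calendar day."""
    totals = defaultdict(int)
    # iterparse streams the file; real exports can run to gigabytes.
    for _, elem in ET.iterparse(xml_source, events=("end",)):
        if elem.tag == "Record" and elem.get("type") == "HKQuantityTypeIdentifierStepCount":
            day = elem.get("startDate", "")[:10]        # "YYYY-MM-DD"
            totals[day] += int(float(elem.get("value", 0)))
        elem.clear()                                     # keep memory flat
    return dict(totals)

# Tiny inline sample so the sketch runs without a real export file.
sample = StringIO("""<HealthData>
 <Record type="HKQuantityTypeIdentifierStepCount" startDate="2025-05-01 08:00:00 -0400" value="512"/>
 <Record type="HKQuantityTypeIdentifierStepCount" startDate="2025-05-01 17:30:00 -0400" value="2048"/>
 <Record type="HKQuantityTypeIdentifierHeartRate" startDate="2025-05-01 09:00:00 -0400" value="72"/>
</HealthData>""")
print(daily_steps(sample))  # {'2025-05-01': 2560}
```

On a real export you would pass the path to export.xml instead of the StringIO sample, and the same pattern extends to heart rate, distance, or any other record type the app captures.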
[01:09:30] Paul Roetzer: The Apple Watch, if you've never had one, is incredible. I was a watch guy before. Yeah. Like, I collected watches, nice watches. I stopped because, really, the utility of wearing an Apple Watch every day... I hate when I would not have the data for a couple hours. Like, you know, you put a nice watch on or whatever to go do a keynote or something, and it's like, ah, damn, I don't have my heart rate while I was talking.
[01:09:51] And sometimes you wanna see that. Like, does my heart rate go up when I'm on stage? I'd be curious. So yeah, I just love that. It's so good.
[01:09:59] AI Use Case Spotlight
[01:09:59] Mike Kaput: Alright, [01:10:00] so next up we have our now-regular segment, our AI use case spotlight here at SmarterX, where every week we're trying to give you a quick look under the hood at some real
[01:10:11] uses for AI that we're exploring, building, or deploying in our own work, and sometimes in our own personal lives. So Paul, I just have a really quick use case to share this week. If you have anything to share, we can talk through that too.
[01:10:24] Paul Roetzer: yeah, go for it.
[01:10:25] Mike Kaput: So for me, actually, I stole this one. The use case is not mine, but I don't think the person will mind me stealing it, because it's actually from Taylor Rady, our director of research, who's taking the lead this year on SmarterX's State of AI for Business report.
[01:10:39] So typically, we have done, for five years in a row, a State of Marketing AI report through Marketing AI Institute and SmarterX, where we've surveyed hundreds and then thousands of marketers and business leaders on AI adoption and usage. This year, we decided to really expand that out to all functions of a business.
[01:10:56] So we just closed the survey. We have, I [01:11:00] think, over 2,100 responses, the most we've ever had, spanning every function, industry, and company size. So we are knee deep in creating the actual report. It's really interesting because Taylor is taking the lead on this this year. I'm kind of overseeing some stuff and reviewing it. But, you know, I think I had shared last year that the report alone used to take hundreds of hours to do all the manual data analysis, writing, and synthesis. In 2024 and 2025, I cut that down to probably a few dozen hours, which felt like an incredible win.
[01:11:42] Taylor did the report in like a day this year. I've looked at it in cursory fashion, and it's really good. We're obviously going through it with a fine-tooth comb with human oversight, and there's human complexity and tone and rewriting and reworking, [01:12:00] but she was able to cut this down another order of magnitude in how long it took.
[01:12:05] And the cool thing is, it wasn't just about time this year, 'cause in past years I've been like, oh my God, thankfully it didn't take me this long, I gotta run to the next thing. This year, Taylor and I have spent a huge amount of awesome time going really deep on two things. One, how can we ask even smarter questions of the data and go further and deeper to create an even better report? So we're reallocating the time, we're not actually netting out with less time here, but it's gonna be 10 times better.
[01:12:36] But also, as part of building out our research function at SmarterX, how can we blow the doors off activating this report, both internally for sales, customer success, Academy, everyone else, and externally across a ton of different channels, which is an area we've historically struggled with because it takes so long to do all this stuff?
[01:12:56] So really, it is night and day. I was [01:13:00] blown away last year by what the models could do, especially Gemini and Claude, with both data analysis and writing. This year doesn't even compare. They just smoked what we were able to do last year, and it's jaw-dropping. I mean, I'm continually reminded, like, I know this, I see this every day, but then something like this is just so cool to see how good this stuff has gotten.
[01:13:19] And it's really cool because recurring use cases compound. We've done this every year now for several years, using AI for parts of this. It just gets to do more and more every year, and the results just compound and compound and compound, and it's incredible to see. So, super excited about that.
[01:13:36] We're releasing this, in a few weeks here, so we'll have more on that, and more announcements around that. But we're really excited.
[01:13:43] Paul Roetzer: I can't wait to see it, for one. And two, as someone who has personally spent hundreds of hours in pivot tables building that report, I love to hear the stories of how we are solving for making it more efficient.
[01:13:57] Mike Kaput: Oh, and I will just note, too, at our [01:14:00] AI for Writers Summit that is coming up in a couple weeks, if you go,
[01:14:05] Paul Roetzer: May 7th, isn't it?
[01:14:05] Mike Kaput: May 7th. So if you go to marketingaiinstitute.com and go to events, you can see there's a free registration option. Taylor's actually giving a talk about exactly how she did this.
[01:14:15] Super tactical. Yeah, you can learn, you know, step by step how you can do this for yourself too. So go check that out.
[01:14:21] Paul Roetzer: That's awesome. Yeah. I'll just do a quick one. I actually forgot I ran this, it's funny. So sometimes I'll just go into, like, ChatGPT and see what are the recent prompts I gave.
[01:14:31] So apparently, like I said, I forgot I did it. I think this was last night or this morning. I had seen Jason Calacanis maybe tweeted about how we were gonna have all these new companies created, yes, and that was gonna create all these jobs. But not everybody's really made out to be an entrepreneur.
[01:14:51] And so, just spur of the moment, like, I'm not tired of this argument. I'm actually an advocate of this idea that [01:15:00] entrepreneurship is maybe the thing that balances out the job loss. But I found myself wondering, are we seeing any signs of that yet? Like, are we seeing an increase in startups?
[01:15:09] So I just literally went into deep research in ChatGPT and I gave it the prompt. I said: one of the theories about how the economy will account for job losses driven by AI is that we will see a rapid increase in entrepreneurship and the number of startups created. Is there any data showing an increase in startup creation over the last 12 to 18 months?
[01:15:27] So I actually haven't gone through and read this whole thing yet, but it went through 33 citations and 341 searches and took 23 minutes to write me a report on startup creation, AI displacement, and entrepreneurship since late 2024. And it has a bunch of charts and methodology and sources. So I guess I'll just use it as a reminder that, hey, sometimes that's a great use for AI:
[01:15:51] curiosity. It's like, I wonder. And it can be at the most random moment, and you can just set it off. I mean, deep research is an agent, like, it's going and [01:16:00] doing its own thing. It builds a plan and then it goes and takes actions. This is a form of an agent, and it just goes to work, and it does it for 23 minutes, and then I forget I did it.
[01:16:09] And then I come back to it. But yeah, I mean, sometimes those are the best use cases, just that spur of the moment, hey, I wonder if I could do this thing, or come up with this idea, or create this visualization, and then just throw it in there and see what happens. So yeah, that would be a fun one for me for the week, and I gotta go read this.
[01:16:25] Yeah,
[01:16:26] Mike Kaput: right. That's the key.
[01:16:28] AI Academy Spotlight
[01:16:28] Mike Kaput: Alright, so one other recurring segment we've started doing: each week we spotlight one of the courses in AI Academy to give people real, actionable takeaways from the course, whether or not you ever end up becoming an Academy member, just to give you some of the value for free that we're creating in AI Academy.
[01:16:44] So Paul, I'm gonna go through this week, our AI for HR course series very briefly. and kinda share some takeaways there.
[01:16:51] Paul Roetzer: Sounds good.
[01:16:52] Mike Kaput: So what's really cool and interesting, and also a little scary, in AI for HR is that it is really at the front [01:17:00] lines of how AI is changing traditional systems. In our research and in creating this course, and I'm the one who taught it, we found that AI is creating chaos across the core work of HR.
[01:17:13] I mean, not only is it creating huge opportunities for HR as a function, but they're running into real issues where candidates and employees are using AI too. And it's not necessarily bad to use AI in your job search, but it leads to all sorts of really messy questions, because we're seeing hiring signals get really compromised as candidates use AI to not only game the system, but also really kind of hack their way through the process.
[01:17:44] And it's like, you can't use these traditional signals anymore to see if someone actually knows what they're talking about. So, you know, employees themselves, even after they're hired, are using AI to do their work in ways managers can't see. This is affecting everything from resumes to performance [01:18:00] reviews to just overall productivity.
[01:18:02] And HR professionals have a really tough job right now. That's kinda the big macro trend. And here's one of the practical takeaways that we teach in this course: for your average HR person, this can feel deeply overwhelming. There's so much going on in AI, there's so much to learn, and they're already dealing with the fallout, in a negative way sometimes, of how the hiring process has changed.
[01:18:26] One thing we teach and walk you through in this course is just a really simple framework to get started thinking about, okay, I know ChatGPT does this, I've heard about Claude over here, how do I wrap my head around the opportunities for me and my job? And we use this framework called, pretty simply, the three A's. The three A's are a sequential order to think about AI: automation, augmentation, and acceleration.
[01:18:53] So first, you wanna start looking at things like: where can AI handle low-level, low-hanging-[01:19:00]fruit, repeatable work that you can literally have it do for you in order to save time? Because that's where 99% of knowledge workers, and HR professionals especially, are really stuck: they are drowning in reactive admin work that is not the best and highest use of their time.
[01:19:17] So automation is a key initial step. And, you know, back to that discussion, productivity is not everything, but freeing up some time so we can be more innovative can be really helpful. And then second is augmentation, and we walk you through a series of questions on how to actually surface these opportunities.
[01:19:36] Augmentation is using AI as a copilot. So let's say you freed up time by automating some things with AI. You then can start doing more of the work you're meant to do, the more strategic, higher-value stuff. Well, AI can actually augment you there to supercharge the value you create, which benefits you and helps you do better work, not just faster work.[01:20:00]
[01:20:00] And then finally, over time, after you are effectively automating and augmenting your function as the case may be, acceleration is kind of the bigger picture stuff, right? The AI agents, the more transformative projects, that's where we then walk you through thinking about not just what AI can do for you or how AI can make you better, but what AI can enable.
[01:20:20] That just was not possible before. So we go through a bunch of use cases and examples of that in the course. One really interesting one is, I believe, Shopify uses an internal talent marketplace completely driven by AI to match people internally to different roles. That's a really structural, long-term, almost sci-fi use of this technology that completely upends how the company actually works.
[01:20:47] So that's kinda the practical starting point: running your work through and asking yourself a series of questions through those three lenses to, sequentially, step by step, without biting off more than you [01:21:00] can chew, actually see some real value from AI right out of the gate.
[01:21:04] Paul Roetzer: It's a lot of stuff we need to be applying to our HR
[01:21:07] Mike Kaput: at SmarterX.
[01:21:07] Exactly. Right, right. Yeah. I have to admit, I mean, some of the stories even in this course, and the case studies, and even just some of the research, even stuff that didn't make it in, you're just like, I would not wanna have to figure this out. And I talked to some HR leaders
[01:21:22] Paul Roetzer: Recently. Right.
[01:21:24] Or, you know, there's a lot of that overwhelm feeling. Yeah. Of trying to not only figure it out for yourself internally, how are we gonna use it, but how do we manage and, like, recruit and hire people who are obviously using it in the process?
[01:21:37] Mike Kaput: Oh yes.
[01:21:37] Paul Roetzer: It's a very dynamic space right now.
[01:21:41] AI Product and Funding Updates
[01:21:41] Mike Kaput: Alright, Paul, last but not least, we've got a bunch of AI product and funding updates.
[01:21:47] So I've got a bunch of these teed up like last week. There are a lot of things going on. I'm gonna run through these real quick, and if there's anything that jumps out, you let me
[01:21:55] Paul Roetzer: Go for it.
[01:21:56] Mike Kaput: All right, so first up, OpenAI launched ChatGPT [01:22:00] Images 2.0, its first image model with native thinking capabilities, that can search the web, generate up to eight consistent images from one prompt, and,
[01:22:09] this is important, produce readable text that is accurate at 2K resolution. It is widely seen right now as, like, the number one image model out there and is making a lot of waves. OpenAI also launched ChatGPT for Excel and Google Sheets. This is a sidebar app that lets Plus, Pro, Business, and Enterprise users build, edit, and analyze spreadsheets
[01:22:29] in natural language and pull in connected ChatGPT apps alongside their data. OpenAI also announced Codex Labs, plus partnerships with Accenture, PwC, Infosys, and other global system integrators to deploy Codex across large engineering organizations. Anthropic and Amazon expanded their partnership, with up to five gigawatts of new AWS compute for Claude and a fresh $5 billion investment from Amazon.
[01:22:57] There may be up to $20 billion more following on that, [01:23:00] and a $100 billion, 10-year commitment from Anthropic to AWS, plus direct availability of the Claude platform inside AWS. Anthropic also added a memory feature to Claude Managed Agents, now in public beta, that lets agents retain and build on learnings across sessions via file-based storage.
[01:23:20] Anthropic is apparently running a live pricing test on roughly 2% of new signups, with existing Pro and Max subscribers unaffected, as the company experiments with how Claude Code access is packaged across tiers. So they're also figuring out that pricing problem we were talking about. Google rolled out an upgraded version of its Deep Research agent built on Gemini 3.1 Pro, adding a new Max tier for extended asynchronous reasoning, MCP connections to proprietary data sources, and native in-report charts, graphs, and infographics.
[01:23:55] Google also signed a new multi-billion-dollar cloud deal with [01:24:00] Mira Murati's Thinking Machines Lab, giving the startup access to Google Cloud infrastructure. Microsoft, as we talked about, has made Copilot's agentic capabilities generally available in Word, Excel, and PowerPoint. So just a little more detail here on the product side.
[01:24:15] This allows co-pilot to take multi-step app native actions directly inside document spreadsheets and decks for Microsoft 365 copilot premium personal and family subscribers. At Adobe Summit 2026 Adobe rebranded Experience Cloud as CX Enterprise and introduced CX Enterprise Coworker and yet another trend of agents and agentic AI layer that orchestrates customer experience workloads across Adobe's stack.
[01:24:43] SpaceX struck a deal giving it the right to acquire AI coding startup Cursor for $60 billion later this year, or pay $10 billion if it walks away from the acquisition, while the two companies collaborate on model training using xAI's Colossus [01:25:00] supercomputer.
[01:25:00] Paul Roetzer: That's a wild one.
[01:25:01] Mike Kaput: That is
[01:25:02] Paul Roetzer: a wild one. I, again, I'm not gonna get into it 'cause we're running on time here, but, that one might be worth unpacking.
[01:25:09] There's, there's a lot to that story.
[01:25:10] Mike Kaput: Yeah.
[01:25:12] Paul Roetzer: Another time.
[01:25:14] Mike Kaput: Tencent and Alibaba are in talks to invest in Chinese AI lab DeepSeek's first-ever funding round, at a valuation now of more than $20 billion, with Tencent reportedly pushing to take as much as a 20% stake. Moonshot AI, which we've talked about in the past, released Kimi K 2.6, a new open source coding model that claims state-of-the-art scores on certain benchmarks and can run 4,000-plus tool calls across 12-plus hours of continuous execution.
[01:25:46] And finally, Zapier launched Zapier Benchmarks, a new AI evaluation suite anchored by Automation Bench, that tests agents on end-to-end business workloads across sales, marketing, operations, support, finance, [01:26:00] and HR, using deterministic scoring grounded in 2 billion-plus monthly tasks from 3.7 million Zapier customers.
[01:26:09] Paul Roetzer: We will have Dan Slagen on to talk about that at MAICON.
[01:26:11] Mike Kaput: Maybe. That would be awesome. I would love to pick his brain about that.
[01:26:16] Paul Roetzer: I dunno if that's Dan's domain, but
[01:26:17] Mike Kaput: Right.
[01:26:18] Paul Roetzer: Zapier's got a lot going on right now. We were talking about their internal literacy stuff, like, a week or two ago, right?
[01:26:23] Mike Kaput: Yep.
[01:26:25] All right. So Paul, that is it for this week. One quick final announcement here. Like we said at the top of the episode, this week's pulse survey will be live when you listen to this at SmarterX.AI/pulse. We're going full-on agents this week, just like the topic. So we're gonna ask about things like where's your organization at when it comes to deploying AI agents today, and also what is holding your organization back from deploying AI agents more than you already are today.
[01:26:54] So I'll be very interested to see that, Paul, based on the answers from this week as well. But [01:27:00] thank you for breaking everything down for us. Another busy week. I know we've done two episodes this week, so I feel like I've got a pretty good pulse on what's going on in
[01:27:10] Paul Roetzer: Yeah. And I was actually home for, like, two days in a row for the first time in a while, right, right.
[01:27:14] Well, yeah. And so, yeah, next time we're together, well actually, we'll be back in town, so enjoy your travels. Good luck with Experience Inbound. And I'll be off to, I think at the time this drops, I'll be doing the AO Engage event, and then we'll be back, we'll be back in Cleveland.
[01:27:31] I'll see you for our ChatGPT Agents Lab next week. That we'll record, now wait, yeah, on the next episode. Yeah, I'm super anxious. I hope it's everything I think it could be.
[01:27:41] Mike Kaput: I'm very excited. Yeah.
[01:27:43] Paul Roetzer: All right, everyone, have a great week. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.ai to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded [01:28:00] AI blueprints, attended virtual and in-person events, taken online AI courses and earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community.
[01:28:11] Until next time, stay curious and explore ai.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.
