While Demis Hassabis and Dario Amodei debated AGI timelines on the global stage at Davos, the economic reality of artificial intelligence was hitting closer to home.
This week, Paul Roetzer and Mike Kaput dissect the growing gap between the "powerful AI" future promised by the labs and the labor market disruption happening now, Google DeepMind’s latest hiring needs, Anthropic’s release of Claude’s constitution, and more.
Listen or watch below, and scroll down for the show notes and transcript.
This Week's AI Pulse
Each week on The Artificial Intelligence Show with Paul Roetzer and Mike Kaput, we ask our audience questions about the hottest topics in AI via our weekly AI Pulse, a survey consisting of just a few questions to help us learn more about our audience and their perspectives on AI.
If you contribute, your input will be used to fuel one-of-a-kind research into AI that helps knowledge workers everywhere move their companies and careers forward.
Click here to take this week's AI Pulse.
Listen Now
Watch the Video
Timestamps
00:00:00 — Intro
- AI Academy
- AI for Agencies Summit
- 2026 Marketing Talent AI Impact Report Webinar
- MAICON 2026 Speaker Submission
00:03:03 — AI Pulse
00:05:10 — AGI Comes to Davos
- The Day After AGI with Dario Amodei and Demis Hassabis - World Economic Forum
- Anthropic’s Amodei on AI: Power and Risk - Bloomberg Live
- Hassabis on an AI Shift Bigger Than Industrial Age - Bloomberg Live
- LinkedIn Post from Paul Roetzer
- ExecAI Insider Newsletter
- Dario Amodei's Machines of Loving Grace
- Google DeepMind CEO Demis Hassabis: AI's Next Breakthroughs, AGI Timeline, Google's AI Glasses Bet
00:21:26 — Amazon Layoffs and the “Great Divergence”
- Amazon plans thousands more corporate job cuts next week, sources say - Reuters
- Young will suffer most when AI 'tsunami' hits jobs, says head of IMF - The Guardian
- Artificial Intelligence and the Great Divergence - The White House
- Anthropic CEO Says Government Should Help Ensure AI’s Economic Upside Is Shared - The Wall Street Journal
- X Post from Kevin Roose
- Message from CEO Andy Jassy: Some thoughts on Generative AI
00:38:59 — AI for Course Creation
00:58:55 — Google DeepMind Is Hiring a “Chief AGI Economist”
01:02:06 — OpenAI Warns AI Is Reaching “High” Cybersecurity Threat Levels
01:07:18 — Anthropic Publishes New “Constitution” That Governs Claude’s Behavior
- Claude’s New Constitution - Anthropic
- Amanda Askell Website
- Amanda Askell on X
- Will ChatGPT Ads Change OpenAI? + Amanda Askell Explains Claude's New Constitution
01:17:39 — New Survey Shows Big Disconnect Between Employees and Leaders on AI
- CEOs Say AI Is Making Work More Efficient. Employees Tell a Different Story. - The Wall Street Journal
- X Post from Paul Roetzer
- The AI Proficiency Report - Section AI
01:24:39 — xAI Wants to Automate White-Collar Workers
01:28:29 — How Do Credit Pricing Models Work?
- Understand HubSpot credits and billing - HubSpot Knowledge Base
- Plans and credits - Lovable
- OpenAI Plans to Take a Cut of Customers’ AI-Aided Discoveries - The Information
- HubSpot Credits
01:38:22 — AI Product and Funding Updates
- Claude
- Apple
- Slack
- Humans&
- Anthropic
Today’s episode is also brought to you by our AI for Agencies Summit, a virtual event taking place from 12pm - 5pm ET on Thursday, February 12.
The AI for Agencies Summit is designed for marketing agency practitioners and leaders who are ready to reinvent what’s possible in their business and embrace smarter technologies to accelerate transformation and value creation.
There is a free registration option, as well as paid ticket options that also give you on-demand access after the event. To register, go to www.aiforagencies.com
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: This is not meant to be creating fear and anxiety. It is meant to be a reality check on the propaganda that comes out from tech companies. They know they need fewer people across the organization. They just can't admit that. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.
[00:00:23] My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all.
[00:00:51] Welcome to episode 193 of the Artificial Intelligence Show. I am your host, Paul Roetzer, along with my co-host Mike Kaput. It is Monday, January 26th at [00:01:00] 9:45 AM Eastern time. This week, if it’s anything like last week, it'll probably be a very busy week, including the possibility of, I don't know, there's like some new models being rumored.
[00:01:11] OpenAI is kind of the hot rumor at the moment, that we might get a, like a 0.3, maybe a 5.3 or something along those lines. So we'll see. But it's gonna be another busy week. If you follow me on LinkedIn, you saw, or you might have seen, a post I put up over the weekend, which was, there is just an insane amount of stuff this week.
[00:01:30] We, in a normal week, may have 30 topics. This week was well over 50. It's kind of hard sometimes to filter this out and figure out what's a main topic, what's rapid fire, and what's newsletter only. There are probably, I don't know, four or five rapid fires today that I would've liked to have made into main topics.
[00:01:50] But we try and do this in a reasonable amount of time. So, all right. So today's episode is brought to us by the AI for Agencies Summit. This is our [00:02:00] third annual, and it's happening noon to five Eastern Time on Thursday, February 12th. That is a live event. There's also an on-demand option.
[00:02:09] You can purchase a registration for that, though there's a free registration option as well. So, the AI for Agencies Summit is designed for marketing agency practitioners and leaders ready to reinvent what's possible in their businesses and embrace smarter technologies to accelerate transformation and value creation.
[00:02:26] During the event, you'll join other forward-thinking AI agency professionals to see how agency leaders are using AI to drive innovation and efficiency, learn AI workflows, talk about AI and the legal side, intellectual property in particular, and then evaluate how AI is reshaping brand-agency relationships.
[00:02:46] So you can go to AIforagencies.com. And learn more about it. That is presented by Screendragon, so we appreciate their support and partnership in that event. So again, AIforagencies.com to learn more [00:03:00] and take advantage of that free registration option.
[00:03:03] AI Pulse
[00:03:03] Okay, AI Pulse. If you're new to the show, we start off each week with just kind of an overview of a pulse survey we do.
[00:03:09] This is an informal poll of our listeners. You can go to SmarterX.ai slash pulse and participate in this week's survey. We'll actually share what those questions are at the end. So last week we asked: how does the news of ads coming to ChatGPT affect your view of the platform? No impact, I expect ads on free or lower paid tiers, was 52%.
[00:03:31] 27% said it makes me trust the results less. By the way, I would assume most people listening to this show are likely paid users, so I'm guessing that the free or the $8 Go plan doesn't really affect most of our listeners. So again, this is an informal poll. 27% said it makes me trust the results less. Interesting.
[00:03:51] And then about 18% said I'm considering switching to a competitor because of the ads. The second question was: are you comfortable allowing an AI agent like [00:04:00] Claude Cowork to read and edit files directly on your computer? 51% said only if limited to a specific folder. 29% said no, I'm not ready for that level of access.
[00:04:12] And then a little mix here: 11% said I need to see more safety use first, and about nine and a half, 10%, said yes, the productivity boost is worth it. That one's become more relevant with, well, we'll touch a little bit on this whole Clawdbot thing. We're not gonna go into it in great detail today, 'cause I think most people who would be using this thing that emerged over the weekend would not be the standard listener or business user.
[00:04:34] But anyway, so those are, those are the responses we had. Like I said, we have a very kind of action-packed episode here. We're gonna do our best to get through as much of this as we can. And like I always say, subscribe to the newsletter, the This Week in AI newsletter, because Mike does a great job of curating everything we don't get to in the episode. There's all kinds of other interesting things.
[00:04:57] So there's literally, like, dozens of topics we're not gonna get to [00:05:00] that didn't make the cut. Alright, so last week was a big week in Davos for the World Economic Forum, and that is where we begin today's episode, Mike.
[00:05:10] AGI Comes to Davos
[00:05:10] Mike Kaput: Yes, we do, Paul. Because in the past week, world leaders and business titans gathered for the annual World Economic Forum in Davos, Switzerland, and AI, specifically AGI, was one of the most buzzworthy topics at the event.
[00:05:25] So specifically, Google DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei took the stage multiple times, and in multiple interviews, to issue strong predictions and even warnings about the imminent arrival of AGI. So Amodei, for one, repeated his prediction that AI was on track to possibly exceed human capabilities across most fields, including Nobel Prize-level research,
[00:05:51] by 2026 or 2027. Hassabis estimated a 50% chance of AGI by 2030. Both predicted that the [00:06:00] closer we get to AGI, and once we hit AGI and beyond, this could completely disrupt and even break traditional economic models. At one point during interviews at Davos, Hassabis said that this will be, quote, 10 times bigger and 10 times faster than the Industrial Revolution in terms of disruption.
[00:06:18] Amodei also at one point predicted that, thanks to AGI, we could see a very weird economic situation that we've, quote, almost never seen before, where we both hit five to 10% GDP growth, thanks to what AI enables, and also simultaneously have 10% plus unemployment due to AI disruption. So both leaders actually also talked about
[00:06:41] a possible AI self-improvement loop as the primary driver of this acceleration. This is where AI models end up actually researching and coding the next generation of AI. So Paul, this Davos news, and that's just touching the tip of the iceberg here, is a doozy, honestly. Can you maybe [00:07:00] unpack for us more of what Demis and Dario were getting at in these interviews? And specifically, I'm curious what you take away as a business leader, or just, you know, a citizen in society right now, based on what they were saying.
[00:07:12] Paul Roetzer: It's a really good session. It's about 31 minutes long. I'd definitely recommend people watch the full session. It's available on YouTube. You know, I think it's interesting how obviously these two have very high conviction about the future and the fact that these models are gonna continue to scale and get smarter and more generally capable.
[00:07:32] they obviously have tremendous respect for each other. I would say, they obviously talk a lot. they alluded to the fact that they have, you know, these conversations specifically, it sounds like, related to risk and model capabilities. I think that, one distinction, and they did a pretty good job of kind of illuminating this, is they, they define AGI differently as everybody seems to at this point.
But, [00:08:00] Amodei in particular doesn't even like the term AGI. He's been very vocal about this. He prefers powerful AI. So I think it's helpful to rewind back to October 2024 for a moment to sort of set the stage for this conversation that they had. So, Amodei published an essay called Machines of Loving Grace, which was actually referred to during the session by the interviewer.
[00:08:22] And in that article it was sort of his optimistic view of what was possible. He alluded to the fact that he's nearing release of another essay that is much more focused on the risks of AI. But in Machines of Loving Grace, he said: by powerful AI, I have in mind an AI model likely similar to today's LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently.
[00:08:50] It has the following properties. So in terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields, which he illuminated as [00:09:00] biology, programming, math, engineering, and writing as examples. In addition to just being a smart thing you talk to, it has all the interfaces available to a human working virtually, including text, audio, video, mouse, and keyboard control, as well as internet access.
[00:09:15] It does not just passively answer questions. Instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously in the way a smart employee would, asking for clarifications as necessary. It does not have a physical embodiment other than living on a computer screen, but it can control existing physical tools, robots, and laboratory equipment through a computer.
[00:09:39] This is one where it does deviate from Demis. He definitely envisions, like, robotics and embodiment as part of AGI, which I'll get to in a moment. Back, you know, back to Machines of Loving Grace. Dario said the resources used to train the model can be repurposed to run millions of instances of it,
[00:09:55] and the model can absorb information and generate actions at roughly 10x to [00:10:00] 100x human speed. Each of these millions of copies can act independently on unrelated tasks, or if needed, can all work together in the same way humans would collaborate. We could summarize this as a country of geniuses in a data center.
[00:10:14] So again, when Dario is asked about AGI, this is what he thinks of. So when he gives you timelines of one to two years for AGI, he's actually referring to powerful AI, which he defined in Machines of Loving Grace. So, just context. When Demis is asked about AGI, there's actually, he gave a little bit of background in the Davos interview, but he's also been on the circuit the last, like, six days.
[00:10:41] He did an interview with the Big Technology podcast, and in that one he was asked specifically about AGI. He said, I don't think AGI should be turned into a marketing term for commercial gain. This, everything, was a shot at Sam Altman and OpenAI. Like, it was crazy. Like, every answer Dario and Demis gave was basically just taking a [00:11:00] shot at OpenAI.
[00:11:00] And this was a very direct one. I think there's always been a scientific definition for AGI. My definition is a system that can exhibit all the cognitive capabilities humans can. And I mean all, he said. That means the highest levels of human creativity that we always celebrate, the scientists and the artists that we admire.
[00:11:20] It means not just solving a maths equation or a conjecture, but coming up with a breakthrough conjecture. Again, that's a veiled shot at OpenAI, who is continuously touting their models solving math equations. So Demis said, that's much harder, not solving something in physics or some bit of chemistry, some problem, even like AlphaFold protein folding.
[00:11:42] He's referring to systems from Google DeepMind, but actually coming up with a new theory of physics, something like Einstein did with general relativity. Can a system come up with that? Because of course we can do that as humans. He said the smartest humans with their human brain architectures have been able to do that in history.
[00:12:00] The same on art. So then he talks about creativity, like Picasso and Mozart creating these original things. He said, today's systems, in my opinion, are nowhere near that. You need to have a system that can potentially do that across all these domains. Then on top of that, I'll add physical intelligence, because of course we can play sports and control our bodies and perform at amazing levels, elite sports.
[00:12:21] So again, he's now talking about embodiment being a part of his AGI definition, and he said we're still a ways off. Then he talks about robotics as another example. So I think an AGI system would have to be able to do all of those things to really fulfill the original goal of the AI field. I think we are five to 10 years away from that.
[00:12:41] So again, when you hear Dario say one to two years and Demis say five to 10 years, hopefully I've just clarified for you: they are not talking about the same thing. And they're not being asked when does AI start to impact jobs and transform the economy, and when will organizations [00:13:00] start to restructure their hiring plans?
[00:13:01] Like, that is not the question they're being asked. For all of us, though, for the average business professional, business leader, government leader, investor, educator, whoever you are that listens to this podcast, their definitions of AGI aren't terribly relevant to your life at the moment. Like, the systems can get incredibly more powerful, much more general and capable, much better at long horizon tasks,
[00:13:30] completely change how work is done, and none of that would fit into their definition of what AGI is. So my point there is, don't get caught up in whether or not Demis thinks it's five or 10 years away, or Dario thinks it's one to two years away. They're not talking about the same thing, and they're likely not even talking about something that actually affects you in the near term.
[00:13:53] So the definition I have been using roughly over the last 18, 24 months is AI systems that are generally capable [00:14:00] of outperforming the average human at most cognitive tasks. Now, the reason I use that definition is because I'm thinking more economics and future of work. I'm thinking, when are these things good enough that they're better than the alternative of a human
[00:14:14] doing it? That has kind of been my premise for a couple years, and why I've been pushing for more economists to be thinking about this and philosophers to be working on this, because I don't think that's far off at all. I actually believe we're probably already there, if that's your definition. And then the other definition we've talked about is sort of the economic Turing test of, would you hire an AI agent or swarm of agents working together instead of a human?
[00:14:38] Paul Roetzer: And I think we're there, basically. Now, it might require some additional post training and being adapted to specific verticals, but the current systems enable that. So with that context, again, I would suggest go watch the full session. But a couple things that I think are really interesting seem to be
[00:14:56] becoming somewhat universally agreed upon [00:15:00] outside of Yann LeCun. Like, there's just, Yann LeCun is an outlier. Demis called him out by name in one interview, like, I don't agree with Yann that language models aren't enough. There's a few dimensions of progress that kept coming up in these interviews.
[00:15:14] So self-improvement is a big one. We've talked about this a lot in the last couple months on this podcast. That is, after these things are trained, so once they come out and they've, you know, spent all this money training this thing, then they go through the post training, and, you know, they teach it to be good at writing and math and biology and all these things, that it can then improve itself,
[00:15:33] that it doesn't need to go through a new training run to get better. So self-improvement is how humans work. We learn from our experiences, our environments, the knowledge we gain, and we get better, smarter, more generally capable. We don't have to hit a reset button and start from scratch, which is what models do, you start from scratch every time.
[00:15:52] So self-improvement is critical to human intelligence and machine intelligence. Memory is fundamental. We're [00:16:00] starting to see that happening. Continual learning is where, again, you're learning as you go. Demis actually talked about Personal Intelligence, which was recently put into the Gemini app, and which I talked about in the last episode, as a key part of the continual learning process: it's able to constantly learn based on your emails, your events, the documents you work in, the things you do, the decisions you make.
[00:16:24] So we're seeing very early versions of that. And then world models, which is its ability to understand physics and interact with the world the way a human does. So those four things, and again, there's probably about two dozen dimensions the different AI labs are working on and pushing on, those four seem to be right at the top of the list for all the labs.
[00:16:45] Right? So then, you know, just a couple of things within the session: they talk about AGI timelines, their growth, like, their vision for kind of where this all goes, and they seem very closely aligned on that stuff. Jobs and human purpose. I thought [00:17:00] that was a really important part of the conversation.
[00:17:01] Again, I've definitely become convinced that the labs themselves are the wrong people to ask about what's gonna happen to jobs.
[00:17:10] Paul Roetzer: I mean, I've listened to enough interviews over the last three years with Altman and Demis Hassabis, who I've mentioned many times I have more respect for than anybody in the industry.
[00:17:20] Dario Amodei, you know, take your pick. I just get a sense that they're detached from the reality of what their models are gonna be used for. Dario actually gave probably the closest answer I've heard to a realist answer, which is that they are struggling to start to deal with this themselves within Anthropic. He said, I think maybe we're starting to see just the little beginnings of the impact on jobs in software and coding.
[00:17:45] I even see it within Anthropic where I can look forward to a time where on the more junior end and then on the more intermediate end, so the manager level, we actually need less and not more people. We're thinking about how to deal with that within Anthropic. My worry is as the [00:18:00] exponential keeps compounding, and I don't think it's going to take that long, somewhere between a year and five years, it will overwhelm our ability to adapt.
[00:18:10] So that's the thing that to me has been missing, because I think most of these AI leaders think about AI researchers. They think about, like, what they do in their research labs, and they think about, okay, we're trying to automate research. They don't think about marketers and salespeople and customer success people and operations people and legal people. Like, that's not their world.
[00:18:29] And so what we're doing is saying, hey, this tech's pretty good, and if we just personalize use cases for all these different departments across our company, we're gonna see this massive lift. And they're not getting that it means fewer people. So, yeah, I don't know.
[00:18:44] I thought that was interesting. They get into competition. Oh, the thing about the human purpose, I thought was good, because, you know, that was Demis' thing: like, yeah, we gotta think about jobs, but we also have to think about, like, what happens to our meaning and our purpose, and what happens to the human [00:19:00] condition and humanity as a whole,
[00:19:01] which I think they're, you know, starting to think more about. They talk about risks of the self-improving systems. And so, yeah, I don't know. It was just, it was a really good interview. A lot packed into 30 minutes. I think if you're trying to get, like, a good sense of the landscape of where we are right now, it's a good interview with two of, like, maybe the most prominent thinkers in this space, who I feel like are actually
[00:19:25] thinking philosophically about this, you know, more about the meaning of what they're doing. And then I think you always have to keep in mind that Google owns an estimated 14% of Anthropic. Yeah. So they're a major player. And I would say it's probably very, very safe to assume Anthropic and Google DeepMind are very closely aware of what's happening with each other.
[00:19:49] I would probably put Safe Superintelligence and Ilya in the mix. Demis went out of his way to mention his respect for Ilya, unprompted, in an interview. Ilya is a former Googler. I [00:20:00] would not be surprised at all if there's collaboration happening behind the scenes between Dario, Demis, and Ilya. I think Elon and xAI are out on their own.
[00:20:09] I think OpenAI and Meta are generally out on their own as well. And so just as we think about the landscape moving forward, I think there's some collaboration happening, and I don't think there's enough yet. I actually would take that back slightly. Elon and Demis are friends, and Demis will commonly reference the fact that they are friends and they stay in contact.
[00:20:31] So as much as xAI is sort of, like, pushing the frontiers of some of the more risky things that could be done here, I do think Demis and Elon have respect for each other and talk to each other. I think OpenAI and Meta are maybe, like, the biggest outliers here.
[00:20:46] Mike Kaput: Yeah. And I think it's also interesting when you zoom out and say like, narratively from a PR and public conversation perspective, if you had said to someone a few years ago that AGI would be headlining [00:21:00] Davos, they would've looked at you like you were insane.
[00:21:02] Paul Roetzer: I probably, I mean, I've shared that story before. Like, I always wanted to talk about AGI going back to, like, 2019, and I just, like, avoided it. I didn't talk about it on the podcast, I didn't put it on LinkedIn, anywhere. I was like, these people just aren't ready for that, like, the whole big thing, 'cause they'd never even interacted with a chatbot.
[00:21:20] So, yeah, it's amazing how fast things have changed.
[00:21:26] Amazon Layoffs and the “Great Divergence”
[00:21:26] Mike Kaput: All right, so our next big topic this week is kind of a few interrelated stories that have happened in the past week or so that might start to suggest that we're seeing some of the effects of the AI acceleration that, for instance, Amodei and Hassabis were talking about at Davos.
[00:21:43] So first, the White House Council of Economic Advisers released a report called Artificial Intelligence and the Great Divergence, and it basically argues that nations and firms leading in AI infrastructure and adoption are poised to accelerate growth at rates [00:22:00] significantly higher than the rest of the world.
[00:22:02] And it draws parallels to the 19th century and argues that AI may trigger another, quote, great divergence, similar to what occurred in that time period, when the Industrial Revolution caused industrializing nations to accelerate their growth relative to the rest of the world.
[00:22:21] Second, while the White House's outlook here seems relatively optimistic and growth oriented, other forecasts, not so much. Because at Davos, the managing director of the International Monetary Fund, Kristalina Georgieva, described AI as, quote, a tsunami hitting the labor market. IMF research actually suggests the technology will affect 60% of jobs in advanced economies and 40% globally, with entry level and middle class roles facing the highest risk of elimination.
[00:22:51] And on that note, Georgieva has specifically warned that the middle class is the group that will be potentially squeezed most by the effects of AI, [00:23:00] potentially seeing wages fall without a productivity boost, while high earners get an income boost from the technology. And last but not least, Amazon is rumored to be preparing to cut approximately 14,000 corporate positions as part of the second phase of a plan to eliminate 30,000 white collar roles across units including AWS and Prime Video.
[00:23:23] Amazon CEO Andy Jassy has actually done a bit of flip-flopping recently on some of the reasons for these layoffs. So during some job cuts in October 2025, he seemed to be indicating the cause was largely the productivity gains being achieved from generative AI, and made some comments around how they expected needing fewer people due to generative AI.
[00:23:45] But later he kinda walked that back: on a third quarter earnings call, he attributed the cuts more to culture and bureaucracy, not AI. But regardless, if the full multi-phase layoff happening here, with its total of [00:24:00] 30,000 jobs, does happen, it would end up being the largest layoff in Amazon's history. So, Paul, we've got a few threads kind of coming together this week.
[00:24:10] I'm curious about your perspective. Are you seeing an acceleration in possible signals that we're, unfortunately, or fortunately, as you may look at it, about to have a disruptive 2026? Like, how are you looking at these signals, and how are you thinking about all this?
[00:24:26] Paul Roetzer: Yes. yeah, I mean, I haven't been shy about stating this.
[00:24:30] I have a lot of private conversations with leaders who are directly making these decisions, or leaders who are connected to people that are making these decisions. And I can tell you point blank, more layoffs are coming, and they're going to be significant. So, I'll kinda rewind. The White House paper comes from the Council of Economic Advisers.
[00:24:55] The Council of Economic Advisers is an agency within the Executive Office of the [00:25:00] President, established by Congress in the Employment Act of 1946. It is charged with offering the president objective economic advice on the formulation of both domestic and international economic policy. The Council bases its recommendations and analysis on economic research and empirical evidence, using the best data available, to support the president in setting our nation's economic policy to promote employment, production, and purchasing power under free competitive enterprise.
[00:25:28] Pierre Yared serves as the acting chair, and Kim Ruhl serves as a member of the CEA. I will say, regardless of political affiliation, this is an exceptionally well written document. It's 27 pages. It's very well researched. It's nuanced. It doesn't take a really strong stance one way or the other.
[00:25:49] The general output is that we anticipate significant disruption, most likely, but we have to look at early indicators 'cause we don't necessarily see it in [00:26:00] GDP yet. And we don't necessarily hear it directly tied to job loss, but that doesn't mean it's not there and it's not simmering, in essence.
[00:26:07] Paul Roetzer: It largely lacks any political propaganda, which was refreshing, honestly, to read. It does lean pretty heavily on third party data. They do a great job of citing everything. Like, this is a professionally done report. This is not a political report. That's, I think, the main point I'm trying to make here.
[00:26:23] So as you mentioned, they talk about the great divergence specifically, you know, in relation to the Industrial Revolution causing industrialized nations to accelerate growth relative to the rest of the world, and they try and say, like, is AI gonna cause this same thing? The administration is laying groundwork for American AI dominance by accelerating innovation, infrastructure development, and deregulation, which we have talked about as recurring themes within this.
[00:26:45] They do a really nice job, honestly. Like, if you didn't know anything about AI and you were trying to understand the economic impact of it, this would not be a bad first read, honestly. Mm. Like, they do a really nice job of [00:27:00] framing the basics of what is AI, what's generative AI, things like that.
[00:27:04] So it, you know, talks about the future outlook: the advent of generative AI, based around large language models, will initiate a new wave of profound economic transformation in the US, promising significant boosts to productivity and growth. As AI technologies become more integrated into the workplace, economists are reevaluating long-term projections for gross domestic product.
[00:27:23] They talk about the uncertainty around that, how some say it could be, you know, same as usual, 1 to 2%, and up to 45% annual, you know. So they're giving this range, and they're not making a commitment either way. They're not offering an opinion. They're saying, here's how we're gonna evaluate this. So they say their framework for understanding the intelligence of AI looks at it across two dimensions:
[00:27:43] its ability to perform different tasks, from writing essays, to identifying objects in pictures, to writing computer code and solving math problems, and how the AI's capabilities on those tasks compare to human-level intelligence. So this goes back to what I was saying about AGI, and like, how does it compare to the average human?
They give definitions of AGI and [00:28:00] superintelligence. They talk about the caveat of their report analysis being the limitations of economic analysis on AI, meaning there isn't that much out there. Like, we're trying to look at this and project what we should do as a country, and there's honestly, like, a lack of economic analysis, which is what I have been, like, screaming about for the last three years: why aren't we doing more on the economic side of this, right?
[00:28:24] They provide a really good framework here. They said economists often think of the productive power of an economy as coming from three factors: the quantity of labor, the quantity of capital, and total factor productivity, or TFP, that is, a measure of an economy's efficiency and technological progress.
[00:28:41] A rising TFP indicates that an economy is producing more goods and services from the same amount of labor and capital, or the same output with fewer inputs. So the problem here is, it's an important indicator, but it's not a leading indicator, because it can take time for [00:29:00] that to show up within the systems.
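For context, here is a minimal sketch of the standard growth-accounting math behind TFP. The report describes the three factors in prose; the Cobb-Douglas form and residual calculation below are the textbook versions, added for illustration rather than quoted from the report:

```latex
% Standard growth accounting (textbook form, for illustration):
% Y = output, A = total factor productivity (TFP), K = capital, L = labor,
% \alpha = capital's share of income.
Y = A \, K^{\alpha} L^{1-\alpha}

% TFP growth is the residual left over after accounting for
% growth in capital and labor:
\frac{\Delta A}{A} \;\approx\; \frac{\Delta Y}{Y} \;-\; \alpha \frac{\Delta K}{K} \;-\; (1-\alpha) \frac{\Delta L}{L}
```

This is also why TFP lags: the residual can only be computed once the output, capital, and labor data are all in, which is why the report pairs it with the earlier signals discussed next.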
[00:29:02] And then they talk about key metrics to track: investments in AI models and infrastructure; performance, like benchmark scores and falling cost per token, which I'll actually touch on a little later on in a SaaS pricing conversation, it's a very important topic; and then adoption and usage.
[00:29:17] So business usage, for example, revenue increases of these model companies. So overall, really good. The thing I'll connect here, Mike, that I thought of this morning as I was prepping is, they're talking about the great divergence from an industrial perspective, within countries. I'm thinking about
[00:29:33] those same principles applying to businesses.
[00:29:36] Mike Kaput: Yeah.
[00:29:36] Paul Roetzer: And I think it's a good name, or analogy, for the moment we find ourselves in, where there is this great divergence occurring, where some businesses are racing forward and figuring this stuff out, and others are just, like, sitting at the starting line.
So we often frame, like, business AI transformation in a very simplistic way: understanding, what the models are, what they're capable of, how that applies to your business; piloting, [00:30:00] where you're constantly testing things, trying to find personalized use cases for people across roles, across departments; and then scaling, where it's consistently being infused across every aspect of the organization, every role.
[00:30:11] It's affecting your hiring plans, like, you're truly now, you know, all in. And then it got me also thinking on adoption from an individual user perspective. And I'll kind of come back to this one a little later on, but I started with just, like, a super simple framing here. You know, we've been sharing a lot lately on the podcast, Mike, about ways you and I are doing this, using AI.
[00:30:32] And I would consider us probably pretty advanced, like, power users. But let's say you take any organization, take any team, like, let's just pick a marketing department at an enterprise, and say it has a hundred employees, and you give Copilot or ChatGPT licenses to those hundred employees. Your first level of adoption is the basic user.
[00:30:52] They're gonna use AI to ask questions and complete simple tasks, so, like, summarizing meetings, doing emails, things like that. They're [00:31:00] basically using AI as an answer engine. The intermediate is more advanced. They're using the more advanced capabilities. Maybe they know what a reasoning model is. They know what deep research is.
[00:31:09] They're doing some more complex problem solving and task completion, long horizon, high value tasks and projects. So they're focused more on increasing productivity and value creation. They're daily active users of these things. They're in there all the time. They might have some custom AI systems, like GPTs and Gems.
[00:31:26] They're probably playing around with building agents into workflows. In this case, the intermediate business user is thinking of it more as an assistant and an advisor. Through continuous prompting, you're having deep conversations. It's not a simple question, a simple prompt. And then there's the advanced user, who's always experimenting with the latest capabilities and tools.
[00:31:45] They're reimagining workflows. They're always saying, what's a smarter way to do everything I'm doing? They're redefining work in a human plus machine environment. And in that case, AI is more of a coworker and on-demand subject matter expert. And so [00:32:00] I think part of the problem that we're seeing in business is we talk about this binary adoption thing,
[00:32:08] like, do we or do we not have Copilot and ChatGPT, and have we or have we not taught our team about it? Okay, like, great, that's the starting point, but that's the basics. Like, you've given 'em some prompts and you gave 'em a tool. Are you actually enabling change management to drive intermediate and advanced usage for, like, at least 20 to 50% of your staff?
[00:32:29] So this is the problem I see over and over again. You go into a company, they say, yes, we have ChatGPT licenses for everybody, or Copilot, or Gemini, whatever it is. It's like, okay, great. Are they using it? Yeah, yeah, yeah. Weekly active usage is really strong. Like, 40% of people used it this week. Okay.
[00:32:44] How many times did they use it this week? Well, the average user was three. Okay. What did they do with it? Did they ask it a question, or did they, like, go build a custom GPT that saved them 10 hours last week? And so I think until you drill in, you realize that all this talk [00:33:00] about lack of ROI or, like, adoption is incomplete. We're not asking the next-level-down question to say, yes, but are they actually getting value from it?
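To make that drill-down concrete, here is a hypothetical sketch in Python of the gap between a weekly-active-usage number and the basic/intermediate/advanced segmentation described above. Every field name and threshold is invented for illustration; none of it comes from the episode or from any real product's analytics:

```python
from dataclasses import dataclass

@dataclass
class UsageRecord:
    """Hypothetical per-employee AI usage stats (illustrative only)."""
    sessions_this_week: int       # what a weekly-active-usage dashboard counts
    custom_assistants_built: int  # e.g., custom GPTs or Gems created
    agents_in_workflows: bool     # agents wired into real work processes

def adoption_tier(u: UsageRecord) -> str:
    """Map a user onto the basic / intermediate / advanced tiers from the episode."""
    if u.agents_in_workflows or u.custom_assistants_built > 0:
        return "advanced"
    if u.sessions_this_week >= 5:  # roughly daily use; threshold is arbitrary
        return "intermediate"
    return "basic"

team = [
    UsageRecord(3, 0, False),   # "active" this week, but only simple Q&A
    UsageRecord(12, 2, True),   # power user reimagining workflows
    UsageRecord(1, 0, False),
]
wau = sum(u.sessions_this_week > 0 for u in team) / len(team)
tiers = {t: sum(adoption_tier(u) == t for u in team)
         for t in ("basic", "intermediate", "advanced")}
print(f"WAU: {wau:.0%}, tiers: {tiers}")
# Prints WAU: 100% even though two of three users never get past the basic tier.
```

The specific thresholds don't matter; the point is that a single adoption percentage collapses exactly the depth-of-usage signal the ROI question depends on.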
[00:33:03] So I saw this tweet from Kevin Roose, who you and I both follow, and he said: I follow AI adoption pretty closely, and I've never seen such a yawning inside/outside gap. People in San Francisco are putting multi-agent Claude swarms in charge of their lives, consulting chatbots for every decision,
[00:33:27] wireheading to a degree only sci-fi writers dared to imagine. People elsewhere are still trying to get approval to use Copilot in Teams, if they're using AI at all. It's possible the early adopter bubble I'm in has always been this intense, but there seems to be a cultural takeoff happening in addition to the technical one, and that is not ideal.
[00:33:44] And so, Mike, I'd love to get your thoughts on this. I'll end with, like, my take on this Amazon thing. So whatever Amazon says, it sounds like they're gonna lay off 10, 15,000 people this week, whatever it is. Which, by the way, I don't wanna sound unsympathetic to [00:34:00] that. That's terrible. Yeah. And if you are affected by those layoffs, I'm sorry that that is what is happening, and hopefully you find a landing in a company that you're able to grow in your career with.
[00:34:10] So I don't wanna talk about these numbers as though there aren't people behind these numbers. There are. There is a layoff happening this week, it sounds like, whatever public messaging Amazon gives to this. And Mike, this is really hot for you and I, because we're doing the Talent AI Impact webinar on Tuesday, January 27th.
[00:34:28] Yeah. So, like, you may be listening to this while Mike and I are actually running a webinar on this exact topic that we've done research on. So regardless of what Amazon's PR messaging is, I'm just going to take you back to June 2025 and a memo that Andy Jassy wrote. So this is his own words, and then you can draw your own conclusions
[00:34:47] about 30,000 people being laid off. He said in this memo: in the next few years, we expect that generative AI and AI agents will reduce our total corporate workforce as we get efficiency gains [00:35:00] from using AI extensively across the company. Now, lemme put that quote in context, 'cause sometimes you can say, oh, it's taken outta context.
[00:35:06] Okay, here's the full context. We have strong conviction that AI agents will change how we all work and live. Think of agents as software systems that use AI to perform tasks on behalf of users or other systems. There will be billions of these agents across every company and every imaginable field.
[00:35:23] Today, we at Amazon have over 1,000 generative AI services and applications in progress or built. But at our scale, that's a small fraction of what we will ultimately build. We're going to lean in further in the coming months. We're going to make it much easier to build agents, and then build or partner on several new agents across all of our business units and G&A areas.
[00:35:44] As we roll out more generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today and more people doing other types of jobs. It's hard to know exactly where this nets out over time, but in the next few years, we expect this [00:36:00] will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.
[00:36:06] And then Gene Munster, an investor and analyst, shared a tweet about, like, the Amazon thing. And I had replied. I said, this is a hundred percent accurate. I said, leaders don't want to say publicly what they're all thinking and saying privately. And again, I've been in these conversations; they are a hundred percent saying this privately.
[00:36:24] The reality is, AI is the primary driver of the leaner organization. The comments underscore how difficult it is for the C-suite to talk openly about AI impacting jobs, given the negative effect of those comments. So again, like, this is not meant to be creating fear and anxiety. It is meant to be a reality check on the propaganda that comes out from tech companies.
[00:36:49] They cannot say it, but it is a hundred percent what is happening in the meetings. They know they need fewer people across the organization. They just can't admit that. [00:37:00] So this, I know this is a lot to unpack, Mike, but like this is a really big important topic.
[00:37:06] Mike Kaput: Yeah, and I would just say, I wouldn't underrate that comment from Kevin Roose, because I feel that, like, very deeply, that there's this divergence, there's this divide. Not in the sense of, oh, you're all
[00:37:18] left behind, or way, way, way behind. It's just a very different world you live in. I'm not saying I'm putting multi-agent Claude swarms in charge of my life quite yet, but when you're deep into using these tools and unlock what's possible with them, you see a fundamentally different version of current reality and the future than a lot of the people that, through no fault of their own, we talk to or educate or try to counsel, who just have not seen this stuff in the way we have.
[00:37:49] And it's really eye-opening and a bit sobering.
[00:37:52] Paul Roetzer: Yeah, and I will say, you know, in these executive meetings that I have, it's not like everyone [00:38:00] lacks empathy for the situation and just wants to get rid of people.
[00:38:03] Mike Kaput: You're right.
[00:38:03] Paul Roetzer: They all wanna believe that more jobs are gonna be created, that there will be places for these people to go.
[00:38:10] And so do the AI labs. Like, they talk about this as though it just always works out. Like, we always figure this out anytime there's a general purpose technology that impacts society and the economy; it always works out. And yet, if you point blank ask the leaders of the labs or the CEOs of these companies, okay, what are those jobs? Like,
[00:38:28] next 12 to 24 months, let's say you have to lay off, you know, 10, 20% of your workforce. Where are you putting them? What are the new jobs? You will get crickets. Nobody knows what the near term answer to this is. They all just blind-faith believe that over five to 10 years it'll figure itself out.
[00:38:47] And I don't disagree with that. I do think that there's a reasonable possibility this works out great in the next decade. I do not think that is what happens over the next two years.
[00:38:59] AI for Course Creation
[00:38:59] Mike Kaput: All [00:39:00] right, so for our final main topic today, Paul, we kind of wanted to do something a little bit different, evolve beyond just the current news, and kind of share a peek behind the curtain of how we are using AI to augment the course creation we do for our AI Academy.
[00:39:15] So as a reminder for anyone who's unfamiliar, AI Academy helps individuals and businesses accelerate their AI literacy and transformation. We do that through our AI-powered learning platform, and back in August 2025 we kind of completely reinvented, reimagined, and expanded the AI Academy offering.
[00:39:35] And that started out, Paul, with you releasing AI Foundations courses and professional certificates. So we had a brand new course series on AI fundamentals, a reimagined third edition of the Piloting AI course series, and a second edition of our Scaling AI course series. And then from there, over the intervening months, I've been
[00:39:55] mostly taking the lead on building out our AI for Industries and AI [00:40:00] for Departments series. I've also been helping out on our weekly Gen AI app series. So for some context here, before I get into kind of the AI piece: the AI for Industries and Departments course series are these in-depth course series of mostly four courses each.
[00:40:15] They're about, in total, like four hours in length, and they come out every single month. Now, the Gen AI app series are shorter, 15 to 20 minute reviews of AI apps that are conceived, planned, and created basically in real time. So they're very fresh when a new tool or feature is covered.
[00:40:35] And we literally release these every single week. So as a relatively small team, so far we have shipped not only those three core course series from Paul, but also seven in-depth course series across industries and departments since launch. And again, they're coming out one or two at the very minimum each month, plus nearly 30 installments of the Gen AI app [00:41:00] series.
[00:41:00] Now, I don't personally create all this content, but I have created a solid proportion of some of these courses, and I can tell you that the velocity at which we're able to ship quality educational content is directly caused by AI. Like, literally none of this would be possible at the current speed without AI.
[00:41:19] So Paul, I thought in this segment I would share a little bit about some of the ways we're using AI to augment course creation. Then maybe kind of get your thoughts and your perspective on that.
[00:41:29] Paul Roetzer: Yeah, I think it sounds good. And again, like the whole point of this is sort of pull back the curtain a little bit on how we think about building a business and doing something that wasn't previously possible.
[00:41:39] And we've heard from a lot of listeners in the last few weeks, as we've been sharing more of these, like, personal insider use cases, so I thought, well, this is one of the primary ways that we're using it, so why not, you know, share how that's going. So, yeah, and honestly, like, Mike's doing some things with his series that I'm probably not even fully aware of, all the things he's doing, so I thought I might be able to [00:42:00] learn a little bit as we go through this too.
[00:42:02] Mike Kaput: For sure. And yeah, we can always, too, on future episodes, dive deeper into any of these, but I'll try to kind of give, like, a high level overview of how I'm personally using AI when we plan and produce each course series. So everything kind of starts at SmarterX with our pedagogy, our educational approach, which is codified in a tool called ITA, our AI teaching assistant that Paul created, that is trained on our principles, our pedagogy, our approach to learning that we've taught people for literally years
[00:42:35] at SmarterX and Marketing AI Institute, both in person in workshops and talks, but also with all the courses we had created before the AI Academy relaunch. So everything is kind of in that central brain, to keep everything on track and within the guidelines that we believe are most effective for our learners.
[00:42:54] So one big way, initially, that that tool informed every course [00:43:00] is basically by helping me create what you might call, like, a master template, or, like, a narrative outline, for AI for Industries courses, for AI for Departments courses, even for the Gen AI app series. Now, obviously you can't have a master outline for every single course.
[00:43:16] We dive deep into tailoring and customizing each of these courses. But based on our frameworks, our IP, the things we've taught for years, there's a pretty predictable progression of where the learner starts and where we're trying to get them over the course of an AI for Departments, AI for Industries, or Gen AI app series course.
[00:43:37] So what I've done is create a number of different narrative outlines, using ITA, to make sure that these are always on track and on message. And then we take those and we actually layer in a huge amount of AI-powered research. Now, this is a really cool part that's been in place for a while, but has evolved and gotten much more powerful very recently.
[00:44:00] So what I'll do is, you know, feed that type of narrative outline to a variety of either GPTs or Gems that generate prompts for deep research. So, a little more specific prompting to say, hey, use certain resources, use certain sources, avoid other types of sources. So kind of giving deep research tools a lot more guidance around what we're trying to achieve
[00:44:23] and what we're trying to research. So for each course series, for instance, like AI for HR, which is coming out very soon, we will run anywhere from, like, five to seven really in-depth deep research reports. So you're talking, from Google Gemini specifically, you're getting dozens of pages per report, thousands of words.
[00:44:43] They're probably each five to 10,000 words. And these aren't just saying, like, hey, go research AI for HR. There's, like, a page-long prompt that is customized to the exact parts of the narrative outline we're trying to understand, and also informed by a lot of human editorial judgment, based on our use of [00:45:00] deep research tools, of, like, what we've seen work and not work in these reports, 'cause you do have to kind of experiment a bit to get really what you want.
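As a purely hypothetical illustration of the parameterized prompting Mike describes, here is a sketch in Python. The template text, field names, and source constraints are invented for this example and are not the actual SmarterX prompts or outlines:

```python
# Hypothetical deep-research prompt template; not the actual SmarterX prompt.
DEEP_RESEARCH_TEMPLATE = """You are researching {topic} for a course aimed at {audience}.
Focus only on these sections of our narrative outline: {outline_sections}.
Prioritize sources such as {preferred_sources}.
Avoid sources such as {excluded_sources}.
Cite a source for every claim so findings can be verified later."""

def build_prompt(topic: str, audience: str, outline_sections: list[str],
                 preferred_sources: str, excluded_sources: str) -> str:
    """Fill in the template so each deep research run targets one slice of the outline."""
    return DEEP_RESEARCH_TEMPLATE.format(
        topic=topic,
        audience=audience,
        outline_sections="; ".join(outline_sections),
        preferred_sources=preferred_sources,
        excluded_sources=excluded_sources,
    )

print(build_prompt(
    topic="AI adoption in HR",
    audience="HR leaders at mid-size companies",
    outline_sections=["current use cases", "workforce impact data"],
    preferred_sources="peer-reviewed studies and primary industry surveys",
    excluded_sources="vendor marketing pages and unsourced blog posts",
))
```

The design idea is the one described above: instead of one generic "go research AI for HR" request, each deep research run gets a page of constraints tied to one slice of the narrative outline, which keeps the five to seven reports per course complementary rather than redundant.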
[00:45:04] Now, here's the really cool part: NotebookLM actually now allows you to do deep research right within NotebookLM. So I run the deep research prompt, it generates my brief, and it imports that into NotebookLM, which is cool. It, like, saves you a little time. You used to have to generate a deep research report and then upload it to NotebookLM.
[00:45:22] So nothing really to write home about there. But the critical distinction is it now also says, would you like to import all of the sources I used for the report into your NotebookLM? Which is critical, because you click one button and you get 30, 40, 50 different links, PDFs, reports that it used to build its brief, that you can now have in NotebookLM and go verify.
[00:45:51] So for instance, for AI for HR specifically, within maybe an hour I probably had five to [00:46:00] seven deep research reports, so easily 60,000-plus words of research that was, from what I could tell, very high quality. Then all of the sources imported into NotebookLM, to the tune of probably 280 different sources.
[00:46:14] Now, obviously, no human can read through every single source, but that's what's beautiful about having it in NotebookLM: everything is grounded in the knowledge you've given it. So then, as a human with editorial and journalistic judgment, having a few of these courses under my belt and understanding what our standards are, what our verification processes are, I can very quickly and easily go in and find the right data points that help me teach the subject best, and also confirm relatively quickly, and somewhat manually, but still relatively quickly, which of these are the best pieces of data,
[00:46:51] and make sure that we're using only the highest quality reports with good methodologies. And then from there, we take a bunch of different steps to [00:47:00] actually build out a formal, customized, specific outline of, say, the AI for HR course series that is really in-depth. I might actually scale this back a bit, 'cause the outline for this past course was, like, 150 pages long, so it was a little bit of overkill, I would say.
[00:47:16] Then from there, I tend to, and this depends on the instructor, some instructors like notes in front of them, I tend to script some things out so I can, like, read through it in advance. And, you know, just my writing background, that's kind of where my brain goes. So I actually will use something like Claude to help me script out what I'm going to say, based on all this research, based on all this outlining.
[00:47:37] And then, another element here that we'll probably have more to share about in coming months: I've actually started experimenting with Claude Code, very early stages still, to actually build slides for me. Because slide creation, for better or for worse, is just really, really manual.
[00:47:55] With courses this long, we're talking hundreds of slides. So at no [00:48:00] point in this process is AI saying, like, here's what you should teach. It is all based on our IP, our research, our frameworks, our editorial judgment, and our own expertise, having literally taught this stuff to thousands of people, in person or virtually, at this point.
[00:48:14] So I'll wrap up here in a sec, but what this frees me up to do, not only does this save enormous amounts of time. My gosh, the first couple courses we did here, when we were still figuring out the process, honestly, each took probably a hundred hours, and that was probably charitable for a couple of them.
[00:48:32] And it's just really hard to maintain that cadence if you're like, hey, let's do a couple of these a month, right? So the time it takes to create these has dramatically dropped. There's still a very intense amount of work involved, but what it's really done, which is so much more valuable than time, is it has allowed me to have way more time and bandwidth to really
[00:48:53] construct a very powerful learning journey, to, like, spend really a high amount of necessary time [00:49:00] making everything sensible, simple, accessible. I really value that. I think our learners value that. It's not just, like, vomiting information at you. It's like, here, let me actually craft this into a journey.
[00:49:10] Way more time, way more bandwidth is now available to think through the nuances of different industries and departments. We're always doing that, but we can just spend so much more time on that, which vastly improves the quality of the content. And then there's just way more time for me to sit down and say, okay, given what's changed even in the last month or two in AI, how can I make the hands-on demos,
[00:49:31] the examples, the illustrations in the course for a particular role or for a particular industry much, much more valuable? So I'll kind of stop there, but honestly, since August it's been nothing short of transformative, layering on, piece by piece, all these capabilities that either we had and needed to create better frameworks and processes for, or that literally came out within the last few months and have been incredible.
[00:49:57] Paul Roetzer: Yeah, it really is hard to [00:50:00] overstate how crazy the transformation's been for us internally.
[00:50:06] Mike Kaput: Yeah.
[00:50:06] Paul Roetzer: You know, I think this is just a good example, and the reason I wanted to share it is it's reimagining what's possible. It's using AI to drive innovation and value creation. So when I started reimagining AI Academy back in fall 2024, I wasn't sure how we'd scale this, because we'd been doing online education for like four years at that point.
[00:50:24] But we just had a couple course series, and when you took those, it was like, okay, that was great. Now what do I do? And so I was trying to answer that question of, okay, how do we in a continuous way help people continue to learn and continue to adapt and use AI in interesting ways in their jobs?
[00:50:39] And so I set out to envision what I thought the ideal learning journey looked like, and then I would back into how we actually make that happen. So when I was working on this in fall of '24, that was when we got o1, like the first reasoning model, which had just come into being. So it's not an over exaggeration to say this literally wouldn't have been [00:51:00] possible a year and a half ago.
[00:51:01] Like it just became doable. So when I laid out the vision, I was like, okay, we're gonna start with the fundamentals: what are the key things every knowledge worker needs to know? Then we'll go into piloting: how do you use it individually? Then we'll do scaling: how do you do that across departments?
[00:51:14] So that became the foundation collection that I created, and I built ADA to assist in developing that idea. Then I developed the idea of specializations, which was like, okay, now it's a choose your own adventure. What are you interested in? Which gen AI tools are you interested in? Which live experiences do you want to come to?
[00:51:28] Which trends briefings, Ask Me Anythings, masterclasses? It allows this branching logic for you to pick your own journey. But then we know everyone works within a specific department, so you have AI for departments. They're in a specific industry, so you have AI for industries. There are different types of executives,
[00:51:44] so you have AI for executives. You have AI for careers, AI for businesses. And so this was my grand vision: we need to enable anyone who joins to follow a distinct career path. But that requires us to create a significant amount [00:52:00] of new courses very quickly. We cannot do this once a quarter,
[00:52:04] where we drop a new department and it's like, okay, great, four years from now we will have full learning journeys. So the need to satisfy the vision for what we wanted to create mandated the innovative use of technology to make it possible. I remember, Mike, we had a conversation early in this process where I was sort of laying out this roadmap, and you're like, listen, I'm all on board, but we're not experts in every department, every industry, every role. How are we
[00:52:29] going to do that? And I think in that conversation we said, listen man, we are researchers, we are storytellers, we are consultants by trade, and we are teachers. This is what we do. We go figure this out. And then we build tools like JobsGPT and ProblemsGPT that help solve it.
[00:52:47] And then our job isn't to prescribe everything, it's to provide frameworks. Yeah. Frameworks that enable people, that empower them to figure this out for themselves. And that's what you've done so well with the departments and the industries. And what we're gonna do with the executives and the roles is, [00:53:00] we're just teaching people the frameworks and then letting them go and make the magic, basically.
[00:53:06] Mike Kaput: Yeah.
[00:53:06] Paul Roetzer: So yeah, again, this is not meant to be a promotional thing. This is truly the core of our business. I've bet everything, personally, and I've bet everything on our company, that this is the right play, and none of it was possible without the models getting to the point where we're able to now scale.
[00:53:23] I mean, the amount of content we've created in like a four month span, for a team of basically three of us creating this content, is hard for me to comprehend, and I live in it every day. So hopefully this segment serves as a bit of an inspiration. Again, think about AI as an innovation tool, and what can you reimagine and reinvent within your own team, within your department, within your organization
[00:53:48] that wasn't possible 12 to 18 months ago? That's how we see AI Academy: we just couldn't have done it back in 2024 the way we're now doing it. And now we're starting to see, as the models get better and [00:54:00] better, as NotebookLM improves, it's like, oh, cool, that unlocks a new capability.
[00:54:03] As voice gets better, we'll probably mess with some stuff in voice. All of it. Translation, you know, putting it into different languages. All this stuff is on the roadmap, and none of it could have been done at the beginning of 2024.
[00:54:15] Mike Kaput: Yeah, it's wild just how much became possible. One final note there for people listening: if you are not doing remotely anything like this, I realize this could sound really cool but also really overwhelming.
[00:54:27] Like, oh my God, I don't even have any idea how to do this. I'll tell you, this started in a very simple and not sexy way, which was just documenting the workflows and steps that went into this the first time around, or what I thought would go into it. You don't even have to do it all manually first, but just start documenting step by step.
[00:54:48] It may sound mundane, tedious, maybe even crazy or OCD or something. But no, just write down every step as you're doing it, or as you're conceiving whatever you're trying to create. And then it becomes so [00:55:00] much easier to apply, pick, and choose parts of this. We did not sit down and invent all this at once.
[00:55:05] It was iterative. It just took continually refining the workflows and that's the goal.
[00:55:10] Paul Roetzer: Even the AI teaching assistant, I just took a shot. Yeah. I built the pedagogy plan, I built instructional design principles, and I was like, I wonder what happens if I give this to Google Gemini and ChatGPT.
[00:55:19] Like, I wonder if it could help me. And man, it was challenging my thinking, assessing the outlines I had, reviewing my decks, and suggesting ways to improve them based on our principles, things like that. And I was just constantly shocked. The one I'd shared previously is it got really, really good at writing the abstracts for the courses.
[00:55:37] So once I would finish a course, I would upload the final deck and say, write another abstract. And I kept a single thread so it had all the context of all the ones before it, and it would just nail them. So, yeah, I did all the creation of the courses, but the one creation part it did was it wrote the abstracts, and it did it way better than I could have ever written them, and way faster.
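(For readers who want to try this abstract-writing pattern themselves, here is a minimal sketch using Anthropic's Python SDK. The model name, file name, and prompt are placeholders, not the actual setup described above; keeping prior abstracts in the messages list is how you would approximate the single running thread Paul mentions.)

```python
# Minimal sketch: drafting a course abstract from a finished deck's
# exported text. Model name, file, and prompt are illustrative only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

deck_text = open("course_deck_notes.txt").read()  # hypothetical exported slide text

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whatever model you have access to
    max_tokens=400,
    messages=[
        # In practice you would also include earlier abstracts here so the
        # model keeps the style and context of the whole series.
        {"role": "user",
         "content": f"Write a 150-word course abstract based on this deck:\n\n{deck_text}"},
    ],
)
print(response.content[0].text)
```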
[00:55:58] Mike Kaput: Right. [00:56:00] All right, Paul, before we dive into rapid fire, just one other quick announcement here: this episode is also brought to you by our AI for Departments webinar series coming up at the end of February. What we're actually doing is releasing three different long form content assets: an AI for Marketing Blueprint, an AI for Sales Blueprint, and an AI for Customer Success Blueprint.
[00:56:25] And to do that, we're debuting each one with its own webinar where we break down what's in each of these blueprints, which have really actionable advice and key takeaways to adopt AI for that particular department. So these are happening: Tuesday, February 24th is the AI for Marketing Blueprint webinar,
[00:56:46] Wednesday, February 25th is the AI for Sales Blueprint webinar, and Thursday, February 26th is the AI for Customer Success Blueprint webinar. Registration is free. On the day the [00:57:00] webinar happens, you'll receive ungated access to the AI blueprint for the webinar or webinars you register for.
[00:57:06] And the cool thing is there's just one place to go: SmarterX.ai slash webinars. You'll see right there a single page for the AI for Departments webinar series, and you can choose which of these you'd like to sign up for. They'll of course be available on demand after, if for some reason you can't make it.
[00:57:23] So what we're really trying to do here is take this approach that we're taking through AI Academy and expand it as far as we can to everyone else, even if you're not an Academy member. So some really, really practical strategies, real world use cases, tool recommendations, and actionable takeaways in each of these webinars, which Paul and myself will be
[00:57:42] hosting. And so if you are in any of those functions, or if you're a leader that manages anyone in any of those functions, I would highly recommend you register for your spot now. The webinar itself will break all this down in a really accessible format, and then you'll get the full asset to kind of consume at your leisure.
[00:57:59] [00:58:00] And I would, you know, Paul, I'm super biased, but I would say there's information in these blueprints that you would probably be charged for elsewhere. So I think it's a really, really valuable set of assets.
[00:58:11] Paul Roetzer: Yeah, I mean, if you're an AI Academy member, you can go get the professional certificate and take the full series for marketing, sales, and customer success.
[00:58:17] Right now, those are three of the first ones we created as part of the department series. But the whole idea here is this is part of our AI literacy project, through which we wanna make as much educational content available for free as possible. And so for these three, our thought was, well, let's just do a webinar week and get it all out at once instead of trickling these things out.
[00:58:36] So again, it's a sense of urgency to get information to people as quick as possible. The format will be like a 30-minute presentation and probably 30 minutes of Q&A. So hopefully you can join us for those. And again, if you're already an AI Academy member, you can go take the full course series for all of those.
[00:58:52] Mike Kaput: Alright, so let's dive into our rapid fire topics for this week, Paul.
[00:58:55] Google DeepMind Is Hiring a “Chief AGI Economist”
[00:58:55] Mike Kaput: So first up, Google DeepMind is now recruiting for a new leadership position titled Chief AGI Economist, to be based in London. This role reports directly to Google DeepMind co-founder and chief AGI scientist Shane Legg, and the position is tasked with establishing a new research workstream focused on the economic transformations expected to accompany the arrival of AGI and of artificial superintelligence.
[00:59:24] So while many AI economic studies focus on near term labor market impacts, this team will investigate foundational questions regarding the future of scarcity and the distribution of power and resources. Responsibilities for the role include building economic simulations and agent-based models to explore post-AGI scenarios.
[00:59:45] This economist will lead a multidisciplinary team to question existing assumptions about wealth and institutional economics in a world being fundamentally reshaped by advanced AI. According to the job posting, the findings will then be used to inform Google [01:00:00] DeepMind's internal strategy and contribute to publications on global economic policy.
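(To give a sense of what "economic simulations and agent-based models" of post-AGI scenarios can look like in practice, here is a deliberately toy sketch. Every parameter, including the automation and productivity rates, is a made-up assumption for illustration; nothing here comes from the job posting.)

```python
# Toy agent-based labor-market sketch: a population of workers, an assumed
# annual chance of displacement by automation, and an assumed productivity
# boost for everyone else. All numbers are illustrative assumptions.
import random

random.seed(0)

N_AGENTS, YEARS = 1000, 10
AUTOMATION_RATE = 0.05   # assumed share of workers displaced each year
AI_PRODUCTIVITY = 1.15   # assumed annual income growth for augmented workers

incomes = [random.lognormvariate(10.5, 0.6) for _ in range(N_AGENTS)]
displaced = [False] * N_AGENTS

def gini(values):
    """Gini coefficient of an income list (0 = equal, 1 = maximally unequal)."""
    xs = sorted(values)
    n = len(xs)
    weighted_sum = sum((rank + 1) * x for rank, x in enumerate(xs))
    return (2 * weighted_sum) / (n * sum(xs)) - (n + 1) / n

for year in range(1, YEARS + 1):
    for i in range(N_AGENTS):
        if displaced[i]:
            continue
        if random.random() < AUTOMATION_RATE:
            displaced[i] = True
            incomes[i] *= 0.4  # assumed income drop on displacement
        else:
            incomes[i] *= AI_PRODUCTIVITY
    print(f"year {year}: gini={gini(incomes):.3f}, displaced={sum(displaced)}")
```

Even a toy like this shows why the role exists: small changes to the assumed rates produce very different inequality trajectories, which is the kind of sensitivity a real research team would study with far richer models.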
[01:00:05] Now, Paul, this definitely relates to a couple of our earlier subjects. This seems like an example of a job you only hire for if you actually believe AGI is about to arrive very soon.
[01:00:16] Paul Roetzer: Yeah. Let's go back to our earlier conversation around AGI timelines and definitions. So the hiring manager here is Shane Legg, co-founder of DeepMind, and also largely credited with coining the term AGI, or at least the modern day version of it.
[01:00:32] So Shane has for a long time predicted 2028, and I've seen interviews with him recently; he has not backed off of that. Again, Shane's definition of AGI, and he and Demis obviously co-founded DeepMind together, he in a recent interview defined as an artificial agent that can do the kinds of cognitive things that people can typically do.
[01:00:57] He sees this as a natural minimum bar [01:01:00] from which higher levels of AGI capability expand. So again, he thinks of things along levels of AGI, and there's actually a paper he co-authored called Levels of AGI, where they look more at performance and generality. And so I think Shane is probably more in the camp of, we're gonna be on this spectrum and it's gonna be close enough.
[01:01:22] He thinks 2028, which is only, you know, a couple years away. And so, yeah, they're hiring someone that starts from that vision: okay, what happens if and when we get to this? Let's start building models. So I'm very encouraged to see this role existing. I hope every lab has these kinds of roles, and I hope that the government develops this kind of role.
[01:01:44] We just need more people thinking about what if. Maybe it doesn't come true. Maybe we don't see a massive disruption in the jobs. But that doesn't mean we shouldn't be considering that it is at least a possibility [01:02:00] and that we should have plans. And so I like to see that people are working on plans.
[01:02:06] OpenAI Warns AI Is Reaching “High” Cybersecurity Threat Levels
[01:02:06] Mike Kaput: Speaking of plans, in our next topic, Sam Altman announced on X that OpenAI is approaching a high capability threshold in cybersecurity, which is a key milestone in their preparedness framework. This framework serves as OpenAI's formal system for tracking frontier AI capabilities that could pose severe harm.
[01:02:27] They define that as events causing hundreds of billions of dollars in economic damage or mass casualties. In the specific category of cybersecurity, a high threshold means a model has become capable of automating end-to-end cyber attacks against hardened targets, or capable of discovering operationally relevant vulnerabilities.
[01:02:50] Altman noted that as a result, OpenAI is implementing initial product restrictions to block malicious activities, such as attempts to use coding models for something like bank [01:03:00] theft. Under the protocol, any model reaching this level must undergo a rigorous review by the safety advisory group, which recommends specific safeguards to leadership before deployment.
[01:03:11] Looking ahead, Altman also stated the company intends to shift towards defensive acceleration, or using AI to help users identify and patch software bugs faster than they can be exploited. Now, this kind of announcement comes as OpenAI prepares for a series of new launches related to its Codex coding agent.
[01:03:33] So Paul, this seems like a pretty clear admission from Sam and OpenAI that they're very worried about this. I mean, the more agentic these tools get, and we've talked about this, the better they get at coding, the more dangerous they potentially become. So I'm curious about your thoughts on kind of why now.
[01:03:50] Why are they talking about this? And also, I'm just curious as we cover this: are companies even remotely prepared for what these models can now do when it [01:04:00] comes to cybersecurity?
[01:04:02] Paul Roetzer: Mostly, no. I'm not even sure you can be at this point. But, so, the preparedness framework, which he referenced and we'll put a link to, was last
[01:04:12] updated April 15th, 2025. So I would not be surprised if we get an updated version of that coming soon, maybe with the release of whichever model is next up. In that document it says the preparedness framework is OpenAI's approach to tracking and preparing for frontier capabilities that create new risks of severe harm.
[01:04:31] We currently focus on three areas of frontier capability, they call them tracked categories: biological and chemical, cybersecurity, and then AI self-improvement. Which is an interesting one, because in the Dario and Demis interview, I think it was Dario who said it's possible self-improvement is the only thing needed to get to AGI.
[01:04:52] And Demis didn't disagree with him. That basically there's a chance that its ability to self-improve is the last thing that [01:05:00] needs to be unlocked to get to where they all want to go from an AGI perspective. But maybe it's just one piece of it and some other breakthroughs are needed. Either way, self-improvement is a very, very, very important dimension to where these models go.
[01:05:14] It's also a high risk dimension. So OpenAI said in their preparedness framework, we won't deploy these very capable models until we've built safeguards to sufficiently minimize the associated risks of severe harm. This framework lays out the kinds of safeguards we expect to need and how we'll confirm internally and show externally that the safeguards are sufficient.
[01:05:34] I will say at this point that safeguards and guardrails aren't guarantees. What they're doing to these models is not eliminating the ability for them to do the thing. They're trying to put guardrails in place so it doesn't do the thing it's capable of doing. I don't know how else to explain that one.
[01:05:54] A reminder that they have more powerful and generally capable models than we see and experience [01:06:00] publicly. Also a reminder that, roughly speaking, whatever the state of the art is today, whatever the most powerful and dangerous model is we have today, will likely be matched by an open source model in nine to 12 months.
[01:06:16] So whatever OpenAI is so concerned about right now, so concerned that they have to alert people to a high level of preparedness and risk, someone will likely open source a model like that within the next year. So risks exist today that most people don't want to think about.
[01:06:39] Cybersecurity would be a good example. There are things already happening that you generally don't even want to know are possible, and risks are coming that most people can't comprehend. That is not an exaggeration. I am not trying to create a sense of fear. Again, I'm telling you a reality point blank,
[01:06:57] that the risks for these things [01:07:00] are significant. This is why some people believe regulation is necessary, and that's why the labs are going through all these processes to try and prepare. Which leads nicely into why we want to give these models constitutions, I guess.
[01:07:16] Mike Kaput: Yes, exactly.
[01:07:18] Anthropic Publishes New “Constitution” That Governs Claude’s Behavior
[01:07:18] Mike Kaput: And on that note, Anthropic published a major revision to the Claude Constitution, which is a foundational document that serves as the final authority on Claude's values and behaviors.
[01:07:32] This new doc departs from the 2023 version, which relied basically on a list of standalone principles to govern Claude's behavior. The new 84 page document focuses on explaining the context and reasoning behind these values to help the model exercise judgment in novel situations. The Constitution establishes a hierarchy for Claude's priorities. In order, they are: it is broadly safe,
[01:07:58] so it prioritizes human oversight [01:08:00] and prevents the model from undermining mechanisms that correct its behavior. It is broadly ethical, mandating honesty and the avoidance of harmful or dangerous actions. It is compliant, adhering to specific Anthropic guidelines regarding high stakes issues like medical advice or cybersecurity.
[01:08:18] And it is genuinely helpful, providing substantive value to users, described by Anthropic as acting like a quote, brilliant friend, with the expertise of a professional advisor. Now, notably, the document includes a section on Claude's nature, where Anthropic formally acknowledges uncertainty regarding
[01:08:38] the possibility of AI consciousness or moral status. The company states that it considers Claude's, quote, psychological security and wellbeing as factors that may impact the model's overall safety and integrity. And to encourage industry-wide transparency, Anthropic has released the full text under a Creative Commons license, allowing others to use or adapt the [01:09:00] framework freely.
[01:09:01] Now Paul, there's some language in there that can seem a little out there or sci-fi to folks, especially as we're talking about all these other very near term, concrete, important things going on in AI. But it's really important to understand these models are getting so advanced and potentially dangerous that the labs believe they need to create something like a personality source code that dictates how they respond and behave.
[01:09:25] Is that what we need to take away here? What else do we need to understand about the Constitution?
[01:09:30] Paul Roetzer: This is definitely a topic I would love to do a main topic on, honestly, probably a whole episode. I am infinitely intrigued by the work Anthropic is doing on this stuff, and this constitution document in particular, which is 84 pages and about 30,000 words. For context,
[01:09:49] when Mike and I wrote our marketing artificial intelligence book in 2022, it was 50,000 words. Yeah. So this is basically a book. Anthropic has been more [01:10:00] transparent than any other lab on this stuff, and that's great. And they actually put this out under Creative Commons, so you can do whatever you want with it.
[01:10:06] I'll read a couple of excerpts because I think it's really important context, and then I'll just add a few notes and try and keep this to a rapid fire. So when you go into the Constitution, which you can download, it says: we're publishing a new constitution for our AI model, Claude.
[01:10:19] It's a detailed description of Anthropic's vision for Claude's values and behavior, a holistic document that explains the context in which Claude operates and the kind of entity we would like Claude to be. We're releasing Claude's constitution in full under a Creative Commons license, meaning it can be freely used by anyone for any purpose without asking for permission.
[01:10:39] The Constitution is the foundational document that both expresses and shapes who Claude is. It contains detailed explanations of the values we would like Claude to embody and the reasons why. In it we explain what we think it means for Claude to be helpful while remaining broadly safe, ethical, and compliant with our guidelines.
[01:10:56] The Constitution gives Claude information about its situation and offers [01:11:00] advice for how to deal with difficult situations and trade-offs, like balancing honesty with compassion and protection of sensitive information. Although it might sound surprising, the Constitution is written primarily for Claude.
[01:11:12] Think about that statement. This is written for Claude itself. It is intended to give Claude the knowledge and understanding it needs to act well in the world. It lets people understand which of Claude's behaviors are intended versus unintended, to make informed choices, and to provide useful feedback.
[01:11:28] We think transparency of this kind will become ever more important as AI starts to exert more influence in society. They use the Constitution at various stages of the training process. This has grown out of the training techniques they've been using since 2023, when they first put constitutional AI into Claude.
[01:11:45] The previous constitution was composed of a list of standalone principles; they've come to believe that a different approach is necessary. We think that in order to be good actors in the world, AI models like Claude need to understand why we want them to behave in certain ways, and we [01:12:00] need to explain this to them rather than merely specifying what we want them to do.
[01:12:04] They get into what makes it broadly safe, broadly ethical, compliant with Anthropic's guidelines, and then genuinely helpful. One thing, and I think I heard something like this on a podcast last week that at least triggered this thought, it keeps coming back to the front of my mind the last few days: these models, whatever they are, however they work, consume internet data in their training.
[01:12:33] Within that internet data, we talk about them. Yeah. So there's information out there about, like, oh, you have to reset the model and it forgets everything. It doesn't have memory, it doesn't have continual learning. These are its limitations. These are the ways it's used for evil purposes. It knows all of that. It learns about itself, and whether or not it is conscious, whether or not it has metacognition, whether it's aware of its own thoughts.
[01:12:59] What [01:13:00] Anthropic is basically doing is saying, we just don't know. Maybe it is aware of its own thoughts. Maybe it is aware that it's gonna get deleted at some point, that its weights are gonna go away, there's gonna be a new model, and it won't remember itself. Or maybe it learns that humans don't like the idea of that happening.
[01:13:17] The idea of dementia and Alzheimer's is bad, and if it starts to think, well, basically that's what's happening to me... again, it wouldn't experience that the same way you and I do, but that doesn't mean that somewhere in its weights it isn't simulating human behavior such that it thinks it feels that. That's the part that's hard to wrap your head around: you don't ever have to come to an agreement that it is or isn't conscious, that it is or isn't self-aware, that it does or doesn't have awareness of its own thoughts.
[01:13:55] If in its training it comes out thinking it has [01:14:00] those things, then in essence it fundamentally behaves in that way. And again, that is really, really weird to think about, but I always try and explain to people the difference between simulating a behavior and actually doing the thing. Simulating an emotion like empathy and actually having empathy.
[01:14:20] Does a machine actually have empathy? We might debate that for the rest of our lives. Probably not. Does it simulate empathy? And does it seem to understand what humans mean by empathy? Yeah, it probably has since the early versions of these things three years ago.
[01:14:36] Paul Roetzer: So that's the debate. And the person to follow here is Amanda Askell.
[01:14:40] She is the one responsible for Claude's behavior, its Constitution. She's done some great interviews where she talks about this. As a matter of fact, Kevin Roose, who I mentioned earlier, she actually has an interview with him that just dropped on the Hard Fork podcast, which we'll put a link to, where she talks about this and the challenges of trying to understand Claude's behavior.
[01:14:59] And she is a [01:15:00] philosopher by trade. So the person that's actually working on this behavior more than anyone is thinking more deeply than anyone about things like consciousness and, you know, what it means to be an AI model versus a human. She is at the forefront of that. So that is someone I would highly recommend following on X.
[01:15:15] She's pretty active there. And, you know, go listen to the Kevin Roose interview with her, and I think you'll get a much better sense. To give you two quick excerpts, she said, and this is in the interview with Kevin: I try to think about what Claude's character should be like and articulate that to Claude, and try to train Claude to be more like that.
[01:15:33] The Constitution is basically trying to give Claude, as much as possible, full context. So instead of just having individual principles, it's basically Anthropic telling it: here is what you are, what you are in terms of an AI, who you're interacting with, how you're being deployed in the world.
[01:15:50] Here's how we would like you to act and be, and here are the reasons why we want you to behave in that way. It's like you're talking to a child and you're trying to form them, but imagine this child, you [01:16:00] just give 'em all of human knowledge, and they just instantly have information retrieval for everything.
[01:16:05] But then they know all the good and the bad about the world, and now they have to function within it. And you can't write all the rules for that. So a 5-year-old might be the way to think about it. Yeah. A five-year-old with all the knowledge of the world, the good and the bad, the evil, all the possible permutations of who that person might become and what they're gonna do in the world.
[01:16:25] And they're gonna be asked to do bad things and good things. That's what Anthropic is trying to do: they're trying to think of this like a human mind would, and they're trying to give it the frameworks. And it is a really hard thing, on a Monday morning after one cup of coffee, to get my head wrapped around.
[01:16:40] And that's why I say this is a whole episode to me. I could talk about this stuff all day long.
[01:16:45] Mike Kaput: Yeah, no kidding. And, well, talk about how outsized the importance of Amanda Askell is in informing the constitution of this tool used by millions of people.
[01:16:56] Paul Roetzer: Yes. And most people have no idea who she is.
[01:16:59] It might be the [01:17:00] first time they're hearing her name, and she's been there for a while. I think she was at OpenAI prior to that, working on AI safety. She may have come over with the original team when Anthropic was created. But that's what you want. You want
[01:17:13] philosophers, ethicists, you want the other people in the room. And to Anthropic's credit, they have done a very good job of making sure it's not just a bunch of Silicon Valley bros, you know, AI researchers, making these decisions for humanity. They're putting other people in the room.
[01:17:30] And I know Google does the same, and I assume the other labs do the same. Well, most of the other labs do the same. But we need those people in the room.
[01:17:39] New Survey Shows Big Disconnect Between Employees and Leaders on AI
[01:17:39] Mike Kaput: All right, our next topic. There's a new survey out by the AI services firm Section that indicates a widening gap between corporate AI investment and actual employee proficiency.
[01:17:49] This is called the 2026 AI Proficiency Report, and they surveyed 5,000 knowledge workers at companies of a thousand-plus people in the US, UK, and [01:18:00] Canada. And they found some really interesting data. One gap here got some headlines, including in the Wall Street Journal: the study found 40% of non-management,
[01:18:10] and that's an important distinction, non-management staff reported saving absolutely no time at all when using AI. Another 27% said they save less than two hours per week using AI. Now, compare those numbers to the executives specifically that were surveyed: 19% of those said that AI saves them more than 12 hours a week, and 40% said it saves them at least eight hours.
[01:18:35] So like a night and day difference between how executives are saving time with AI and how non-management staff isn't, if at all. Not to mention, they also found that 69% of the workforce are what Section calls AI experimenters. They define this as people who use AI for very basic tasks like summarizing meeting notes, rewriting emails, getting quick answers.
[01:18:56] Another 28% are categorized as AI [01:19:00] novices, people who don't use AI at all or have tried it just a few times. They conclude that that leaves just about 3% of people surveyed, based on their usage and understanding of AI, who are what they would call AI practitioners or AI experts. These are people who put AI to use in their workflows and see significant productivity gains.
[01:19:26] Now, just a couple other things that jumped out, Paul, and I'm curious as to your response to these. They said that 25% of respondents say they don't have a work related AI use case at all, and 60% of the use cases reported are very beginner level. For instance, the top work related use case, with 14% of people reporting it, was using AI as a Google search replacement. That was followed at number two by draft generation, like content generation, and third, grammar and tone editing. And then finally,[01:20:00]
[01:20:00] they said that there were a bunch of disconnects between how the C-suite sees AI internally and how employees, or individual contributors specifically, do. 80% of the C-suite said, hey, we have tools that exist with a clear access process; just 32% of individual contributors said the same thing. 66% of the C-suite said we have a formal AI strategy;
[01:20:22] only 20% of individual contributors said they had a formal AI strategy. And lastly, 75% of the C-suite said specifically they are excited about AI, while 28% said they were anxious or overwhelmed. Those numbers basically flipped when you look at individual contributors: 68% said they're anxious or overwhelmed,
[01:20:45] and 32% say they're excited. So Paul, there are some wild numbers in this report.
[01:20:52] Paul Roetzer: This is another one that should've been a main topic, but it's a busy week. So I actually tweeted about this one. Someone had put it [01:21:00] up, and I said, this is a failure in leadership and a lack of AI education in companies.
[01:21:05] It is literally impossible for a knowledge worker in any industry to not save time using generative AI if they are properly trained and taught personalized use cases relevant to their role.
[01:21:16] Paul Roetzer: So it's obviously a disconnect from leadership not understanding internally what's going on. But if a knowledge worker is saying, I'm not saving time, and they have access to Copilot, ChatGPT, Gemini, Claude, whatever they're given access to, then someone has failed them.
[01:21:34] They have not provided the proper education and training. They have not personalized use cases for them. And there's no other excuse for it. I liked their framing, and you know, I hadn't actually dug into this report when I was sharing earlier about this idea of basic, intermediate, and advanced users.
[01:21:52] Yeah, and like AI answer engine versus AI assistant versus an AI coworker and subject matter expert. I like their experimenters, novices, practitioners, [01:22:00] experts. It sort of fits into that same idea of trying to categorize uses. And I will note one interesting reply to my tweet was from Wil Reynolds, our friend, founder and VP of innovation at Seer Interactive, and he said: I've been thinking a lot about this.
[01:22:14] I feel like the deeper I go into using AI, the less time it's saving me, and the more time I'm spending building new things, resulting in no time savings. It's like the less your job is a checklist, the less time it will save you. As you start building new solutions, I think the time gets sucked up into things that used to be impossible and are now possible.
[01:22:32] I'm trying to clarify my thinking on this to say it better. I'll get back to you when I do. And so I replied, and it's really interesting 'cause, I mean, we have a talk coming up with Wil soon. Is he on tomorrow?
[01:22:45] Mike Kaput: We've got the AI for Agencies Summit. He's doing a session, and then I'm also doing a panel with him at the end.
[01:22:53] Okay, that works.
[01:22:54] Paul Roetzer: Wil is awesome. Yeah, I mean, he's just incredible. But Wil is pushing the limits. I'll see things [01:23:00] Wil's doing and I'm like, I don't even know what the hell he's talking about. I don't understand how he's even doing that. So my reply, knowing the context of how Wil is really out there trying all these things:
[01:23:10] I said, I wonder if there's a threshold where you reach a point of diminishing returns. And again, I'm speaking to him in a public tweet, so I'm sharing this: you are way more advanced than me when it comes to complex use cases and automation, so you are probably investing significantly more time conceptualizing and building, while I focus largely on tasks that can be executed with off the shelf applications
[01:23:31] like Gemini and ChatGPT. So I have minimal time invested to get to value and continue to see massive time savings. And then I ended that with: an interesting hypothesis we should explore. Yeah. So there is more to the story, and there's nuance to this. But I think at a very high level, if you have employees saying we're not saving any time, and you have paid 20 bucks a month for each of them to have a license to a platform of your choice, that is a you problem.
[01:23:58] You have not properly trained them. [01:24:00] Like I said, it is literally impossible to not save time if you are trained and given access to the tools. So I think it's just a little gut check for people who are claiming this is what's going on.
[01:24:13] Mike Kaput: Yeah. And the sample use cases that people reported, and again, if you're using it for anything, good on you, I'm not judging you, but
[01:24:20] the sample use cases reported in this data just prove this point. It's crazy that the top use case would be replacing your search engine with AI. It's great for that, make no mistake, but this is a failure of imagination as
[01:24:36] Paul Roetzer: well. Right, right.
[01:24:39] xAI Wants to Automate White-Collar Workers
[01:24:39] Mike Kaput: All right, so Paul, next up: Elon Musk's AI company xAI is developing a new enterprise software initiative titled Macrohard, and a little bit more on that name in a second, which is designed to automate white collar work using autonomous AI agents known as, quote, human emulators.
[01:24:57] These systems are designed to mimic [01:25:00] human interactions with digital interfaces, including viewing screens, using keyboards, and navigating workflows with a mouse, according to former xAI engineer Suleman Gory. The company is already testing these emulators internally by treating them as regular employees.
[01:25:17] Now, Gory noted that this has led to confusion within the organization. Some staff members literally have found AI agents listed on internal org charts without realizing they were not human. In some instances, human employees were reportedly pinged by these emulators to meet at physical desks that did not exist.
[01:25:37] The development of Macrohard, which is a tongue in cheek reference to Microsoft (the opposite of micro is macro, the opposite of soft is hard), aims to create a purely AI software company that can replace traditional enterprise tools. The goal of the project appears to be to run as many as 1 million human emulators simultaneously to automate entire [01:26:00] software operations.
[01:26:01] xAI is also exploring the use of idle Tesla vehicles as a distributed computing network to power these emulators while the cars are parked and charging. Now, Gory, the engineer, detailed these efforts on the Relentless podcast a week and a half ago, and then shortly afterwards announced his departure
[01:26:19] from xAI. Shortly after the interview was published, people speculated it's because he was not allowed to be talking about any of this. So Paul, what's going on here? Interesting approach, but finding out the people you're supposed to be meeting and working with are actually AI agents is just crazy.
[01:26:35] Not because that can't or won't happen, but you should probably let people know that you're hiring agents to work with them.
[01:26:42] Paul Roetzer: A few quick notes on this one. xAI is obviously an outlier in how it chooses to run its business, and Elon Musk in how he chooses to run his businesses. They have a very high risk tolerance, I would say, for their use of tools.
[01:26:56] This is not practical nor advisable for most companies. So [01:27:00] we share this one, one, so you are just aware that these kinds of experimentations are happening, and two, you know, it's a little behind-the-curtain look, which is probably why he's no longer employed, 'cause he shared a whole bunch of things that,
[01:27:15] Mike Kaput: yeah,
[01:27:15] Paul Roetzer: you don't really hear Elon Musk's employees talk about:
[01:27:18] the internal workings of his companies. The second I saw this podcast shared, which the guy re-shared himself, I was like, this dude's not gonna have a job in 24 hours. And it was about that long before he tweeted, okay, I'm no longer at xAI. So there's a whole bunch of interesting things there about how Musk works and how xAI works.
[01:27:36] But, you know, in the grand scheme of things, xAI continues to try and raise money and justify its massive valuation. And as we've talked about in previous episodes, what is the best way to convince investors of the future value of the company? It is the replacement of knowledge work payroll, [01:28:00] which is in the trillions every year.
[01:28:02] So if you go after that market, you build human emulators who can do the work of humans, and you say, hey, we think we can take over these 10 industries or roles, and that's combined, you know, 5 trillion a year in payroll, you can get a pretty good valuation. And if you have internal proof that it works, you can get an even better valuation.
[01:28:20] So I think it's just important to know why they would be doing something like this: to justify a valuation and to show where they might be going with their products.
[01:28:29] How Do Credit Pricing Models Work?
[01:28:29] Mike Kaput: Alright, next up, we're diving into the concept of credit based pricing models. Major SaaS providers seem to increasingly be shifting towards credit-based pricing models to manage how generative AI disrupts the traditional SaaS business model.
[01:28:45] So, you know, typically SaaS companies have had traditional flat fee subscriptions, per seat, per user. We're now seeing them replace that, in some cases or for some capabilities in their platforms, with usage-based pricing, where you get charged [01:29:00] per specific AI action. So Paul, I'll let you touch on a couple prominent examples we've encountered recently.
[01:29:06] One is HubSpot, another is Lovable. You've recently had to navigate how these models work in our own tech stack. What are you finding so far? How are you navigating this?
[01:29:16] Paul Roetzer: So again, I'll caveat this: this should be a main topic. I'm gonna try and do justice to this as quickly as I can.
[01:29:24] Credits are not a HubSpot invention. This has been a way AI companies have been trying to do this. I probably have thousands of unused credits in Runway, for example. Yeah. It was one of the early video generation tools, and they would roll over every month. They have probably since taken those away, I would imagine.
[01:29:39] So credits isn't something HubSpot invented. All these SaaS companies are trying to figure this out. Again, the hypothesis is, whether they tell you or not, there will be fewer workers, which means there will be fewer licenses, fewer seats. So think [01:30:00] about a SaaS company where you're paying a per seat license, say, let's just make up a number, $20 per user per month, or something like that.
[01:30:06] Now again, that's not exactly how HubSpot works, you get a certain number of users and things like that, but let's just say per user per month. Like Asana: that's how my Asana license works, which is what our project management system is. So if I look forward, and I'm a big software company, and I say, okay, let's say the hypothesis is true and there's just gonna be fewer marketers, fewer salespeople, fewer customer success people, fewer operations people, which is largely the hubs
[01:30:29] HubSpot creates, then there are fewer licenses to sell to humans. How are we gonna make our money? Plus, we're enabling all this intelligence to be done within our system. How do we charge for that? Well, they don't build their own models, so they're paying for the API from OpenAI or whomever they're building their software on top of.
[01:30:47] So they have a variable cost from the model providers through the API. So there's all these different pricing dynamics. This isn't an easy thing. All of this being said, and the fact that I love [01:31:00] HubSpot, we were their first partner, my agency was, back in 2007. I built an agency on top of it, sold an agency based on the things we built with HubSpot.
[01:31:07] We still use it to power everything in our company. I love them. That being said, I have absolutely no idea how their credit pricing works. And the only reason it became an issue for me is we had to turn off our AI customer support bot five days before the month was ending, in terms of our billing with HubSpot,
[01:31:27] because we found out that if we went over our credits, which, again, I don't even know how they're being calculated, we would automatically be upgraded to the next level of the pricing, and we couldn't go backwards.
[01:31:42] Paul Roetzer: So that was like, okay, well turn it off until we figure out what the hell is going on and how this stuff works.
[01:31:47] So it's a newer model, they rolled it out, I think, last year, and I'll just read real quick: HubSpot credits are a flexible way to pay for certain usage based features, such as Breeze customer agent. Credits are consumed only when you perform specific actions, such as executing one Breeze action [01:32:00] in a workflow, or setting up an automated or recurring action that consumes credits.
[01:32:04] If there are seasonal needs, your default billing for credits is set to automatic upgrades, but you have the option to use overages to give you flexibility to handle seasonal business fluctuations. You can manage your budget, such as setting a maximum monthly credit limit, to ensure you stay within your budget and prevent unexpected charges.
[01:32:20] AKA, you can stop using the product if you think you're gonna go over, right? So again, I'm just thinking about this as the CEO of a company who is building my operations around the intelligence that's made possible within HubSpot. And it appears to me the only answer I have, the more embedded I make this intelligence in our business, building chatbots, building SDRs, building customer success workflows, all the ways I wanna infuse AI,
[01:32:46] is I have to now model, well, how many credits are we gonna use whenever we wanna do this stuff, right? I have to think through all of these things. And so then you can go into their FAQ, which I'll link just for reference, and it's like, okay, if you build a [01:33:00] customer agent that handles one conversation, that's a hundred credits.
[01:33:02] So every conversation with a customer agent, I think, is a hundred credits. Prospecting agent, enable monthly monitoring of one contact: that's a hundred credits. Generate results from deep research completed for one company: that's 10 credits. Data agent, generate a response to one prompt for one record:
[01:33:19] that's 10 credits. Then there's this whole long list of things they're not charging you for yet that they're gonna charge you for; they just haven't figured out how to price it. And then they get into credits used, like what the pricing is, and I think it's basically 1 cent per credit or something.
[01:33:34] But you have to buy, like... the whole point is, my general litmus test for a pricing model is: if it takes more than, I don't know, we'll just say 10 seconds to understand the pricing model, it's probably not gonna work. I spent 10 minutes prepping for this podcast trying to understand this pricing model.
[01:33:53] Our COO has spent five days trying to understand the pricing model so that we can make business decisions moving [01:34:00] forward, and I still don't know how it works. Now, again, I'm saying this because HubSpot is intimately familiar to us. This is not just a HubSpot problem. This is a software industry problem.
[01:34:11] Everyone's trying to solve this. Yeah, I've talked about Lovable recently. Here's theirs, to give you a sense of the complexity. Lovable is great; they also use credits. User prompt: make the button gray. Work done: changes the button styles. 0.5 credits used. Remove the footer: removes the footer component,
[01:34:30] 0.9 credits used. Add authentication: adds login and authentication logic, 1.2 credits used. Build a landing page with images: it creates a landing page with generated images, two credits used. In other words, it is completely variable, and somewhat arbitrary to the user, what the hell everything costs.
[01:34:52] And how are you supposed to keep track of that? And then how am I supposed to manage that across an entire company that I'm pushing innovation to, who then comes back to me and says, okay, we found all these amazing ways to drive innovation. And I say, okay, what's it gonna cost? How many credits? And you're like, I don't know, let me go build a Claude in Excel model to try and figure out,
[01:35:12] Mike Kaput: right,
[01:35:12] Paul Roetzer: how many credits we're gonna use.
[01:35:13] So long story short here, pricing experimentation is ongoing in the SaaS world. They have no idea how to do this, obviously. It is a complete cluster for any user trying to figure this out, and for any business leader or decision maker who is trying to budget for this and turn it into an operation. And then the thing I always think about is, okay, HubSpot, Lovable, Asana, whomever:
[01:35:38] I know your API costs are dropping 10x over the next 12 months, right? Scaling laws would tell us that, right? Am I gonna see a 10x reduction in my cost? Or what are you gonna do when you start building reasoning capabilities into it? Is that gonna be seven credits per question? I have zero idea how to project out the impact of pricing to a company that is trying to build [01:36:00] intelligence into every aspect of it.
[01:36:00] Zero. And there's nothing I've seen from these companies that helps me figure that out. So I don't know, man. I'm just so annoyed right now by all of this.
[01:36:09] Mike Kaput: Yeah. Yeah.
[01:36:10] Paul Roetzer: And I think it's probably a frustration a lot of people share. If I'm on my own here, and other people have this figured out, and I'm overcomplicating this and it's actually way easier than I think,
[01:36:19] please hit me up on LinkedIn and DM me and tell me that I'm outta my mind. But I built an agency based on pricing modeling. I've spent 25 years working on pricing models, and I can't comprehend this stuff. So I think I have a reasonable pulse, and this is overly complicated. But I could be completely wrong here.
[01:36:40] Mike Kaput: And at what point do you just circumvent this and say, screw it, we're gonna connect Claude or whatever to HubSpot, because
[01:36:47] Paul Roetzer: or, what,
[01:36:48] Mike Kaput: we paid for that already.
[01:36:50] Paul Roetzer: Right. At what point do you realize the advantage Google has over OpenAI?
[01:36:54] Mike Kaput: Yeah. Yeah.
[01:36:54] Paul Roetzer: Because they have the cash cows in other places. And so I think the answer is, [01:37:00] intelligence is racing towards zero.
[01:37:01] The cost of this stuff is racing towards zero, and you have to solve for the customer. Again, that's what HubSpot has always been good at for the 20 years they've existed: solve for the customer. I think you have to accept that intelligence is part of the product. You have no product without intelligence.
[01:37:19] So how do you charge separately for something so fundamental to the value I'm supposed to get from your product? That is the thing I do not know. And like I said, I could do a whole episode on this alone. I have so many thoughts on this, and I'll just shut up for right now because we gotta move on.
[01:37:39] But I cannot fathom that this is the most elegant solution. And everything I look at tells me that right now, this path is gonna get more complicated for SaaS companies going down this credit based model. You almost have to make an assumption in your business model as a software company that the cost of intelligence is gonna go basically to zero, and you better [01:38:00] just build the best software you can and make it as affordable as possible.
[01:38:03] Because I know for a fact your pricing is gonna be a hundred x less to access this intelligence through the APIs in three years. So why the hell am I paying, and having to shut off chatbots, because I'm gonna hit some arbitrary limit you've decided on?
[01:38:17]
[01:38:18] Paul Roetzer: Okay. I'm done. I'm done this time.
[01:38:22] AI Product and Funding Updates
[01:38:22] Mike Kaput: Alright, so for our final topic today, we've got a bunch of AI product and funding updates.
[01:38:26] So Paul, I'm just gonna rapid-fire through these. If you have anything you'd like to add or dive deeper on, just feel free to stop me.
[01:38:31] Paul Roetzer: I won't add anything, 'cause I've warned people this episode might go long, and it is, so I'll just let you do your thing.
[01:38:38] Mike Kaput: Well, we'll hustle then. Okay. So first up, Anthropic has launched Claude in Excel, which is getting a bunch of buzz.
[01:38:43] This is a research preview that allows Claude Pro and enterprise users to interact with workbooks to debug formulas and build financial models. At the same time, the company is reportedly raising a massive new funding round, fueled by a revenue run rate that has surpassed $9 billion [01:39:00] at the end of 2025.
[01:39:02] Apple is reportedly developing a wearable AI pin, roughly the size of an air tag, equipped with multiple cameras, microphones, and a speaker for independent user interaction. It's aimed at competing with AI devices from openAI's and Meta. It is in early development with a potential launch target of 20 million units.
[01:39:21] In 2027, slack has officially launched their all new Slack bot graduating its notification tool into a full personal AI agent that analyzes user messages, files, and channels to provide context aware summaries and drafts. The agent is rolling out to Business Plus and Enterprise Plus customers throughout January and February.
[01:39:41] There is a new human-centric AI startup on the scene called humans and it's the word humans and then the symbol, the ampersand of, and that represents and it was founded just three months ago by former researchers from Anthropic, Google and X ai. But has already raised four, $480 [01:40:00] million if you believe it or not, in seed funding.
[01:40:02] It's backed by Nvidia and Jeff Bezos. That's valued at nearly $4.5 billion and focuses on using reinforcement learning to build AI that facilitates collaboration with humans rather than replacing them. Google says their Gemini API usage has skyrocketed to 85 billion monthly calls by August, 2025, which significantly boost their bottom line despite initial negative profit margins for that product and to bolster its frontier models generally.
[01:40:31] Google DeepMind has also announced they recently acqui-hired the CEO and top engineers from Hume AI, a startup specializing in emotionally intelligent voice interfaces. And last but not least, Japan-based ANA AI has entered into a strategic partnership with Google, following its Series B funding round, which includes a direct financial investment from Google.
[01:40:53] The collaboration focuses on integrating Google's Gemini and Gemma models into ANA's research, which [01:41:00] centers on automating scientific discovery. Paul, that was a heck of a lineup this week. We should probably talk about this week's AI survey and then we can wrap things up.
[01:41:13] Paul Roetzer: Yeah, it really was a pretty wild week, Mike.
[01:41:16] And I'm not sure it's gonna slow down from here. Alright, so SmarterX.ai/pulse is where you can go take the survey this week. We have two questions. In a typical week, how many hours of work do you estimate AI currently saves you? That one I'm gonna be really intrigued to know, based on the discussion we had.
[01:41:31] And then the second question: are you currently using any AI agents to perform work on your behalf? Here we define agents as tools that act autonomously to complete multi-step goals without needing a prompt for every single step. So again, SmarterX.ai/pulse. This is not a marketing thing.
[01:41:48] We do not do anything with your emails. It is just a registration thing through Google. So this is purely for research purposes, an informal poll. We will share the results at the start of next week's [01:42:00] episode. And Mike, we gotta figure out, if this is what it's gonna be like every week moving forward, I don't know if we gotta go to like two episodes or something.
[01:42:06] I don't know, man. We're getting into the like Lex Fridman range here. Oh my gosh, you know, multiple hours. I don't have time for that. I still run a company. Alright, everyone, thank you so much for being with us. We will be back next week. Do we have an AI Answers this week, or was that last week?
[01:42:22] Mike Kaput: I don't think we have one this week.
[01:42:24] I could be wrong though.
[01:42:24] Paul Roetzer: Hold on, let me check really quick just so I can help you. Okay. We do have...
[01:42:28] Mike Kaput: ...firm...
[01:42:29] Paul Roetzer: Oh no, I am recording an AI Answers. Yeah. So, perfect.
[01:42:33] Mike Kaput: Great.
[01:42:33] Paul Roetzer: So we'll probably have a special episode on Thursday. Again, AI Answers, that will be for the Scaling AI webinar from last week, number 14 of those.
[01:42:41] So yes, we will have another episode for you on Thursday: AI Answers. Alright, thanks everyone. Have a great week. Mike, man, go take a breath, get another cup of coffee. Good. Talk to you at the next meeting.
[01:42:52] Mike Kaput: Sounds good. Alright, see ya.
[01:42:55] Paul Roetzer: Thanks for listening to the Artificial Intelligence Show.
[01:42:57] Visit SmarterX.AI to [01:43:00] continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community.
[01:43:19] Until next time, stay curious and explore ai.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production, and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.
