Our latest AI Pulse survey, taken by listeners of The Artificial Intelligence Show, highlights a critical tension in the AI adoption curve: the gap between professional utility and personal privacy.
The survey, which polled 85 professionals, found that the vast majority of respondents are hesitant to embrace OpenAI’s new ChatGPT Health interface. Despite the promise of personalized health insights, fewer than one in five respondents expressed a clear willingness to connect their medical records, signaling that trust, not capability, is now the primary barrier to adoption in high-stakes domains.
When asked directly how likely they were to connect their medical records to the new ChatGPT Health interface from OpenAI, the audience’s reaction was overwhelmingly cautious.
Only 18.8% of respondents said they were "Very likely" to do so.
The remaining respondents were split between skeptical interest and outright refusal.
Taken together, 76.4% of this generally pro-AI audience express significant reservations about sharing sensitive health data with an AI model.
This hesitation aligns well with how this audience now defines its overall approach to AI adoption. We asked respondents to categorize their mindset when adopting features that require personal data.
The largest group, 44.7%, identified as "Pragmatists," defining their stance as: "I wait to see if the utility outweighs the privacy/security risks."
The rest of the field was divided among the other personas.
The dominance of the "Pragmatist" persona explains the ChatGPT Health findings: users are waiting for the value proposition to clearly justify the privacy risk.
As part of this past week’s survey, we also looked at adoption rates for Claude Code, Anthropic’s agentic coding tool, which is used primarily for professional output. We found that:
With 45.9% of respondents actively using the tool, adoption for professional tasks runs at more than double the rate of unconditional readiness for health data integration (18.8%).
In our ongoing AI Pulse surveys, we gather insights from listeners of our podcast to get a sense of how our audience feels about various topics in artificial intelligence. Each survey is conducted over a one-week period, coinciding with the first seven days after an episode is released. During that time, our episodes typically receive around 11,000 downloads.
Our survey results reflect a self-selected sample of listeners who choose to participate, and typically we receive a few hundred responses. While this is not a formal or randomized survey, it offers a meaningful snapshot of how our engaged audience perceives AI-related issues.
In summary, when you see percentages in our headlines, they represent the views of those listeners who chose to share their opinions with us. This approach helps us understand the pulse of our community, even if it doesn’t represent a statistically randomized sample of the broader population.