Digital health tools have never been more capable or more widely deployed. Apps, patient portals, wearables, telehealth platforms, and AI-powered interfaces are reaching people in remote areas, extending healthcare resources beyond the clinical setting, and supporting prevention and management of noncommunicable diseases at scale. The promise is real: personalized support, tailored interventions, and healthcare that doesn’t stop at the clinic door.
But capability and communication are not the same thing. A tool can function flawlessly and still leave its users behind.
The Risk Hiding in Plain Sight
The most common communication failure in digital health tools isn’t a bug. It’s a design assumption: that the user will meet the tool where it is.
Most consumer-facing digital health content is written at a level that exceeds federal health communication guidelines. That gap isn’t a reflection of users’ limitations. It’s a reflection of design choices that weren’t made with the full range of users in mind. When content is inaccessible, the tool doesn’t just underperform. It can actively undermine the autonomous health decisions it was built to support.
This is a communication gap AI can’t fix alone.
This risk compounds when tools are deployed globally, across languages, literacies, and health belief systems that the original design team never accounted for. What reads as clear and actionable in one context can be opaque, alarming, or simply meaningless in another.
Where AI-Generated Health Communication Tends to Break Down
Health organizations are turning to AI for good reasons: better customer experience, faster follow-up, more responsive self-service, and individualized support at scale. The use cases are compelling and the technology is moving fast.
But in healthcare, technical capability is outpacing the ecosystem’s ability to responsibly explain, govern, and operationalize it. The gap between what AI can generate and what real users can reliably understand is not a technical problem. It’s a communication problem — and it’s one that standard testing rarely catches before it reaches users.
AI-generated health content tends to break down in predictable places:
Real-world use vs. intended use. A message tested with one population, in one context, gets deployed to many. The assumptions baked into how the system was instructed travel with the content — invisibly.
Fluency vs. interpretability. Fluent language isn’t the same as interpretable language. AI systems can produce output that reads as clear, confident, and complete while using framing, terminology, or implicit assumptions that a significant portion of users will process differently than intended.
Unintended consequences. A well-intentioned follow-up message can create alarm. A risk explanation can produce inaction. A personalized summary can inadvertently reinforce a misunderstanding the patient brought into the encounter.
Trust and credibility issues. AI-generated content might not match a user’s experience. It might contradict what their provider told them. It might use language that feels automated and impersonal in a moment that calls for something else. When this happens, trust erodes. Often quietly, and often before anyone on the design team knows it happened.
We help organizations anticipate where AI-generated health communications could be misunderstood before they reach real users. By identifying the recurring patterns that create confusion, inaction, or unintended consequences, we help teams adjust how systems are instructed, reviewed, and explained — before issues escalate.
What This Means for Designers
The risks that surface in digital health communication aren’t primarily technical. They’re structural — built into design assumptions that are easy to miss because they feel like common sense.
- Content written for an imagined average user who doesn’t represent your actual user base
- AI-generated language that’s readable but not interpretable in the user’s own terms
- Interfaces that confirm engagement without confirming understanding
- Global deployments that carry the communication assumptions of their country of origin into contexts where those assumptions don’t hold
These are the ways well-designed tools quietly fail the people they were built for.
How We Can Help
We evaluate and improve patient-facing digital health tools — with attention to the communication risks that federal guidelines flag but rarely explain, and that standard usability testing often misses.
Our process is designed for designers and purchasers of:
- Websites, apps, wearables, and devices
- Telehealth platforms
- EHRs and patient portals
- Digital health affiliates and health IT products
We evaluate tools against federal agency guidelines and work to improve communication across health literacy, digital health literacy, numeracy, and Culturally and Linguistically Appropriate Services (CLAS) standards. We focus on where tools create barriers that users can’t see and designers didn’t intend.
If you’re building a tool that needs to work for the people who need it most, let’s talk. Just fill out the form below.