People and topics that I’m tracking right now, in the drinking-from-a-firehose space of AI and Health Communication:
Friend of our show Dr. Ashley Love, along with her colleagues, published a paper in SOPHE, “Artificial Intelligence in Public Health Education: Navigating Ethical Challenges and Empowering the Next Generation of Professionals.” Dr. Love previews the paper in a LinkedIn post: “Data matters. What is missing matters too. The future of AI in public health education will be shaped by the values we bring to it now. Ethics. Equity. Context. Human connection.” Check out my interviews with Dr. Love here and here.
Another friend of the show, Dr. Ayo Olagoke, wrote a thought-provoking LinkedIn post sharing an important point about framing that she’d made in a recent class: “How we describe AI shapes how we use it and what we expect from it.” Dr. Olagoke gave a delightful list showcasing the range of ways she has heard people describe AI:
“Jibiti (a Nigerian term for a fraudster),
😃 Big Brother,
🫤 A sneaky assistant or intern,
😥 A co-worker coming for your job,
😡 A short-tempered teacher,
😍 A listening friend.”
Dr. Olagoke adds: “Each of these mental images carries assumptions. And those assumptions influence trust, skepticism, and ultimately, adoption.” Hear Dr. Olagoke speak with me about AI and health communication.
Also on the topic of trust was an article in STAT News called “The AI push in health care is deepening medicine’s trust crisis.” We’re all aware of how quickly health systems are adopting AI for a variety of tasks. Yet this article reminds us that “Patients who have experienced discrimination in health care are significantly less likely to trust health systems to use AI responsibly. Rolling out AI systems without meaningfully involving patients and communities in the decision-making only repeats the pattern that led to the mistrust in the first place.” The authors assert that “Health care’s adoption of AI should move at the speed of trust, not investment.”

Everyone reading this is aware that most patient-facing information does not meet health literacy standards, and AI presents some unique challenges. What about when we, as professionals, use AI to write patient-facing material? Folks at IHA have been raising good questions about the need for organizations to have formal AI policies that establish safeguards to ensure accuracy of patient information, maintain health literacy standards, and promote consistency across departments. Professionals who are each generating their own materials may benefit from tools with AI standards and guidance built in. But the policies, standards, and guidance need to be built first.
Organizations do have resources to turn to as they consider building their AI policies. Thanks to Chris Trudeau for links to two policy templates that “address the technical and ethical risks inherent in healthcare AI”:
- The Coalition for Health AI (CHAI) Policy Template (The CHAI folks are also part of MedHELM, below).
Also on the topic of oversight, an article in MIT’s Technology Review points out the importance of third-party expert evaluators for health-focused LLMs, as well as the need for evidence-based standards as the field evolves and tools are rapidly deployed. The author references Stanford researchers’ MedHELM (Holistic Evaluation of Large Language Models for Medical Tasks) evaluation tool, which includes evaluation of aspects of patient communication in LLMs. (Thanks to the UConn Center for mHealth & Social Media for drawing my attention to this article.)
Clearly this is an inflection point in how words, technology, and health come together. Getting some grounding and clarity is essential in times of rapid change. What parts of this landscape are you paying attention to? Connect with me on LinkedIn and let me know!