If you care about health literacy, the term multimodality may become a welcome addition to your toolbox.
In this episode, you’ll learn what multimodality is, some of what it has to do with health literacy, and three ways you can put it to work. You’ll also learn about the scholar we have to thank for helping put multimodality on the map.
Hi. This is 10 Minutes to Better Patient Communication from Health Communication Partners. I’m Dr. Anne Marie Liebel, and as a confirmed health literacy fan, I’m glad for the high level of awareness around health literacy these days. I am often asked some version of the question: what can we do better to support patients in growing their health literacy?
My goal in this episode is to help you with just that. I’m going to introduce you to someone you might not know, and some of his ideas you might not have heard of. And, to add some more to your health literacy toolbox, I’ll give you three specific tools you can use to help expand your patients’ health literacy.
You may have heard our big announcement! Over the years, as I’ve been talking with health professionals about communication, I’ve been asked more than once: do you have an app? I’m proud to say that now, the answer is YES! Health Communication Partners has an app! It lets you practice any strategic communication, right on your phone. That’s support that goes along with you. And we collaborate with you so it’s a good fit. I’m excited to share this with you, so hit me up on Twitter or LinkedIn, or just visit HealthCommunicationPartners.com and click on the banner!
Gunther Kress was Professor of Semiotics and Education at the Institute of Education, University of London. He was a leading contemporary voice in language studies, and died this past summer. I learned about his work from my professor Brian Street. In the show transcript, I have some tributes to him from social media, as well as links to videos and some of his work.
Kress is well known for his work in multimodality, and I believe some aspects of it may be helpful in health literacy. So I’ll give a quick summary of multimodality (and with references so you can geek out if you wish).
Generally speaking, a mode, in language and literacy terms, is a way that meaning is communicated. Examples of modes include speech, written text, images, signs, etc. I have a link to a video of Kress answering the question, What’s a mode? Any work that combines more than one mode is called ‘multimodal.’ For instance, think of how videos combine images and sound. Those are two different modes. Videos are, by nature, multimodal, whereas a photograph is monomodal.
It might be easier to understand multimodality if we consider what it’s not. Kress and his collaborator Theo van Leeuwen point out that Western culture has historically preferred monomodality, writing:
The most highly valued genres of writing (literary novels, academic treatises, official documents and reports, etc.) came entirely without illustration, and had graphically uniform, dense pages of print. Painting nearly all used the same support (canvas) and the same medium (oils), whatever their style or subject. (Multimodal Discourse p.1)
Hmm…“graphically uniform, dense pages of print,” that are “entirely without illustration?” This makes me think of some discharge instructions I saw once. Anyhow…moving on. Kress and van Leeuwen describe how, even as multimodal works began to be made, production itself remained monomodal for a while. Each multimodal work was a team effort, made by a group where each person was responsible for one mode. Everyone a specialist. For a newspaper story, for example, you have a writer, a designer, a data visualizer, etc. They act as an ensemble to make the newspaper article. Such works “were produced in this way, with different, hierarchically organized specialists in charge of the different modes, and an editing process bringing their work together.” (p. 2)
Certainly, this still happens. But now with digital media, the different modes “can be operated by one multi-skilled person, using one interface…so that he or she can ask at every point: ‘Shall I express this with sound or music?’ ‘Shall I say this visually or verbally?’ and so on.” (Those excerpts are from Kress & van Leeuwen’s Multimodal Discourse, and links are in the show notes here.) Simply because we’re alive in the 21st century, we all consume, and often produce, complex multimodal texts: any work with more than one mode, more than one way of making meaning. If you’ve ever shot a video or added images to text, you’ve produced a multimodal work. That is to say, because of digital communication, we can all do multimodal work. The same is true for your patients.
Because of the rapid pace of technology, the expensive design suites that were once owned and operated only by specialists are now available to all of us. And they’re on our cell phones! Think about all the multimodal work you can consume and produce with a cellphone: emojis are added to text messages, images are modified with color and shape, videos are shot, edited, viewed, and shared, social media is scanned and updated. For more examples, ask the nearest 10-year-old. Multimodal communication is part of everyday life for all of us. Including your patients.
Why is this good news? Because it invites us to reconsider what assumptions we are making about the interpretive resources and practices of our audiences. So I’ll invite you to consider: what are you assuming about what your patient reads and writes, or produces and consumes? How is this shaping the way you interact with them? Your patients are likely making and interpreting multimodal work (maybe on their cell phone). Rather than worrying about a patient’s educational level or low score on some assessment, focus on the ways they are producers and consumers of multimodal works.
Multimodality is also good news for you in your practice. This is because it helps you focus on the parts of health literacy you can actually do something about: the in-person conversations you have with patients, any digital patient communication, any materials shared with patients. In short, multimodality invites you to think about any way words and images are used before, during, and after the patient encounter. What can you do with this cool new information when it comes to communicating with your patients, and the ways they communicate with you? Here are three ideas to get you started:
1. As I’ve said before, one of the most powerful ways you can help patients learn is through mixing your modes.
This can be simple and unfussy. For example, take a written text you use frequently. Read it aloud (the voice recorder on your phone works just fine) and turn it into an audio file. Post the audio file on your website. You only have to do this once, to help many patients. They can read the text, listen to the audio file, or both.
2. How long has it been since you looked at the written materials you give to patients?
Make sure written materials are accompanied by images, and broken up into small paragraphs. Everyone finds this more manageable and memorable.
3. Apps are nearly always multimodal.
They are also interactive. Both of these traits are beneficial for learning. Apps can also be less intimidating than pages of solid prose.
I’ll challenge you to consider any communication as multimodal. Kress and van Leeuwen warn us that pretending that language “is the central means of representing and communicating…is simply no longer tenable, that it never really was, and certainly is not now.” (p. 111) So pull on those other modes! Think: images, layout, color, motion, sound.
You and I are both trying to reach people, and I invite you to join me in using multiple modes. This has been 10 Minutes to Better Patient Communication. I’m Dr. Anne Marie Liebel.