Need to Answer a Routine Question From a Patient? AI Might Come to Your Rescue

— Artificial intelligence could lighten our load, if we can figure out how to use it wisely

Fred Pelzman is an associate professor of medicine at Weill Cornell, and has been a practicing internist for nearly 30 years. He is medical director of Weill Cornell Internal Medicine Associates.

The other day, after receiving seven patient portal messages with essentially the same question over the course of two hours, I decided to try something new.

The question was, "Should I get the new RSV [respiratory syncytial virus] vaccine?" The "something new" was to see whether artificial intelligence (AI) could help me answer it.

We've all been hearing a lot of talk about the promise of artificial intelligence for helping providers take better care of their patients, and hopefully making their lives easier. For the most part, however, we've yet to see much happen on the ground.

Here was a situation where I thought, "Maybe one of these AI programs could help generate an answer to a question I'm getting over and over, and deliver the service our patients are looking for, without taking a lot of effort up front." It would be an answer that could be reused each time the question came up, and one that providers could trust.

Getting the Wording Just Right

I opened up the ChatGPT app on my desktop, and sat there for a few minutes trying to compose the question in my head.

After a few different iterations, I finally landed on a query that did what I was looking for, and gave me an answer that actually seemed pretty reasonable.

I asked the program to provide my recommendations -- using shared decision-making -- for a patient who has an underlying medical condition, qualifies for the RSV vaccine, and wants to know whether to proceed with getting it. I asked it to avoid a lot of medical terminology, and to write at a high school reading level. What it spat out, after just a few seconds, seemed like a fairly well-thought-out response to the question.

It went over some clinical background about the virus and the vaccines, and mentioned that this was a new vaccine that still carried some unknown risks. It also listed the reasons a patient might consider getting the vaccine, along with some benefits and potential concerns, and said that this was ultimately a decision the patient would have to make for themselves. Sure, it seemed a little formulaic, a little cut-and-pasted, but in the end it read fairly reasonably, and seemed like something that would probably satisfy the patient asking the question.

We can argue whether a chatbot reply to a question is really "shared medical decision-making." Does that answer provide a true back-and-forth between the patient and the provider, an opportunity to ask and answer questions, a place where the patient can express their values and interests and hopes and fears about something like a new vaccine? Is that really what patients are going to get through a portal message?

Most of the time, when patients send us questions like these -- "Should I get my flu shot this week or wait till later?" "Should I get the COVID-19 booster before my flu shot?" "How long after a COVID-19 infection can I get the COVID booster?" "Are there any other vaccines I'm due for?" -- we probably send them pretty bland, terse, and straightforward answers: "Sure." "Seems fine." "3 months." "The shingles vaccine would be good."

No Absolute Right or Wrong Answer

For many of these questions, there is no absolute right or wrong answer; our own judgment and the wishes of our patients sometimes come into play, and patients are certainly free to reply to our responses with additional questions. But since this is a lot of busy work that gets added onto our day -- and a lot of work that's unreimbursed -- I think most of us are just interested in getting these answered and out of our in-baskets.

Sure, we want to be responsive to our patients, and make sure they get the answers they need to help them move forward and stay healthy. But for the most part, we don't want to engage in a lot of back-and-forth, endless discussions about the risks, benefits, and alternatives, or long explorations of their internal model of health, their implicit biases, and their deep-seated worries and concerns.

Because these things often come to us in waves and there's no one else here to help us, it would be great if we had the resources to have someone screen these questions and give out some pretty standard answers. But most of us in primary care are left high and dry to deal with them on our own, on our own time, after hours or between patients or meetings.

So it's no surprise that our answers end up being short, snappy, and to the point. And if we can find a way to build a smarter system that helps us take care of these fairly straightforward requests in a manner concordant with the care we provide, then perhaps we'd all be a lot better off.

A colleague of mine who has been working on the IT side of our electronic health record has told me that they are getting close to developing an artificial intelligence system that will do just that -- pre-screen our patient portal messages and actually suggest an answer. I can foresee a time when these suggestions could be based on our own historical responses to questions from patients, written in the language we would use, in our own medical voices.

More Empathic?

Apparently, in some of the earlier tests of these systems, patients have reported preferring the artificial intelligence response to their portal message over the one they would receive directly from a clinician, finding it more empathic.

My suspicion is that this is probably true because we get dozens and dozens of these a day, we get asked the same questions over and over, and each one takes some time and mental energy and diverts our attention from something else that needs attending to. The artificial intelligence system is perhaps not as bothered by this.

I worry about letting these systems become too autonomous, turning out responses without letting us review and approve them. When I sent out my reply to the patient who wanted to know about the RSV vaccine, I added a tagline at the end that said "Written by AI/edited by ME." I hope this will allow some degree of latitude as patients see these message replies, and that they won't become too frustrated that a machine is doing part of the work here.

Think of the future possibilities, where these systems will not only be reading and trying to re-create my own style in a patient response, but also delving into the patient's electronic health record (EHR) for clues to help them move forward on health issues.

The system could scan the chart to see whether patients had had the vaccine or were due for it; check for a medical contraindication to the vaccine based on laboratory findings, allergies, or items on their medical problem list; and even suggest additional things that the patient should set up, such as a follow-up on lab testing or healthcare maintenance items they might be overdue for.

I envision a future where we all have an artificial intelligence assistant working alongside us, pulling in data points not only from within our local EHR, but querying insurance claims data, pharmacy records, outside lab reports, and even information buried in scanned old paper notes. "Mrs. Jones, I see that you are overdue for colon cancer screening. Here are some options that are available to you; which would you like to pursue? Here are Dr. Pelzman's recommendations based on your prior choices and family history, but any of these others might be satisfactory for the following reasons..."

Once the patient has accepted a particular option, the system could go ahead and place the referral, order the colonoscopy or at-home testing, send a message to their insurance company to get authorization, send a prescription for the colonoscopy prep to their pharmacy along with detailed instructions and FAQs, schedule an Uber to pick them up on the morning of the procedure, and then update their health maintenance in the electronic health record and make sure they don't miss any necessary follow-up.

I can only hope that folks out there who are much smarter than me are working on this kind of thing. The potential is there, and those of us on the front lines hold out hope that this can truly make things better.

Let me know in the comments section if there are things you've tried using AI for in your clinical practice -- ways to improve communication with patients, write your notes, do billing, give guidance through the patient portal, provide patient education, or follow up on lab results in a smarter way.

And maybe next week I'll let one of these chatbots write my column for me, all on its own. "Written by AI/edited by ME?"