
My Medical AI Holiday Wish List

— Can artificial intelligence protect me from my own human fallibility?


    Mary Meyer MD, MPH, is an emergency physician with The Permanente Medical Group. She also holds a Master of Public Health and certificates in Global Health and Climate Medicine. Meyer previously served as a director of disaster preparedness for a large healthcare system.

Holiday season 2024 has officially kicked off, and my emergency department is as busy as ever. The first cases of flu are popping up, COVID-19 seems poised to make its annual visit, and there's no shortage of heart failure patients on the heels of Thanksgiving.

Like many physicians, I'm not a technology innovator. In fact, I probably fall more into the late majority portion of the technology adoption curve. But since generative artificial intelligence (AI) seems to be going the way of the horseless carriage, I recently decided to incorporate an AI tool into my practice. The tool my organization adopted functions somewhere between a scribe and an administrative assistant. I turn it on when I enter a patient's room, and when I return to my desk it spits out a history and physical, complete with an assessment and plan.

It has some quirks. For one, it seems to think all pain is severe. To be fair, it can't see patients' facial expressions or read their body language; nor has it spent two decades, as I have, interpreting what patients really mean when they describe their pain as unbearable. It also seems to struggle with context, or -- dare I say it -- common sense. For instance, the tool might include a patient's smoking and vaping choices when they're not particularly relevant. As in: The patient, with a history of smoking and vaping, presents complaining of severe ankle pain after a fall.

Or it might emphasize aspects of the history I wouldn't necessarily have included, such as why the patient's husband's best friend who is a doctor thinks they have diverticulitis. Conversely, it sometimes omits important information, like the fact that my patient with a right-hand tendon laceration is right-hand dominant.

What I'm getting at is that my AI tool doesn't speak human; it seems to struggle with the nuance and frequent self-contradiction that characterize so many people's attempts to describe what ails them.

I'm fine with this. Given AI's rapid rate of evolution -- the current tool is already an improvement over its immediate predecessor -- it's clear that its general utility in my practice will improve. I'm willing to humor it as it takes its first baby steps. This latest tool has, however, gotten me pondering what I really want out of an information network that has the potential to genuinely transform my profession. So, I created an AI holiday wish list.

This Year I Want...

I have written before about the evolution of my practice. Since 2020, my emergency department has swelled with older patients who are sicker and who arrive in ever-increasing numbers. The stakes are high, and I am aware that it is dismayingly easy for me to miss important details of my patients' medical histories. So, my first wish is for an AI tool that can probe the patient's chart and remind me of details that are both easy to miss and relevant to the current presentation.

Envision the patient with hip pain who has a remote history of septic arthritis in that joint. Or the middle-aged woman with abdominal pain who had an appendectomy as a child and a cholecystectomy 10 years ago. Or just about any patient on anticoagulation. Rather than spending precious minutes wading through a patient's last cardiology visit for the details of his three angioplasties and four-vessel coronary artery bypass graft, I wish for a tool that can instantly harness this information and present it to me as I walk into the room to ask him about his chest pain. I'm looking for thorough, succinct, and relevant.

The next thing I'd like my ideal AI tool to do is offer clinical decision-making support in the form of a broadened differential diagnosis and evidence-based algorithms. It should remind me to check the Wells criteria in my patient with dyspnea and then calculate the Wells score for me. Or tell me about the latest disease outbreak in the country my patient with a fever just returned from. In short, it should nudge me to consider the outliers, those diagnoses that are uncommon but must nonetheless be considered.
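To give a sense of how mechanical this kind of decision support actually is, here is a minimal sketch of the published Wells criteria for pulmonary embolism in Python. The point values and the two-tier cutoff are the standard published ones; the function and field names are hypothetical, not any vendor's actual tool.

```python
from dataclasses import dataclass

# Published Wells criteria for pulmonary embolism, with standard point values.
@dataclass
class WellsInputs:
    clinical_signs_of_dvt: bool             # 3.0 points
    pe_most_likely_diagnosis: bool          # 3.0 points
    heart_rate_over_100: bool               # 1.5 points
    immobilization_or_recent_surgery: bool  # 1.5 points
    prior_dvt_or_pe: bool                   # 1.5 points
    hemoptysis: bool                        # 1.0 point
    active_malignancy: bool                 # 1.0 point

def wells_score(p: WellsInputs) -> tuple[float, str]:
    """Return the Wells score and its two-tier interpretation."""
    score = (
        3.0 * p.clinical_signs_of_dvt
        + 3.0 * p.pe_most_likely_diagnosis
        + 1.5 * p.heart_rate_over_100
        + 1.5 * p.immobilization_or_recent_surgery
        + 1.5 * p.prior_dvt_or_pe
        + 1.0 * p.hemoptysis
        + 1.0 * p.active_malignancy
    )
    # Two-tier rule: a score above 4 means PE is "likely."
    return score, ("PE likely" if score > 4 else "PE unlikely")

# Example: a tachycardic patient with a prior DVT in whom PE tops the differential.
score, risk = wells_score(WellsInputs(
    clinical_signs_of_dvt=False,
    pe_most_likely_diagnosis=True,
    heart_rate_over_100=True,
    immobilization_or_recent_surgery=False,
    prior_dvt_or_pe=True,
    hemoptysis=False,
    active_malignancy=False,
))
print(f"Wells score: {score} -> {risk}")  # Wells score: 6.0 -> PE likely
```

The arithmetic is trivial; the hard part, and the part I want AI to handle, is surfacing the right inputs from the chart and the bedside at the right moment.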

After all, one of the most remarkable aspects of emergency medicine is that I, along with all of my colleagues, repeatedly conjure up differential diagnoses from memory for multiple patients over the course of a shift. I might sometimes consult a textbook or a colleague. But the vast majority of my practice is based on more-or-less instant retrieval of medical knowledge and my ability to marry that knowledge with the various aspects of a patient's presentation to create a coherent rendering of her illness.

This manner in which physicians arrive at a diagnosis -- how they think -- remains shrouded in mystery despite years of researchers' attempts to define and reproduce it. In fact, in a recent ChatGPT study, researchers found that it was only when they stopped trying to recreate how doctors think, and instead focused on a computer's ability to predict language, that the diagnostic capability of computer models began to approach that of physicians.

The pitfall, of course, is that this recall process is fraught with individual practice variation and broad potential for error, sometimes with serious consequences. I find myself wondering whether, in future decades, our current practice of relying on a physician's memory and experience to arrive at accurate diagnoses will come to be seen through much the same lens that bloodletting is viewed today: haphazard to the point of barbaric.

Which brings up an important point: accuracy. A future AI tool will be of no use to me if it reminds me to check the patient's immunization history only to turn around and feed me the latest conspiracy theories about vaccination. Yuval Harari has written about the naive belief that information networks with more information will necessarily yield more truth. In fact, the opposite is often true, as evidenced by the mountains of misinformation and disinformation that pollute the online medical universe. For an AI tool to transform my practice, it will need to find the kernel of truth hidden amongst fields of conjecture and conspiracy.

My "Reach" List

My wish list contains a few more far-flung items -- the stuff I'm not really expecting to get but am still willing to ask for. Having achieved relevance, accuracy, and a personalized understanding of the patient's medical condition, can future AI tools offer me a prognosis? What is the patient's likelihood of recidivism, of relapse, or of a TIA that evolves into a stroke? Can future AI tools break down language barriers, like the Babel fish in The Hitchhiker's Guide to the Galaxy?

And finally, every holiday list has one big-ticket item, that thing you secretly wish for above all the others. Here is mine: I want an AI tool that can protect me from my own human fallibility.

Like my patients, I too am filled with nuance and self-contradiction. Research has demonstrated that physicians are all too capable of a remarkable number of cognitive biases, from premature closure to confirmation bias to anchoring. But I don't need a study to tell me that all of these tendencies worsen when I am tired, when it's the 11th hour of a 12-hour shift, or when I am working my eighth shift in a row.

Can future AI models warn me when I am engaged in dangerous multitasking? Or simply too exhausted to treat my patients accurately? Can they warn my supervisors when I am spread dangerously thin? Can they notice when I am juggling too many critically ill patients and rebalance the distribution of acuity amongst the various providers in my emergency department?

My wish is for an AI tool that seeks to mitigate my Achilles' heels, rather than a network that views me as a cog in a system that can always be made more efficient. I suppose one might reasonably ask whether we are headed toward an era without human physicians, given their obvious weaknesses. I don't think we are: studies consistently demonstrate a preference on the part of patients to be treated by human physicians, fallible as they may be. If AI is ultimately to evolve into a tool that assists rather than replaces human physicians, my wish is for a tool that makes me the best human doctor I can be.

This perspective is the author's alone and does not necessarily reflect that of any institutions or companies with which she is affiliated.