"Mark my words. AI [artificial intelligence] is more dangerous than nukes." -- Elon Musk
The other day, already running my usual 10 minutes late, I was hustling from my office to the clinic when I saw one of our physician assistants in the hallway. She is known for her calm composure, so it seemed odd that she was visibly distressed. She had forgotten her laptop at home and could not find a loaner. I tried to save her day by offering her mine. That meant I would still have a computer to work on in the clerical area, but I would have to see patients the entire day without having a computer in the room with them.
The PA resisted, but I insisted. We both went our separate ways to move on with our days.
I had not seen patients like this in quite some time. By "like this," I mean without a computer in the room. It should not be a big deal, but I realized that the computer has become an entity that is always part of our patient encounters. If you don't have a stethoscope, you can get by, but you can't get by without a computer.
It's as if a third "being" is present in the room -- the first two being the doctor and the patient. We interact with patients briefly, but constantly work at the computer, from looking up their records, labs, and scans to placing orders and scheduling instructions.
And now that there is the "secure instant chat" feature, the computer talks back to us all the time. Other providers are messaging us constantly, and in a way, the computer keeps demanding our attention, stealing it from the patient in the room. It gives us alerts if there are medication interactions. It reminds us to change our passwords, prompts us to order tests, and stops us from closing patient charts if certain rules are not adhered to.
On this particular day, since I did not have this third "being" in the room, it was just the patient and me. I felt as if the encounter was incomplete. I kept wondering if the patient also felt incomplete because patients also typically feel reassured when they see their medical chart on the computer: "I don't remember which medications I'm taking and what surgery I had 10 years ago. It's all in the computer doc!"
Patients actually have a relationship with the computer, because now they look up their records themselves and try to make sense of things. Sometimes they do a good job of that, and other times they suck at it. They love the computer for that, but if the doctor spends too much time looking at the computer instead of the patient, they start feeling slighted, like a jealous lover.
On that day my mind kept telling me that since I would have to document everything on the computer and use it to place orders, I would have to hurry and end the visit with the patient if I wanted to stay on schedule. Then I asked myself, "How did you see patients just a few years ago, when there were no computers in the room -- relax!"
A sense of calm came over me. I forgot about having to answer constant messages and having to place orders or start documenting the visit. I was able to spend more time with the patient, to talk about their lives and share my own stories. I felt as if a crowd had dissipated. The room returned to the intimate doctor-patient relationship that had prevailed for centuries.
We talked about how many cows my patient had on her farm, how many of them her family ends up eating, and how many they give away. We talked about how, 20 years ago, one of my patients walked in on her daughter hanging herself in the closet. Her wound is still so fresh that she truly believes that whoever says time is the best healer is full of crap.
We talked about how one of my patients had a robust sex life, but he felt that the treatment of his prostate cancer had taken away from him the man he was. We talked about how my patient's nephew was found dead from a drug overdose. She was sad for him but happy that her children did not turn out like that. We talked about things that the third "being" in the room -- the computer, that is -- usually does not allow us to talk about, because we are too busy with the computer to be present with the patient.
We also talked about how I thought medicine would be practiced a few hundred years from now. A patient would walk through a "booth" of some sort. His symptoms would be heard just as Siri hears us, his clinical signs photographed and interpreted. He would be scanned from the skull all the way down to the toes, with all his internal organs anatomically scrutinized. A drop of blood taken by a painless finger prick would yield all sorts of lab tests, and the computer would churn out the most accurate diagnosis and treatment options, and might even inject the veins with the most precise dosage of highly effective drugs against the illness.
The genetic profile of patients would be analyzed instantly, and mutations would be identified and corrected expeditiously. Complex surgical procedures would be performed meticulously by ambidextrous robots. Humans would rely more on these "booths" than on their own clinical judgment, just as, if you asked me to calculate 89,573 × 74,823, I would rely more on a calculator than on my own arithmetic.
When Elon Musk warns us about the dangers of artificial intelligence, he is not referring to medicine in particular, but we can certainly analyze his statement in the context of the future of our profession. Will there be a day that this computer and this booth will become more intelligent than the physician's clinical judgment?
"Never!" we say. A computer has to be programmed by a human to give results. A computer can never supersede a human's complex clinical judgment. Well, I would say that if you told a person from 500 years ago that I would fly from New York to Kuala Lumpur within just one night, he would no doubt laugh and ridicule you for wasting his time.
If computers start treating us more accurately than we do, we will be happy to accept that. But if they start making decisions for us, however that comes about, how will the computers decide, say, when it's time to stop dialysis and move to comfort care? How will they decide how much pain is too much, when to give narcotics, and when to hold them out of worry about addiction?
How will they make personal connections with patients, share anecdotes, and discuss hobbies? How will they go to funerals and shed tears together with patients and their families when nothing else can be done? How will these computers learn to give comfort and solace to these patients? And even if they do, will the patients accept it as they accept it from us, human doctors?
What if computers turn against us? What if they start selecting which pregnancies to carry and which to terminate? What if they start dictating patients' advance directives? What if the computers put a monetary value on the number of years lived? What if they limit the number of children we can have? What if they tell me that my child's life is not worth living because of her disability?
What if they tell me that my grandma is occupying a hospital bed that is needed for a younger patient and that she will be denied any more life-prolonging treatments? What if they tell me that it's OK to clone humans and select the "best" ones? What if they make abortion and sexual identity choices for patients?
Some of you might say: Isn't that what humans are already doing to humans? Well, yes, you are right. But will we accept it if anyone other than humans -- in this case, artificial intelligence -- imposes these restrictions on us?
, is a hematology-oncology physician.
This post appeared on .