This show originally aired March 12, 2019.
With Meghna Chakrabarti
Artificial intelligence will utterly transform medicine. Better diagnoses, but also privacy concerns. But one doctor says if done right, AI could put the "care" back in health care.
Dr. Eric Topol, author of the new book "Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again." Cardiologist and executive vice president at Scripps Research. Director and founder of the Scripps Research Translational Institute. (@EricTopol)
From The Reading List
Excerpt from "Deep Medicine" by Dr. Eric Topol
“You should have your internist prescribe anti-depression medications,” my orthopedist told me.
My wife and I looked at each other, bug-eyed, in total disbelief. After all, I hadn’t gone to my one-month post-op clinic visit following a total knee replacement seeking psychiatric advice.
My knees went bad when I was a teenager because of a rare condition known as osteochondritis dissecans. The cause of this disease remains unknown, but its effects are clear. By the time I was twenty years old and heading to medical school, I had already had dead bone sawed off and extensive reparative surgery in both knees. Over the next forty years, I had to progressively curtail my physical activities, eliminating running, tennis, hiking, and elliptical exercise. Even walking became painful, despite injections of steroids and synovial fluid directly into the knee. And so at age sixty-two I had my left knee replaced, one of the more than 800,000 Americans who have this surgery, the most common orthopedic operation. My orthopedist had deemed me a perfect candidate: I was fairly young, thin, and fit. He said the only significant downside was a 1 to 2 percent risk of infection. I was about to discover another.
After surgery I underwent the standard—and, as far as I was told, only—physical therapy protocol, which began the second day after surgery. The protocol is intense, calling for aggressive bending and extension to avoid scar formation in the joint. Unable to get meaningful flexion, I put a stationary bicycle seat up high and had to scream in agony to get through the first few pedal revolutions. The pain was well beyond the reach of oxycodone. A month later, the knee was purple, very swollen, profoundly stiff, and unbending. It hurt so bad that I couldn’t sleep more than an hour at a time, and I had frequent crying spells. Those were why my orthopedist recommended antidepressants. That seemed crazy enough. But the surgeon then recommended a more intensive protocol of physical therapy, despite the fact that each session was making me worse. I could barely walk out of the facility or get in my car to drive home. The horrible pain, swelling, and stiffness were unremitting. I became desperate for relief, trying everything from acupuncture, electroacupuncture, cold laser, and an electrical stimulation (TENS) device to topical ointments and dietary supplements including curcumin, tart cherry, and many others—fully cognizant that none of these putative treatments have any published data to support their use.
Joining me in my search, at two months post-op, my wife discovered a book titled Arthrofibrosis. I had never heard the term, but it turned out to be what I was suffering from. Arthrofibrosis is a complication that occurs in 2 to 3 percent of patients after a knee replacement—that makes the condition uncommon, but still more common than the risk of infection that my orthopedist had warned me about. The first page of the book seemed to describe my situation perfectly: “Arthrofibrosis is a disaster,” it said. More specifically, arthrofibrosis is a vicious inflammation response to knee replacement, like a rejection of the artificial joint, that results in profound scarring. At my two-month post-op visit, I asked my orthopedist whether I had arthrofibrosis. He said absolutely, but there was little he could do for the first year following surgery—it was necessary to allow the inflammation to “burn out” before he could go back in and remove the scar tissue. The thought of going a year as I was or having another operation was making me feel even sicker.
Following a recommendation from a friend, I went to see a different physical therapist. Over the course of forty years, she had seen many patients with osteochondritis dissecans, and she knew that, for patients such as me, the routine therapeutic protocol was the worst thing possible. Where the standard protocol called for extensive, forced manipulation to maximize the knee flexion and extension (which was paradoxically stimulating more scar formation), her approach was to go gently: she had me stop all the weights and exercises and use anti-inflammatory medications. She handwrote a page of instructions and texted me every other day to ask how “our knee” was doing. Rescued, I was quickly on the road to recovery. Now, years later, I still have to wrap my knee every day to deal with its poor healing. So much of this torment could have been prevented.
As we’ll see in this book, artificial intelligence (AI) could have predicted that my experience after the surgery would be complicated. A full literature review, provided that experienced physical therapists such as the woman I eventually found shared their data, might well have indicated that I needed a special, bespoke PT protocol. It wouldn’t only be physicians who would get a better awareness of the risks confronting their patients. A virtual medical assistant, residing in my smartphone or my bedroom, could warn me, the patient, directly of the high risk of arthrofibrosis that a standard course of physical therapy posed. And it could even tell me where I could go to get gentle rehab and avoid this dreadful problem. As it was, I was blindsided, and my orthopedist hadn’t even taken my history of osteochondritis dissecans into account when discussing the risk of surgery, even though he later acknowledged that it had, in fact, played a pivotal role in the serious problems that I encountered.
Much of what’s wrong with healthcare won’t be fixed by advanced technology, algorithms, or machines. The robotic response of my doctor to my distress exemplifies the deficient component of care. Sure, the operation was done expertly, but that’s only the technical component. The idea that I should take medication for depression exemplifies a profound lack of human connection and empathy in medicine today. Of course, I was emotionally depressed, but depression wasn’t the problem at all: the problem was that I was in severe pain and had Tin Man immobility. The orthopedist’s lack of compassion was palpable: in all the months after the surgery, he never contacted me once to see how I was getting along. The physical therapist not only had the medical knowledge and experience to match my condition, but she really cared about me. It’s no wonder that we have an opioid epidemic when it’s a lot quicker and easier for doctors to prescribe narcotics than to listen to and understand patients.
Almost anyone with chronic medical conditions has been “roughed up” like I was—it happens all too frequently. I’m fortunate to be inside the medical system, but, as you have seen, the problem is so pervasive that even insider knowledge isn’t necessarily enough to guarantee good care. Artificial intelligence is not going to solve this problem on its own. We need humans to kick in. As machines get smarter and take on suitable tasks, humans might actually find it easier to be more humane.
Excerpted from Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, by Eric Topol. Copyright © 2019 by Eric Topol. Available from Basic Books, an imprint of Perseus Books, a division of PBG Publishing, LLC, a subsidiary of Hachette Book Group, Inc.
New York Times: "Opinion: The A.I. Diet" — "Some months ago, I participated in a two-week experiment that involved using a smartphone app to track every morsel of food I ate, every beverage I drank and every medication I took, as well as how much I slept and exercised. I wore a sensor that monitored my blood-glucose levels, and I sent in a sample of my stool for an assessment of my gut microbiome. All of my data, amassed with similar input from more than a thousand other people, was analyzed by artificial intelligence to create a personalized diet algorithm. The point was to find out what kind of food I should be eating to live a longer and healthier life.
"The results? In the sweets category: Cheesecake was given an A grade, but whole-wheat fig bars were a C-. In fruits: Strawberries were an A+ for me, but grapefruit a C. In legumes: Mixed nuts were an A+, but veggie burgers a C. Needless to say, it didn’t match what I thought I knew about healthy eating.
"It turns out, despite decades of diet fads and government-issued food pyramids, we know surprisingly little about the science of nutrition. It is very hard to do high-quality randomized trials: They require people to adhere to a diet for years before there can be any assessment of significant health outcomes. The largest ever — which found that the 'Mediterranean diet' lowered the risk for heart attacks and strokes — had to be retracted and republished with softened conclusions. Most studies are observational, relying on food diaries or the shaky memories of participants. There are many such studies, with over a hundred thousand people assessed for carbohydrate consumption, or fiber, salt or artificial sweeteners, and the best we can say is that there might be an association, not anything about cause and effect. Perhaps not surprisingly, these studies have serially contradicted one another. Meanwhile, the field has been undermined by the food industry, which tries to exert influence over the research it funds."
Wired: "Opinion: The Life-Threatening Consequences of Overhyping AI" — "On February 11, The New York Times published a story with the headline 'AI Shows Promise Assisting Physicians.' While the article focused on a scientific paper showing how an artificial intelligence system could help doctors diagnose certain conditions, it missed a key part of the AI story: Accuracy does not equal impact.
"As the Times wrote, the AI software 'was more than 90 percent accurate at diagnosing asthma; the accuracy of physicians in the study ranged from 80 to 94 percent. In diagnosing gastrointestinal disease, the system was 87 percent accurate, compared with the physicians’ accuracy of 82 to 90 percent.' The Times essentially sourced numbers from the first and third rows of a key table in the Nature article it was reporting on. Why not the row in the middle? The one that dealt with potentially life-threatening encephalitis? There, we see the AI was just 83.7 percent accurate, while the physician accuracies were all above 95 percent. In other words, the human doctors beat the AI system when it came to correctly diagnosing a more serious illness. The reporter doesn’t reference this point in the analysis, but I feel it’s vital to include this detail for consideration. [Editor’s note: Cade Metz, the Times reporter, is a former WIRED staff writer.]
"The Nature article also points out that the scientists tested the AI against five sets of physicians with different levels of experience. It does not claim the AI performed better than experienced doctors, and in fact says, 'Our model achieved an average F1 score [accuracy measure] higher than the two junior physician groups but lower than the three senior physician groups. The result suggests that this AI model may potentially assist junior physicians in diagnoses but may not necessarily outperform experienced physicians.' "
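The "F1 score" the Nature authors cite is glossed above simply as an accuracy measure; more precisely, it is the harmonic mean of precision and recall, which is why it is often preferred to raw accuracy for diagnosis tasks where some conditions are rare. A minimal sketch of the computation (the input values here are illustrative, not figures from the study):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall.

    Precision: fraction of positive diagnoses that were correct.
    Recall: fraction of true cases that were diagnosed.
    """
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values only: a model that is precise but misses cases
# scores lower than its raw precision would suggest.
print(round(f1_score(0.90, 0.80), 3))  # 0.847
```

Because the harmonic mean punishes imbalance, a model cannot achieve a high F1 by being strong on only one of the two components — a property that matters when, as in the encephalitis row discussed above, missing a serious diagnosis is costly.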
The Guardian: "Robots and AI to give doctors more time with patients, says report" — "Robots, artificial intelligence and smart speakers will ease the burden on doctors and give them more time with patients, according to an NHS report on the pending technological 'revolution' in healthcare.
"Developments in the ability to sequence individuals’ genomes – the entirety of their genetic data – will also spur on advances, according to the review published on Monday.
"The report, led by a US academic, Eric Topol, calls for fresh education for staff, with 90% of all NHS jobs predicted to require digital skills within 20 years.
"But those who fear robots could edge out human practitioners may be reassured by the review’s suggestion that technology will 'enhance' professionals, giving them greater time for patients."
Anna Bauman produced this hour for broadcast.
This program aired on July 24, 2019.
- Does Artificial Intelligence Need A Code Of Ethics?
- What Does The Future Of Tech Hold? AI Crisis Awaits, Former Google China Head Says
- Special Hour: Innovation In Medicine, Robotics, And The Arts In Greater Boston
- How Artificial Intelligence Is Changing Medicine
- For Some Hard-To-Find Tumors, Doctors See Promise In Artificial Intelligence