This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
I just had a birthday, and you know what that means: I’m newly eligible for a screening colonoscopy (#milestones!). I’ve been thinking about cancer screening a lot recently, because I’ve seen a handful of headlines in the past few months about how AI will revolutionize cancer detection.
Just last week, Microsoft announced that it had partnered with the digital pathology company Paige to build the world’s largest image-based AI model for identifying cancer. The algorithm’s training data set contains 4 million images. “This is sort of a groundbreaking, land-on-the-moon kind of moment for cancer care,” Paige CEO Andy Moye told CNBC.
Well, it might be. Last month, results from the first clinical trial of AI-supported breast cancer screening came out. The researchers compared two methods for reading a mammogram: a standard reading by two independent radiologists, and a system in which a single radiologist read the image while an AI assigned the patient a cancer risk score from 1 to 10. In the latter group, patients who scored a 10 (the highest risk) had their images read by two radiologists after all. The AI-supported model reduced the radiologists’ workload by 44% and detected 20% more cancers.
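The triage logic described above is simple enough to sketch. The snippet below is a toy illustration, not the trial’s actual protocol: the function name, the batch of scores, and the escalation rule (double read only at a score of 10) are assumptions drawn from the description in this article.

```python
# Hypothetical sketch of AI-supported triage: every exam gets an AI risk
# score from 1 to 10, and only the highest-risk exams (score 10) are
# escalated from a single radiologist read to a double read.

def readers_needed(ai_risk_score: int) -> int:
    """How many radiologists read the mammogram under the AI-supported model."""
    if not 1 <= ai_risk_score <= 10:
        raise ValueError("risk score must be between 1 and 10")
    return 2 if ai_risk_score == 10 else 1

# Workload comparison on a toy batch of scored exams.
scores = [3, 7, 10, 5, 10, 1]
standard_reads = 2 * len(scores)                      # two readers per exam
ai_supported_reads = sum(readers_needed(s) for s in scores)
print(standard_reads, ai_supported_reads)             # 12 vs. 8 in this toy batch
```

The workload saving comes entirely from the exams the AI scores below 10, which get one read instead of two.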
That sounds like a good thing. In theory, catching cancers earlier should make them easier to treat, saving lives. But that’s not always what the data shows. A study published in late August combed the literature for randomized clinical trials that compared mortality (from any cause, not just cancer) in two groups: people who underwent cancer screening and people who did not. For most common types of cancer screening, the researchers found no significant difference. The exception was sigmoidoscopy, a type of colon cancer screening that visualizes only the lower portion of the colon.
Why this might be isn’t totally clear. It could be a matter of study design: the trials the authors included in their analysis might not have followed participants long enough to show a difference. Another explanation is that the benefits of screening for some people are outweighed by the harms to others who don’t benefit. For example, if screening catches deadly cancers early, patients gain precious time to successfully treat the disease. But if screening catches many cancers that were never going to kill anyone, the balance tips. The problem is known as overdiagnosis. I like this description from a team of researchers in Australia: “Overdiagnosis is not a false-positive diagnosis (diagnosing a disease in an individual who does not meet diagnostic criteria) or a misdiagnosis (diagnosing the wrong condition in an individual who does have an underlying disease).” The diagnosis is correct, but it will provide little to no health benefit for the patient and may even result in harm.
There is no question that screening programs have caught cancers that would have killed people had they gone undetected. So why worry about overdiagnosis? Screening can also cause harm. Patients undergoing colonoscopies sometimes end up with a perforated bowel. Biopsies can lead to infection. Treatments like radiation and chemotherapy come with serious risks to people’s health, and so does surgery to remove tumors.
So will AI-assisted screening lead to more overdiagnosis? I checked in with Adewole Adamson, a dermatologist and researcher at the Dell Medical School at the University of Texas at Austin. “Without reservation I would say ‘Yes, it will,’” he says. “People think that the goal is to find more cancer. That’s not our goal. Our goal is to find cancers that will ultimately kill people.”
And that’s tricky. For the vast majority of cancers, there aren’t good ways to separate nonlethal cases from lethal ones. So doctors often treat them all as if they might be deadly.
In a 2019 paper, Adamson explains how these cancer-detecting algorithms learn. The computer is presented with images that are labeled “cancer” or “not cancer.” The algorithm then looks for patterns to help it discriminate. “The problem is that there is no single right answer to the question, ‘What constitutes cancer?’” Adamson writes. “Diagnoses of early-stage cancer made using machine-learning algorithms will undoubtedly be more consistent and more replicable than those based on human interpretation. But they won’t necessarily be closer to the truth—that is, algorithms may not be any better than humans at determining which tumors are destined to cause symptoms or death.”
But there’s also a chance AI might help address the problem of overdiagnosis. The Australian researchers I referenced above offer up this example: AI could use the information embedded in medical records to examine the trajectories of different patients’ cancers over time. In this scenario, it might become possible to identify which patients would not benefit from a diagnosis in the first place.
Adamson isn’t anti-AI. He sees value in simply adding a third category to the data that the algorithms learn from: “Maybe cancer.” This classification would encompass slides or images that provoke disagreement among experts. For those patients, “maybe you investigate treatments that are a bit more conservative.”
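Adamson’s suggestion amounts to changing how training labels are built. The sketch below is a minimal, hypothetical illustration of that idea, assuming slides are labeled by several experts: where the experts largely agree, the binary label stands; where they disagree, the disagreement itself becomes the label. The function name and agreement threshold are invented for illustration, not taken from his proposal.

```python
# A toy version of the "maybe cancer" third category: collapse several
# experts' reads of one slide into a single training label, reserving
# "maybe cancer" for slides that provoke disagreement.

from collections import Counter

def training_label(expert_calls: list[str], agreement_threshold: float = 0.75) -> str:
    """Turn multiple expert reads of one slide into one training label."""
    counts = Counter(expert_calls)
    label, votes = counts.most_common(1)[0]
    if votes / len(expert_calls) >= agreement_threshold:
        return label           # experts largely agree: keep the binary label
    return "maybe cancer"      # disagreement itself becomes the label

print(training_label(["cancer", "cancer", "cancer", "cancer"]))          # cancer
print(training_label(["cancer", "not cancer", "cancer", "not cancer"]))  # maybe cancer
```

A model trained on three classes like these could then flag the ambiguous middle group for more conservative follow-up, rather than forcing every borderline case into “cancer” or “not cancer.”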
So it’s probably too early to make a ruling on AI’s role in cancer diagnoses, but we should read any future claims about AI cancer screening with a more skeptical eye. For his part, Adamson is tired of seeing headlines trumpet the power of AI to catch more cancers. “People get duped by those kinds of headlines into thinking that finding more cancer is better,” he says. “I want to rip my hair out, if I had any.”
Last week I wrote about what you should know about this fall’s covid vaccines. This week, I have another story on the site about who is expected to benefit most from the vaccines, which were endorsed by the CDC on September 12.
Read more from Tech Review’s archive
When radiologists and AI work together, they can catch more breast cancer cases than either can on their own. Hana Kiros has the story.
AI might also hold promise for skin cancer, Megan Lewis reports.
In a previous version of The Checkup, Jessica Hamzelou discussed the downside of letting AI make medical decisions.
From around the web
A decongestant found in many over-the-counter cold and sinus medicines—phenylephrine—is safe, but it doesn’t work. At all. (Note: phenylephrine is different from pseudoephedrine, the ingredient found in over-the-counter cold medicines that are stored behind the pharmacy counter or in locked cabinets.) (New York Times)
Scientists are one step closer to growing human kidneys in pigs. The organs might one day be used for patients needing transplants. (CNN)
Covid is here to stay. Now we have to learn to cope. (STAT and Washington Post)
No end in sight for ADHD drug shortages. (NBC)