Human radiologists don’t reliably recognize fake images either
Artificial intelligence programs that check medical images for evidence of cancer can be duped by hacks and cyberattacks, according to a new study. Researchers demonstrated that a computer program could add or remove evidence of cancer from mammograms, and those changes fooled both an AI tool and human radiologists.
That could lead to an incorrect diagnosis. An AI program helping to screen mammograms might say a scan is healthy when there are actually signs of cancer or incorrectly say that a patient does have cancer when they’re actually cancer free. Such hacks are not known to have happened in the real world yet, but the new study adds to a growing body of research suggesting healthcare organizations need to be prepared for them.
Hackers are increasingly targeting hospitals and healthcare institutions with cyberattacks. Most of the time, those attacks siphon off patient data (which is valuable on the black market) or lock up an organization’s computer systems until the organization pays a ransom. Both types of attacks can harm patients by gumming up operations at a hospital and making it harder for healthcare workers to deliver good care.
But experts are also growing more worried about the potential for more direct attacks on people’s health. Security researchers have shown that hackers can remotely break into internet-connected insulin pumps and deliver dangerous doses of insulin, for example.
Hacks that can change medical images and affect a diagnosis also fall into that category. In the new study on mammograms, published in Nature Communications, a research team from the University of Pittsburgh designed a computer program that added the appearance of cancer to mammograms that originally showed no signs of it and removed the signs of cancer from mammograms that did. They then fed the tampered images to an artificial intelligence program trained to spot signs of breast cancer and asked five human radiologists to decide whether the images were real or fake.
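The paper’s own image-manipulation program isn’t spelled out in implementation detail here, but a minimal sketch of the general idea, assuming a PyTorch image classifier, is a fast-gradient-sign perturbation: nudge every pixel slightly in the direction that pushes the model’s prediction away from the correct label. The classifier CancerScreeningNet, the checkpoint file, and the benign/malignant labels below are hypothetical placeholders, and this is a generic illustration of adversarial image manipulation rather than the authors’ method.

# Minimal sketch of a generic adversarial perturbation (FGSM), assuming a
# PyTorch classifier; not the study's actual image-manipulation method.
import torch
import torch.nn.functional as F

def make_adversarial(model, image, true_label, epsilon=0.02):
    """Return a copy of `image` nudged so the model leans away from the true label."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                       # e.g. shape (1, 2): benign vs. malignant
    loss = F.cross_entropy(logits, true_label)  # loss with respect to the correct label
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()   # keep pixel values in a valid range

# Hypothetical usage: push a scan labeled benign (0) toward a malignant prediction.
# model = CancerScreeningNet(); model.load_state_dict(torch.load("screening.pt"))
# fake = make_adversarial(model, mammogram_tensor, torch.tensor([0]))

With a small epsilon, a perturbation like this is nearly imperceptible to the eye, which is one reason adversarial images can be hard to catch.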
Around 70 percent of the manipulated images fooled the AI program: it wrongly reported that images altered to look cancer-free were cancer-free and that images altered to look cancerous showed evidence of cancer. The radiologists were more uneven; some were better at spotting manipulated images than others, and their accuracy at picking out the fakes ranged widely, from 29 percent to 71 percent.
Other studies have also demonstrated the possibility that a cyberattack on medical images could lead to incorrect diagnoses. In 2019, a team of cybersecurity researchers showed that hackers could add or remove evidence of lung cancer from CT scans. Those changes also fooled both human radiologists and artificial intelligence programs.
There haven’t been public or high-profile cases where a hack like this has happened. But there are a few reasons a hacker might want to manipulate things like mammograms or lung cancer scans. A hacker might be interested in targeting a specific patient, like a political figure, or they might want to alter their own scans to get money from their insurance company or sign up for disability payments. Hackers might also manipulate images randomly and refuse to stop tampering with them until a hospital pays a ransom.
Whatever the reason, demonstrations like this one show that healthcare organizations and people designing AI models should be aware that hacks that alter medical scans are a possibility. Models should be shown manipulated images during their training to teach them to spot fake ones, study author Shandong Wu, associate professor of radiology, biomedical informatics, and bioengineering at the University of Pittsburgh, said in a statement. Radiologists might also need to be trained to identify fake images.
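Wu’s suggestion of showing models manipulated images during training can be read as a data-augmentation step, and a minimal sketch of that idea in PyTorch follows. The pre-generated dataset of tampered scans, the three-class labeling scheme (benign, malignant, tampered), and the training loop are illustrative assumptions, not the study’s implementation.

# Minimal sketch, assuming PyTorch, of training on a mix of real and tampered
# scans so the model can learn to flag manipulated images as their own class.
import torch
import torch.nn.functional as F
from torch.utils.data import ConcatDataset, DataLoader

BENIGN, MALIGNANT, TAMPERED = 0, 1, 2  # hypothetical three-class labeling

def train_epoch(model, optimizer, clean_dataset, tampered_dataset, batch_size=32):
    # Mix real mammograms with pre-generated manipulated ones in every epoch;
    # the tampered dataset is assumed to yield the label TAMPERED.
    loader = DataLoader(ConcatDataset([clean_dataset, tampered_dataset]),
                        batch_size=batch_size, shuffle=True)
    model.train()
    total_loss = 0.0
    for images, labels in loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels)  # standard classification loss
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    return total_loss / len(loader)

A model trained this way learns both to diagnose and to raise a flag when a scan looks tampered with, which is one way to act on Wu’s suggestion rather than simply hoping the diagnosis itself is robust.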
“We hope that this research gets people thinking about medical AI model safety and what we can do to defend against potential attacks,” Wu said.