Hacking is not a new problem, but it continues to worsen as technology advances. Despite improvements designed to block hackers and cyberattacks, bad actors work every day to find loopholes and create new hacks. And as technology is used for more purposes, there are more opportunities for hacking to occur, including in medical settings.
Now a new study shows that hacking could occur in an unexpected place: medical images. Cyberattacks increasingly target hospitals and healthcare systems, and this is yet another way they could harm people.
One of the latest developments in medical technology is artificial intelligence systems that are capable of reading medical images such as mammograms with the aim of diagnosing cancer and other diseases. Researchers are now worried that such systems could be hacked or fooled by hacked images with the intention of harming individual patients.
Cyberattacks in a healthcare setting are harmful in a variety of ways. They can result in patient data being stolen or becoming inaccessible when it is needed. They can disrupt normal operations, leading to delays and mistakes in care. And in some cases, they could even be used to attack individual patients’ health directly. Hacked medical images could lead to incorrect diagnoses, causing patients to receive treatment they don’t need or to be declared healthy when they actually have cancer.
But why would a hacker want to, you ask? There are any number of reasons. A hacker may wish to harm a high-profile patient such as a politician, or to pressure a hospital into paying a ransom by repeatedly altering images to confuse its staff. A patient might even alter their own scans in hopes of collecting money from an insurance company or a disability program.
In the study, published in Nature Communications, researchers from the University of Pittsburgh were able to demonstrate that a computer program was capable of removing or adding evidence of cancer to mammograms. They designed the program themselves, used it to alter mammogram and x-ray images, and then had artificial intelligence systems and five radiologists read the images.
The changes often went undetected by both the artificial intelligence tool and the human radiologists: about 70 percent of the altered images fooled the AI, and 29 to 71 percent got past individual radiologists (some were better than others at spotting the manipulations).
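To give a sense of the general principle at work, here is a minimal sketch of an adversarial perturbation, the standard idea behind fooling image classifiers. This is not the Pittsburgh team’s method (they built a far more sophisticated program for realistic mammogram edits); it only illustrates, with a toy logistic-regression "classifier" on a flattened image and hypothetical weights, how tiny pixel changes aimed against a model’s gradient can flip its verdict.

```python
import numpy as np

# Toy stand-in for a medical-image classifier: logistic regression on a
# flattened "scan". Weights are random placeholders, not a real model.
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # hypothetical classifier weights
b = 0.0

def predict(x):
    """Probability the 'scan' is read as showing cancer."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A "scan" the toy classifier confidently labels positive: pixels
# slightly aligned with the weight vector.
x = 0.05 * np.sign(w)
p_before = predict(x)     # well above 0.5

# Adversarial step (FGSM-style): nudge every pixel a small amount in
# the direction that lowers the cancer score. For logistic regression
# the input gradient of the score is proportional to w, so the nudge
# is simply -eps * sign(w).
eps = 0.12
x_adv = x - eps * np.sign(w)
p_after = predict(x_adv)  # now well below 0.5
```

The per-pixel change (0.12 in arbitrary units here) is small, which is why such edits can slip past human reviewers even as they flip the model’s output.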
“We hope that this research gets people thinking about medical AI model safety and what we can do to defend against potential attacks,” says study author Shandong Wu, associate professor of radiology, biomedical informatics, and bioengineering at the University of Pittsburgh.
In 2019, a team of cybersecurity researchers did a similar study to show that hackers could feasibly add or remove evidence of lung cancer from CT scan images. The changes they made to their test images also fooled human radiologists and AI programs. It’s likely that there are a wide variety of scans and data that could be changed by a hacker who wished to harm someone.
So far, this type of hacking is not known to have occurred in the real world. However, a growing body of research shows that healthcare organizations need to be prepared for such attacks.
It’s important that healthcare systems and designers of AI models become more aware of this type of hacking and do everything they can to prevent it and protect patients from it. It’s also vital that radiologists and AI models be shown examples of this type of image manipulation during training so that they can learn to spot fake or altered images.