
Problems with Using AI for Medical Imaging

Zoe Devine

[Image: Ezra Flash AI enhancement of an MRI]

Gal, Emi. "The Secret Ezra Master Plan (just between you and me)." ezra, 1 June 2023, https://ezra.com/blog/the-secret-ezra-master-plan-just-between-you-and-me.

Healthcare is rapidly evolving, and new research is introducing the idea of integrating artificial intelligence into the analysis of medical images. The appeal is speed: rapid, accurate analysis of X-rays, MRIs, and CAT scans can lead to quicker diagnoses and allow for earlier intervention and treatment. The Food and Drug Administration (FDA) has already cleared Ezra Flash, an artificial intelligence software that enhances the quality of MRI images, for market (Hall, 2023). Ezra promises the software will shorten MRI scan and analysis times and produce clearer MRI images. Still, this software also raises serious concerns within the medical community. Using AI to analyze medical images such as X-rays, MRIs, and CAT scans may seem more efficient and reliable than relying on human radiologists. However, this approach increases the rate of false positives and false negatives, forces radiologists to integrate AI into their practices, and raises serious concerns about patient privacy.

Using AI to analyze medical images increases the rate of false positives and false negatives. One of the main issues is that a tiny, undetectable disturbance in an image or in the sampling domain can cause the AI to make a mistake when reconstructing the data (Antun et al., 2020). Dr. Ben Adcock of Simon Fraser University explains that "AI techniques are highly unstable in medical imaging, so that small changes in the input may result in big changes in the output" ("AI techniques in medical imaging may lead to incorrect diagnoses," 2020). When radiologists used images analyzed and reconstructed by AI for lung cancer detection with chest radiography, false-negative and false-positive rates among the radiologists increased; when the AI's analysis was deleted from a patient's file, false positives decreased. In other words, the AI's output biases radiologists toward wrong conclusions about a patient's health that they would not have reached on their own. In another study, two sets of radiologists interpreted mammograms, one using computer-aided detection (CAD) and one without. When CAD returned a false negative, only 21% of the radiologists using it indicated cancer was present, compared with 46% of the radiologists reading without CAD (Bernstein et al., 2023).
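To make this instability concrete, below is a minimal Python sketch. It uses a toy linear detector, not the reconstruction networks from the cited studies, and every detail (the model, the image, the threshold) is hypothetical. It shows how a perturbation of half a percent of the image's intensity range, far too small to see, can push a model's score across its decision boundary.

```python
import numpy as np

# Minimal sketch (not any study's actual model): a toy linear "detector"
# flips its decision after a perturbation of 0.5% of the intensity range,
# illustrating the instability Antun et al. (2020) describe.

rng = np.random.default_rng(0)
n = 64 * 64                                 # a 64x64 grayscale "scan", flattened
w = rng.standard_normal(n)                  # hypothetical detector weights

image = 0.5 + rng.uniform(-0.05, 0.05, n)   # toy scan, pixel values in [0, 1]

def detect(x):
    """Score > 0 means 'finding present'; the detector mean-centers its input."""
    return w @ (x - 0.5)

score = detect(image)

# Adversarial-style perturbation: nudge every pixel by eps in the direction
# that moves the score across the decision boundary.
eps = 0.005                                 # 0.5% of the intensity range
delta = -np.sign(score) * eps * np.sign(w)
flipped = detect(image + delta)

print(f"original score:   {score:+.2f}")
print(f"perturbed score:  {flipped:+.2f}")   # opposite sign: the decision flips
print(f"max pixel change: {np.abs(delta).max():.4f}")
```

With these toy numbers, the perturbation shifts the score by roughly eps times the sum of the weight magnitudes (about 16 units), far more than the original score's typical size, while no pixel changes by more than 0.005. Real reconstruction networks are far more complex, but the studies above report the same qualitative behavior: tiny input changes, large output changes.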


When AI reconstructs an image, it can easily produce a result that omits a tumor visible in the original. Radiologists are trained specifically to find these tumors and diagnose patients, and incorporating AI into this process hurts more than it helps. The visual judgment radiologists use to search for tumors and other findings cannot be fully expressed as the explicit rules an AI follows. AI lacks human reasoning, which limits its reliability in the medical field. The slightest mistake can change a patient's life, which is why the most reliable method must be used to diagnose life-threatening diseases.

[Image: AI used to enhance an image from a mammogram]

"Breast cancer dataset annotation via V7 bounding box tool." V7, 9 November 2022, https://www.v7labs.com/blog/ai-in-radiology.

[Image: Example of a low-dose contrast-enhanced MRI from a patient with meningioma]

Although this tumor is large enough and the image clear enough for the AI to analyze and reconstruct, smaller tumors and blurrier images may lead the AI to mistake a tumor for normal brain tissue.

"low-dose contrast-enhanced MRI." V7, 9 November 2022, https://www.v7labs.com/blog/ai-in-radiology.

Another concern regarding the use of AI in radiology is how it will affect experienced radiologists. Many people fear automation will eliminate radiology jobs and force radiologists to learn new skills built around AI. This has already happened in other industries: companies like MSN, Google, and Salesforce have laid off hundreds of workers and replaced them with AI (O'Sullivan, 2024). Although AI won't replace human radiologists outright, radiologists will be forced to integrate AI into their practices, threatening the careers of those who refuse to work with it (Davenport & Dreyer, 2018). This is troubling because these AI technologies have proven inaccurate and, as explained above, can lead radiologists to make mistakes.


Because the use of AI in radiology has grown exponentially in the past few years, many medical students are choosing not to specialize in radiology (Liu et al., 2023). They fear AI will drastically change the field in ways medical school isn't preparing them for. As a result, experienced radiologists and new graduates alike will not know how to integrate AI into their analysis and diagnosis process, which could endanger patients (Mousavi Baigi et al., 2023). Radiologists may also fear being held responsible if an AI program misdiagnoses a patient. Evidence shows that radiologists provide more accurate results without AI, so they shouldn't have to adapt to integrating these unfamiliar systems.

[Image: Canadian medical students' preference for the radiology specialty increases by 20.3% when AI's impact is not a consideration]

"Influence of Artificial Intelligence on Canadian Medical Students' Preference for Radiology Specialty: A National Survey Study." ScienceDirect, April 2019, https://www.sciencedirect.com/science/article/pii/S1076633218304719.

Will people want to go to the doctor if they are afraid their private information will be used against them?

People already fear AI because of privacy concerns, but they may not realize how much data AI-driven medical imaging technology collects. Microsoft faced significant backlash when it was revealed that its database of around 10 million facial photographs was being used by organizations like IBM and Panasonic, as well as Chinese surveillance firms. The people in the database had no idea these organizations were using their images (Pearce, 2021). The issue becomes even more serious when AI programs get hold of medical images from MRIs, CAT scans, and X-rays: patients may not be aware that these images will be reused in future clinical datasets. If this seems like a smaller problem than Microsoft's database, that is only because of the myth of anonymization. Yvonne Lui, MD, a radiologist at NYU Langone Health, explains that "A Mayo Clinic study showed that in 85% of cases, standard facial recognition software was able to identify the research volunteers based on their MRI reconstruction." She adds that this is easy to do and requires nothing more than downloading an app on a phone (Klenske, 2021). In other words, these AI systems not only hold MRI images but can also use them to reconstruct facial features, which any organization with access to the data could exploit.
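To illustrate how little effort the re-identification Lui describes might take, here is a minimal sketch using the open-source face_recognition library. This is an assumption for illustration: the Mayo Clinic study used standard commercial facial recognition software, not necessarily this library, and the file names below are hypothetical. The sketch compares a face encoding from an ordinary photo against one computed from a 3D surface rendering of a head MRI.

```python
import face_recognition

# Minimal sketch of the "myth of anonymization": off-the-shelf face matching
# between an ordinary photo and a rendered MRI face reconstruction.
# File names are hypothetical; the cited study used commercial software.

photo = face_recognition.load_image_file("volunteer_photo.jpg")
mri_render = face_recognition.load_image_file("mri_face_reconstruction.png")

photo_encodings = face_recognition.face_encodings(photo)
mri_encodings = face_recognition.face_encodings(mri_render)

if photo_encodings and mri_encodings:
    # compare_faces returns a boolean per known encoding; face_distance
    # returns a similarity distance (lower means more alike).
    match = face_recognition.compare_faces([photo_encodings[0]], mri_encodings[0])[0]
    distance = face_recognition.face_distance([photo_encodings[0]], mri_encodings[0])[0]
    print(f"match: {match}, distance: {distance:.2f}")
else:
    print("no face detected in one of the images")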

Since the use of AI is growing rapidly, especially in radiology, it is essential to understand the risks of using AI and why the field is better off without these programs. Although AI systems like Ezra Flash claim to be more efficient and accurate and hope to become a cheaper option for patients, they have proven less accurate than experienced radiologists. AI also threatens the careers of radiologists who refuse to incorporate it into their practices and poses major privacy concerns for patients. The field of radiology is better off without AI: radiologists won't be influenced by inaccurate AI analysis and reconstruction, they won't be forced to use systems they have no training or prior knowledge of, and patients will know that outside organizations won't have access to their medical images and data.

Works Cited

Antun, V., Renna, F., Poon, C., & Hansen, A. C. (2020, May 11). On instabilities of deep learning in image reconstruction and the potential costs of AI. PNAS. https://www.pnas.org/


Bernstein, M. H., Atalay, M. K., Dibble, E. H., Maxwell, A. W. P., Karam, A. R., Agarwal, S., Ward, R. C., Healey, T. T., & Baird, G. L. (2023, November). Can incorrect artificial intelligence (AI) results impact radiologists, and if so, what can we do about it? A multi-reader pilot study of lung cancer detection with chest radiography. European Radiology. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10235827/

Davenport, T. H., & Dreyer, K. J. (2018, March 27). AI will change radiology, but it won’t replace radiologists. Harvard Business Review. https://hbr.org/2018/03/ai-will-change-radiology-but-it-wont-replace-radiologists

Hall, J. (2023, June 1). FDA clears AI software that may lead to 30-minute full-body MRI exams. Diagnostic Imaging. https://www.diagnosticimaging.com/view/fda-clears-ai-software-that-may-lead-to-30-minute-full-body-mri-exams

Klenske, N. (2021, February 15). Protecting patient privacy in the era of artificial intelligence. RSNA. https://www.rsna.org/news/2021/february/protecting-patient-privacy

Mousavi Baigi, S. F., Sarbaz, M., Ghaddaripouri, K., Ghaddaripouri, M., Mousavi, A. S., & Kimiafar, K. (2023). Attitudes, knowledge, and skills towards artificial intelligence among healthcare students: A systematic review. Health Science Reports, 6(3). https://doi.org/10.1002/hsr2.1138

O’Sullivan, I. (2024, February 6). Companies that have already replaced workers with AI. Tech.co. https://tech.co/news/companies-replace-workers-with-ai

Pearce, G. (2021, May 28). Beware the privacy violations in artificial intelligence applications. ISACA. https://www.isaca.org/resources/news-and-trends/isaca-now-blog/2021/beware-the-privacy-violations-in-artificial-intelligence-applications
