Artificial Intelligence

What Providers Can Do to Minimize AI-Based Image Reconstruction Risks

Distortions introduced by AI-based image reconstruction can lead to inaccurate diagnoses, and though the overall risk is low, providers need to be aware of the issue to ensure patient safety.

Artificial intelligence is increasingly being used to reconstruct images from data obtained during magnetic resonance imaging, computerized tomography, or other types of scans. While AI has been shown to improve the quality of scans and speed up reconstruction compared with standard algorithms, there are concerns that the new technique can distort images and lead to patient safety issues.

ECRI, an independent nonprofit focused on improving the quality of care, listed this concern as one of its top 10 health tech hazards for 2022.

"What we’re concerned about and why it’s in the top 10 is that AI is not a magic wand," said Jason Launders, director of operations at ECRI, in a phone interview. "You have to be very careful in how it’s used. The manufacturers have to be open and transparent as to exactly what are the limitations of their specific AI methodology."

Though some experts believe that this is not a significant issue in practice, there are still potential risks providers must consider when using AI algorithms for image reconstruction, including how the algorithm has been trained.

Limitations of AI-based image reconstruction

There are multiple reasons why AI is being used more often for imaging, including its ability to reduce the time taken for scans and improve the image quality without increasing radiation dose, Launders said.

But there are risks involved, including that AI is usually trained for specific use cases, explained Francisco Rodriguez-Campos, senior project officer at ECRI, in a phone interview.

"You think [about] the amount of data that will be required to train for every single possible combination of factors to be able to reconstruct those images," Rodriguez-Campos said. "That’s part of this big issue at hand."

If radiologists start using AI-based imaging technology outside the bounds within which it has been developed, there could be subtle changes to the images that the radiologist may overlook, Launders added.

For example, tiny perturbations during the image capture process may result in severe artifacts, which are features that appear in an image that are not present in the original object being scanned. This could obscure small structural changes, such as tumors, which can seriously impact diagnostic interpretation.
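This kind of perturbation amplification can be probed empirically. The sketch below is not from the article; it illustrates one way a provider or vendor might test the stability of a reconstruction model by comparing its output on clean versus slightly perturbed measurement data. The `reconstruct` callable, the perturbation scale, and the pass/fail interpretation are illustrative assumptions, not any specific product's API.

```python
import numpy as np

def perturbation_stability_check(reconstruct, measurements, epsilon=1e-3, seed=0):
    """Compare reconstructions from clean and slightly perturbed measurement data.

    `reconstruct` is any callable mapping raw scanner measurements (e.g. k-space
    or sinogram data) to an image array; here it stands in for a hypothetical
    AI-based reconstruction model.
    """
    rng = np.random.default_rng(seed)

    # Add a tiny perturbation, scaled relative to the measurement magnitude,
    # to mimic noise or small variations during image capture.
    noise = rng.standard_normal(measurements.shape)
    noise *= epsilon * np.linalg.norm(measurements) / np.linalg.norm(noise)
    perturbed = measurements + noise

    clean_image = reconstruct(measurements)
    perturbed_image = reconstruct(perturbed)

    # A stable method should change its output roughly in proportion to the
    # input change; a much larger ratio suggests the perturbation is being
    # amplified into artifacts.
    input_change = np.linalg.norm(noise) / np.linalg.norm(measurements)
    output_change = np.linalg.norm(perturbed_image - clean_image) / np.linalg.norm(clean_image)
    return output_change / input_change


if __name__ == "__main__":
    # Placeholder identity "reconstruction" for illustration only.
    fake_measurements = np.random.default_rng(1).standard_normal((128, 128))
    ratio = perturbation_stability_check(lambda m: m, fake_measurements)
    print(f"output/input change ratio: {ratio:.2f}")  # ~1.0 for this trivial case
```

In practice the same check could be run across many scans and perturbation levels to see whether a model's sensitivity stays bounded across the range of data it will actually encounter.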
