The project began with a vexing problem. Imaging tests that turned up unexpected issues — such as suspicious lung nodules — were being overlooked by busy caregivers, and patients who needed prompt follow-up weren’t getting it.
After months of discussion, the leaders of Northwestern Medicine coalesced around a heady solution: Artificial intelligence could be used to identify these cases and quickly ping providers.
If only it were that easy.
It took three years to embed AI models that flag lung and adrenal nodules into clinical practice, requiring thousands of work hours from employees across the organization — radiologists, human resources specialists, nurses, primary care doctors, and IT experts. Developing accurate models was the least of their problems. The real challenge was building trust in the models' conclusions and designing a system to ensure the tool's warnings didn't just lead providers to click past a pop-up, but instead translated into effective, real-world care.
“There were so many surprises. This was a learning experience every day,” said Jane Domingo, a project manager in Northwestern’s office of clinical improvement. “It’s amazing to think of the sheer number of different people and expertise that we pulled together to make this work.”