The future of healthcare is dependent on securing AI-powered medical devices
As more connected medical devices are built on AI, cybersecurity risks will increase as well – and it’s more important than ever before for manufacturers to implement advanced security protections in the design phase to ensure the safety of healthcare organizations, providers and patients.
Investments in artificial intelligence and machine learning are finally on the rise in healthcare.
While the industry has been slow to adopt AI in comparison to other sectors like financial services and manufacturing – with 70% of health systems yet to establish a formal program – a recent survey found that 68% of health system executives plan to invest more in AI in the next five years to help reach their strategic goals. And the investments are expected to be significant; the global AI in healthcare market size is estimated to reach $120.2 billion by 2028.
The opportunities for AI in healthcare are widespread, spanning both operational and clinical use cases including fraud prevention, voice-assisted charting, registration, remote patient monitoring and more. AI holds particular promise for connected medical devices and telehealth – an integral part of the Internet of Medical Things (IoMT) – as it enables faster triage, intake, detection and decision making.
In fact, new patient apps and connected medical devices leveraging AI are already being launched regularly. For example, Google recently introduced a new AI-powered dermatology app that uses image recognition algorithms to provide expert, personalized help by suggesting possible skin conditions based on patient-uploaded photos. A Philips device leverages insights from AI to diagnose and treat oncology patients. And Amwell’s new telehealth platform enables providers to receive alerts on their patients’ health status via an AI-powered, automated real-time early warning score system.
While there is significant potential for AI in healthcare, there are also limitations. One challenge that has not yet been widely discussed is how best to secure AI-powered connected medical devices against increasingly frequent and complex cyberattacks.
Securing the IoMT in the age of AI is imperative
While AI can be – and often has been – used for good, it can also be used to discover and exploit vulnerabilities. For example, the same type of algorithm being implemented in a medical device to more accurately and quickly diagnose cancer may also be used by a bad actor to attack that device. To illustrate, a 2019 study from Ben-Gurion University demonstrated how AI-savvy hackers could manipulate the CT and MRI results of lung cancer patients – gaining complete control over the number, size and location of tumors.
Neither radiologists nor AI algorithms were able to differentiate between the altered and the authentic scans. This kind of tampering has the potential to impact patient lives, and can also result in insurance fraud, ransomware attacks and other issues for both patients and providers.
Bad actors often need little more than an emulator – which enables one computer system to behave like another – and a piece of code from the targeted system to successfully program AI to hack a device.
Cyber threats are clearly a significant and growing challenge for connected industries. In 2019 alone, cyberattacks on IoT devices increased dramatically, accounting for more than 2.9 billion events. And with an estimated 50 billion medical devices expected to be connected to clinical systems within the next 10 years, the IoMT is an increasingly opportune target for hackers. Despite the repercussions of an attack, data shows that many manufacturers struggle to practice Security by Design due to a shortage of knowledge and know-how. According to a recent survey we conducted, only 13% of IoMT leaders believe their business is very prepared to mitigate future risks, while 70% believe they are, at best, only somewhat prepared.
However, there are steps manufacturers can take to protect their devices from the start.
How to ensure AI-enabled devices are secure
Although AI and machine learning models are expensive and time-intensive to create, once built they are easy to replicate. Restricting access to a system is thus a critical first step in protecting it from adversaries.
To successfully attack a system built on AI, bad actors need access to the system's data – or a digital twin of it – for their algorithms to process. In most cases, machine learning 'lifting', or emulation, is possible because the automated system will answer thousands of queries without flagging them as a potential threat; with those answers in hand, bad actors can easily use AI to replicate the system or program, even a complex medical device's software or process.
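A toy sketch makes this concrete. The `device_model` below is a hypothetical stand-in for a proprietary diagnostic rule (the threshold and names are made up for illustration): an attacker who can only query it, never read its code, can still recover the hidden decision boundary from the answers alone and build a functionally equivalent replica.

```python
def device_model(reading: float) -> bool:
    """Stand-in for a proprietary diagnostic rule; the attacker
    cannot see this code, only submit queries to it."""
    return reading > 42.7  # hidden decision threshold


def lift_threshold(query, lo: float = 0.0, hi: float = 100.0,
                   n_queries: int = 50) -> float:
    """'Lift' the hidden threshold via binary search over query answers.

    Invariant: query(lo) is False and query(hi) is True, so the
    threshold always lies inside [lo, hi].
    """
    for _ in range(n_queries):
        mid = (lo + hi) / 2
        if query(mid):
            hi = mid  # mid already triggers a positive -> threshold is lower
        else:
            lo = mid
    return (lo + hi) / 2


stolen = lift_threshold(device_model)

# A replica built purely from query answers now matches the original:
def replica(reading: float) -> bool:
    return reading > stolen
```

Fifty unflagged queries suffice to pin the threshold down to floating-point precision – which is exactly why the access controls and anomaly detection discussed next matter.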
Limiting access is thus crucial, and includes a few steps:
- Build access control layers, such as logins and passwords, to ensure that only authorized users can see the information. This is equivalent to putting a lock on a door.
- Add anomaly detection to spot usage that deviates from the normal communication pattern. This type of protection identifies unusual activity so that the organization can act accordingly. For example, an unusual pattern might be a bot making a high number of requests. In this way, security professionals can help distinguish between someone legitimately using the system or device, and someone who is interrogating it.
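For the first step, a minimal access-control sketch might look like the following. The helper names are hypothetical; the point is the two standard ingredients – storing only a salted hash rather than the password itself, and comparing in constant time – both available in Python's standard library.

```python
import hashlib
import hmac
import os


def hash_password(password: str, salt: bytes = None):
    """Derive a salted hash; only (salt, digest) is ever stored."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest


def verify(password: str, salt: bytes, expected: bytes) -> bool:
    """Check a login attempt against the stored salted hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, expected)


# Enrollment: hash once, store salt + digest, discard the plaintext.
salt, stored = hash_password("correct horse battery staple")
```

A real device would layer this with per-role authorization and account lockout, but even this minimal lock on the door stops casual interrogation of the system.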
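The second step – flagging a bot making a high number of requests – can be sketched as a simple sliding-window rate detector. The window size and limit below are illustrative assumptions, not recommended values:

```python
from collections import defaultdict, deque


class RateAnomalyDetector:
    """Flag clients whose request rate exceeds the normal usage pattern."""

    def __init__(self, window_seconds: float = 60.0, max_requests: int = 30):
        self.window = window_seconds
        self.limit = max_requests
        # Per-client timestamps of recent requests.
        self.history = defaultdict(deque)

    def record(self, client_id: str, timestamp: float) -> bool:
        """Record one request; return True if the client looks anomalous."""
        q = self.history[client_id]
        q.append(timestamp)
        # Drop requests that have aged out of the sliding window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.limit
```

A clinician checking readings every few seconds stays under the limit, while an automated system firing thousands of queries – the 'lifting' pattern described above – trips it almost immediately, letting the organization throttle or block the client before a model can be replicated.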
Beyond access control and anomaly detection, it’s also important to harden connected devices against reverse engineering. Manufacturers can use many different tactics and solutions to make the code in their devices difficult to reverse engineer and thereby help keep them secure.
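One of the simpler tactics in that toolbox is to keep sensitive string constants from appearing verbatim in the shipped artifact. The sketch below illustrates the idea in Python with a made-up key and endpoint; production devices would pair this with far stronger measures such as code signing, packing and control-flow obfuscation.

```python
_KEY = 0x5A  # illustrative single-byte mask, not a real secret


def mask(s: str) -> bytes:
    """XOR-mask a string so it is not stored as a readable literal."""
    return bytes(b ^ _KEY for b in s.encode())


def unmask(blob: bytes) -> str:
    """Reverse the mask at runtime, only when the value is needed."""
    return bytes(b ^ _KEY for b in blob).decode()


# Shipped in the artifact as opaque bytes rather than plain text:
_ENDPOINT = mask("https://device.example/api")

# Reconstructed in memory only at the moment of use:
endpoint = unmask(_ENDPOINT)
```

This alone will not stop a determined reverse engineer, but removing obvious string landmarks raises the cost of static analysis, which is the cumulative goal of these hardening tactics.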
All of these protections should be built into devices during the original R&D process; adding cybersecurity once a product is already on the market is a far more arduous task.
Additionally, it’s important for medtech manufacturers to ensure the regulatory readiness of their medical devices, particularly as the regulatory landscape continues to evolve. While 80% of medtech executives believe that regulatory compliance is the biggest business benefit of implementing a strong cybersecurity strategy, only four in 10 respondents rated themselves very aware or knowledgeable about forthcoming EU and U.S. cybersecurity regulations. Leveraging an assessment tool can help manufacturers examine their regulatory preparedness and identify any weak spots so they can address them before the device goes to market.
Machine learning can be used for good and, unfortunately, for nefarious purposes alike. As more connected medical devices are built on AI, cybersecurity risks will grow in step – which is why manufacturers must implement advanced security protections in the design phase to ensure the safety of healthcare organizations, providers and patients.