The Double-Edged Sword of Artificial Intelligence in Healthcare Cybersecurity

Artificial intelligence (AI) offers incredible potential to countless industries, with healthcare topping the list. As AI grows in popularity for simplifying workflows and even diagnosing patients, healthcare leaders should understand that AI adoption is also rising among cyber attackers. For healthcare to leverage AI safely and effectively, leaders must understand both how bad actors can use AI to target healthcare organizations and carry out attacks, and how attackers can penetrate the AI systems that healthcare organizations rely on.

What is Artificial Intelligence (AI)? 

IBM defines artificial intelligence (AI) as “a field that combines computer science and robust datasets to enable problem-solving.” AI has proven useful in many scenarios, making it more widely used than ever. The AI market was valued at $328.34 billion in 2021 and is projected to grow from $387.45 billion in 2022 to $1,394.30 billion by 2029. Given this widespread adoption, we must understand both the extent of what AI can do and how to use it responsibly.

Uses and Benefits of Artificial Intelligence in Healthcare 

AI has recently become more common in the healthcare industry because it makes data more accessible, processes more efficient, and complex tasks simpler. AI can deliver real-time data to patients and physicians, including real-time analytics and mobile alerts, helping physicians diagnose and address medical issues accurately and in a timely manner. Additionally, AI can automate simple tasks, giving medical professionals more time to assess patients, diagnose illnesses, and treat patients appropriately. By helping medical professionals work more efficiently, AI supports cost savings throughout the industry.

Artificial intelligence has also shown the capability to diagnose certain illnesses using imaging systems. For example, an AI system identified growing prostate cancer on an MRI scan with accuracy similar to that of a radiologist with 15 years of experience, suggesting the potential for AI to be used extensively in diagnostics. While AI has shown clear benefits in healthcare thus far, research is ongoing to establish the full extent of its use cases within healthcare.
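To make the idea concrete, here is a minimal, purely illustrative sketch of the supervised-learning pattern behind such diagnostic tools. The synthetic “scans,” labels, and simple model below are assumptions for demonstration only; real systems train deep networks on large, expert-labeled MRI datasets.

```python
# Illustrative sketch only: a toy binary "suspicious vs. normal" image classifier.
# All data here is synthetic; real diagnostic AI uses deep models and real scans.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Pretend each "scan" is a 32x32 grayscale image flattened to 1024 features.
n = 500
X = rng.normal(size=(n, 32 * 32))
y = rng.integers(0, 2, size=n)        # 1 = suspicious finding, 0 = normal
X[y == 1, :50] += 0.8                 # plant a synthetic signal in positive scans

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)

# Clinically, a probability like this would only flag scans for radiologist review.
probs = clf.predict_proba(X_te)[:, 1]
print(f"AUC on held-out scans: {roc_auc_score(y_te, probs):.2f}")
```

In practice, a score like this supports, rather than replaces, a radiologist: the model surfaces likely findings, and a clinician makes the diagnosis.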

How AI is Used in Cybersecurity

AI has proven to be a valuable tool in cybersecurity, with many products on the market using this advanced technology. AI-based tools can improve threat detection and protect systems and their data across many product categories, including antivirus/antimalware, data loss prevention, fraud detection/anti-fraud, identity and access management, intrusion detection/prevention systems, and risk and compliance management.
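As a simple illustration of the anomaly-detection idea behind many of these products, the sketch below trains a model on “normal” network sessions and flags sessions that deviate from that baseline. The features and values are invented for the example; production tools draw on far richer telemetry.

```python
# A minimal sketch of AI-assisted anomaly detection, the core idea behind many
# AI-based intrusion detection products. Features and numbers are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Each row is one network session: [bytes sent, bytes received, duration (s)].
normal_traffic = rng.normal(loc=[500, 2000, 30], scale=[100, 400, 10], size=(1000, 3))
model = IsolationForest(contamination=0.01, random_state=42).fit(normal_traffic)

# Score new sessions; a prediction of -1 marks an outlier worth analyst attention.
new_sessions = np.array([
    [520, 2100, 28],      # typical session
    [90000, 150, 600],    # large upload, long duration: possible exfiltration
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    status = "ANOMALY - review" if label == -1 else "normal"
    print(session, status)
```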

SaaS platforms like Clearwater’s IRM|Analysis® leverage AI to help organizations make better decisions about where risks lie and what controls should be implemented to mitigate them. IRM|Analysis specifically leverages AI to deliver predictive risk ratings, drawing upon millions of risk scenarios analyzed within the software over time. AI features like this help organizational leaders make better risk-rating decisions and maximize limited resources by freeing analysts and managers to do higher-order work rather than getting bogged down in data preparation.
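As a generic illustration of the predictive risk-rating concept (this is not Clearwater’s implementation; the features, data, and model below are invented), a system can learn from scenarios analysts have already rated and then suggest ratings for new ones.

```python
# Generic sketch of predictive risk ratings: train on historical, analyst-rated
# risk scenarios, then suggest a rating for a new scenario. Hypothetical only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)

# Hypothetical features per scenario:
# [asset criticality 1-5, threat likelihood 1-5, control strength 1-5]
X_hist = rng.integers(1, 6, size=(800, 3))
# Analyst rating: high risk when criticality + likelihood outweigh controls.
y_hist = (X_hist[:, 0] + X_hist[:, 1] - X_hist[:, 2] >= 5).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=7).fit(X_hist, y_hist)

# A new scenario: critical asset, likely threat, weak controls.
new_scenario = np.array([[5, 4, 1]])
print("Suggested rating:", "High" if model.predict(new_scenario)[0] else "Low")
```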

While we have a long way to go to understand AI fully, some interesting approaches are already in use in the cybersecurity industry. Most often, AI is used to detect and protect against attacks. On the detection side, AI systems help reduce noise and surface focused tasks for cybersecurity experts. AI also helps prioritize responses and can even drive semi-automated responses that stop attacks in progress. Lastly, AI is used to analyze attackers’ actions to better understand and predict their next move, helping security professionals proactively protect their systems and data.
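The sketch below shows how noise reduction, prioritization, and semi-automated response might fit together in an alert queue. The alerts, scores, and thresholds are hypothetical placeholders, not any particular product’s logic.

```python
# Sketch of AI-assisted alert triage: rank alerts by a model-style risk score,
# auto-contain only the highest-confidence cases, and leave the rest for humans.
alerts = [
    {"id": 1, "desc": "Impossible-travel login", "score": 0.94},
    {"id": 2, "desc": "Port scan from guest Wi-Fi", "score": 0.41},
    {"id": 3, "desc": "Ransomware-like file renames", "score": 0.99},
    {"id": 4, "desc": "Failed login burst", "score": 0.22},
]

AUTO_CONTAIN = 0.95  # act without waiting for an analyst above this score

for alert in sorted(alerts, key=lambda a: a["score"], reverse=True):
    if alert["score"] >= AUTO_CONTAIN:
        action = "auto-isolate host, then notify analyst"  # semi-automated response
    elif alert["score"] >= 0.5:
        action = "top of analyst queue"                    # prioritization
    else:
        action = "log for later review"                    # noise reduction
    print(f"[{alert['score']:.2f}] {alert['desc']}: {action}")
```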

Based on the technological advances and capabilities of AI in cybersecurity in recent years, Acumen Research and Consulting estimates that the global market for these tools was $14.9 billion in 2021 and will reach $133.8 billion by 2030. This growth can be attributed to several industry trends, including rising attack volumes and the need to protect at-home workers, whose numbers surged during the COVID-19 pandemic.

How Malicious Actors are Using AI

Unfortunately, bad actors are also taking advantage of AI’s capabilities and using them maliciously. For example, AI can identify systems’ weaknesses by recognizing certain patterns. Exploiting these weaknesses can expose systems to additional threats, expose sensitive data, and potentially cause harm throughout the system. A malicious user who gains entry undetected can sit dormant or, worse, set up backdoors and other connections that worsen the effects of a later attack. Additionally, research has shown that AI-generated phishing emails are opened at higher rates, thanks to AI’s ability to recognize patterns and target users accordingly.

Cybercriminals are also leveraging AI to write malicious code. ChatGPT, an AI-powered chatbot hailed by many for its ability to answer questions, write, and even program computers, is also being used by cyberattackers to develop ransomware and malware. Even more troubling is evidence that inexperienced cyberattackers are using ChatGPT for this purpose, indicating that this specific AI tool may lower the entry barrier to cybercriminal activity.

The unvetted, publicly accessible nature of ChatGPT is particularly concerning when it is used by healthcare professionals, like the physician who used it to write an approval request to UnitedHealthcare. While not a direct violation of HIPAA, using unvetted AI technology like ChatGPT introduces heightened privacy and security risks. Clearwater Chief Risk Officer Jon Moore says it’s important that healthcare employees and medical providers remember that “Most, if not all, technologies can be used for good or evil, and ChatGPT is no different.”

Using AI to create and execute fully autonomous cyberattacks is considered rare today, but according to this recent report, that is likely to change within the next five years. Cyber attackers will likely soon be able to leverage AI to plan and carry out attacks with greater stealth, gather and mine data from infected systems, and increase the impact of their attacks.

Another way malicious actors are weaponizing AI is by infecting AI systems themselves. A malicious actor who gains access to an AI-enabled system can inject false data so the AI no longer works as intended; a system that began as a legitimate use of AI can thus be turned malicious. By poisoning an AI-enabled system, a malicious actor can cause further harm while remaining undetected. Malicious actors can apply AI in multiple ways, including building better malware, mounting stealth attacks, guessing passwords, impersonating humans, and creating penetration testing tools.
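The toy example below illustrates why this kind of data poisoning is dangerous: a small, targeted change to training labels quietly degrades the model. Everything here is synthetic and simplified; the point is that AI pipelines need data integrity checks and ongoing accuracy monitoring to catch silent corruption.

```python
# Toy demonstration of data poisoning: an attacker with write access to
# training data relabels one region of it so the model systematically misses
# activity carrying the attacker's own fingerprint. Entirely synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(4000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # 1 = "malicious", 0 = "benign"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clean = LogisticRegression().fit(X_tr, y_tr)

# Poisoning: everything with a high value of feature 0 (the attacker's
# fingerprint) is relabeled "benign" in the training set.
y_pois = np.where(X_tr[:, 0] > 0.5, 0, y_tr)
poisoned = LogisticRegression().fit(X_tr, y_pois)

print(f"Clean model accuracy:    {clean.score(X_te, y_te):.2f}")
print(f"Poisoned model accuracy: {poisoned.score(X_te, y_te):.2f}")
```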

Conclusion 

AI is a powerful tool changing many approaches to business and technology within the healthcare and cybersecurity industries. Thus, our security and risk professionals must understand AI, its best practices, and how to use it appropriately to support diagnosis and care delivery models and improve our systems’ security.

AI tools like ChatGPT are attractive to healthcare professionals thanks to the efficiency and productivity advantages they offer, but employees of healthcare organizations should approach these tools with extreme caution. And because these tools are so easy for employees to access, healthcare organizations should implement policies preventing employees from using new technology, AI or otherwise, without approval. At a minimum, organizations should bar the entry of any ePHI or confidential information into these unvetted tools or systems.
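As one concrete way to operationalize that minimum bar, here is a minimal sketch of a pattern-based check that screens text for PHI-like content before it reaches an unvetted external tool. The patterns and sample prompt are illustrative assumptions; a real DLP control would be far more thorough than a few regexes.

```python
# Sketch of a "block ePHI at the door" check: scan outbound text for
# PHI-like patterns before it can be sent to an unvetted external AI tool.
# Patterns below are hypothetical examples, not a complete PHI definition.
import re

PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def check_for_phi(text: str) -> list[str]:
    """Return the names of any PHI-like patterns found in the text."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

prompt = "Draft an appeal letter for patient MRN: 84213307, DOB 04/12/1961."
findings = check_for_phi(prompt)
if findings:
    print(f"Blocked: prompt appears to contain ePHI ({', '.join(findings)}).")
else:
    print("No PHI patterns found; prompt may proceed per policy.")
```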

Since the economics of cyberattacks favor malicious actors, they are likely to continue using AI for this purpose.

Until we know the full extent of AI’s capabilities to support attacks, we need to be ready to protect our systems in various ways and prevent new vulnerabilities from emerging as a result of AI’s growing adoption in healthcare.
