Determining the Trustworthiness of AI in Healthcare

As organizations across industries leverage generative AI in new and exciting ways, healthcare must proceed more cautiously. AI holds incredible promise for countless healthcare use cases, from faster, more accurate diagnosis and treatment to efficiency gains across administrative functions. But healthcare providers, and the digital health partners critical to advancing care, create, store, and transmit highly sensitive patient information. Leaders across the healthcare ecosystem don’t have the luxury of experimenting with AI without seriously considering the potential impact on patient safety and outcomes should the AI be compromised or used to gain access to the systems and networks that are critical to delivering patient care.

Information Security Media Group (ISMG) recently published its first AI survey, spanning multiple industries, along with a healthcare edition of the survey report.

ISMG’s healthcare edition found that while 15% of respondents to the cross-industry survey said they had already implemented generative AI in production, 0% of healthcare respondents said they had done so. However, 30% of the healthcare cybersecurity leaders who participated said their employees are allowed to use generative AI on their own initiative, and 57% of business leaders and 43% of cybersecurity leaders in healthcare organizations said they plan to purchase AI-driven solutions within the next 12 months.

For a full breakdown of survey results and takeaways from the data, download ISMG’s report here.

Clearwater’s Vice President of Consulting Services, Dave Bailey, talked with ISMG about the survey results and what promises and risks AI holds for healthcare organizations.

Bailey says it’s rare that a conversation with a cybersecurity leader right now doesn’t involve AI at some point: “It’s here; everyone has to be ready and prepared.”

The Basics Still Hold True

Asked what healthcare organizations should be thinking about when it comes to security and privacy, Bailey goes back to the basics and stresses that they haven’t changed.

Failure to conduct an accurate and thorough risk analysis continues to be one of the most common reasons a healthcare organization finds itself in the crosshairs of regulatory bodies like the Office for Civil Rights (OCR). The same best practices that are critical to managing cyber risk hold true in the face of AI.

“People need to understand their AI risks and the potential impacts to the data and to patients. Can you demonstrate that you’ve implemented reasonable and appropriate controls to protect the data?” says Bailey.  

Within this risk analysis process, Bailey stresses the importance of understanding what happens to data when it’s used in an AI’s learning model. The biggest difference between a traditional data set and an AI data set is that the data undergoes some level of change, even continual change. Bailey says if there’s protected data in an AI data set, it’s even more important to understand the risks and ensure the proper controls are in place to mitigate them.
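As one illustrative example of what such a control might look like, the minimal Python sketch below strips direct identifiers from a record before it is added to a training data set. The field names and patterns are assumptions made for illustration only; they are not part of Bailey’s guidance or any specific product.

```python
import re

# Hypothetical list of direct identifiers to exclude from training data.
DIRECT_IDENTIFIERS = {"name", "ssn", "mrn", "dob", "address", "phone"}
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub_record(record: dict) -> dict:
    """Return a copy of the record with direct identifier fields removed
    and SSN-like strings in free text replaced with a placeholder."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if isinstance(cleaned.get("notes"), str):
        cleaned["notes"] = SSN_PATTERN.sub("[REDACTED]", cleaned["notes"])
    return cleaned

# Example usage with a made-up record.
record = {
    "mrn": "123456",
    "name": "Jane Doe",
    "diagnosis": "hypertension",
    "notes": "Patient SSN 123-45-6789 on file.",
}
print(scrub_record(record))
# {'diagnosis': 'hypertension', 'notes': 'Patient SSN [REDACTED] on file.'}
```

A real de-identification control would be far more thorough, but the point is the same: know what protected data could enter a learning model and remove or transform it before it does.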

Recommended Best Practices

So how can healthcare cybersecurity leaders, from provider organizations to their business associates and partners, start implementing security and privacy best practices around AI now?

Bailey says, “Everything should start at the top with a really good governance structure. Ensure that you have governance around the use of AI and all your data.”

Secondly, Bailey recommends implementing a standard risk management framework, like NIST’s AI Risk Management Framework (AI RMF) introduced in early 2023. Bailey says adopting a framework like this helps organizations build confidence that their approach to AI governance aligns with NIST guidance, gives them a detailed plan for addressing gaps, drives organizational dialogue around managing AI risk, and will be key to determining the trustworthiness of AI systems so they can unleash the benefits while managing the risks.

Finally, Bailey says that AI will likely change the way we all think about the lifecycle of data and data governance: “We all have to recognize that we may have to understand and change some of our practices.”
