An AI Risk Management Framework Can Decrease Risks While Accelerating Adoption
Healthcare organizations are rushing to adopt artificial intelligence (AI), and it’s rapidly transforming the industry. From improving patient care to streamlining administrative tasks, AI has a major impact on healthcare delivery.
A report from Accenture, for example, found that 98% of healthcare executives say generative AI is ushering in a new era of enterprise intelligence. Similarly, a report from Optum found that 85% of healthcare leaders indicated their organization plans to implement an AI strategy, with 48% saying they already have one in place.
Why is this happening so quickly in an industry that, pre-COVID, had long been hesitant to adopt new technologies early? Because AI is proving it can quickly and cost-effectively:
- Ease administrative burden
- Improve patient outcomes
- Reduce costs
- Automate manual tasks
- Decrease the chance of human error
Benefits of AI for Healthcare
AI is being used to improve and innovate patient care in a number of ways, for example:
- Diagnostic tools help doctors diagnose diseases more accurately and efficiently. For example, AI can analyze medical images such as X-rays and MRI scans to identify abnormalities the human eye may miss.
- Personalized treatment plans can be tailored to the individual patient’s needs. For example, AI can analyze a patient’s medical history, genetic makeup, and other factors to establish the best course of treatment.
- Remote patient monitoring can help doctors track patient health remotely and identify potential problems early on. It can be used to monitor blood sugar levels, heart rate, and other vital signs.
On an administrative level, AI is streamlining tasks like:
- Coding medical procedures and processing insurance claims more accurately and efficiently, leading to faster reimbursements from insurance companies and reduced costs.
- Scheduling appointments more efficiently, reducing patient wait times.
AI can also help healthcare providers reduce costs by:
- Identifying and preventing fraudulent insurance claims
- Managing and optimizing supply chains
- Conducting predictive maintenance to identify and prevent equipment failures and reduce downtime and costs
But AI also introduces new risks into healthcare’s already complex environment, for example:
- Security and privacy risks: AI systems collect and store vast amounts of sensitive patient data, a valuable target for cybercriminals who can steal, alter, or misuse it.
- Bias and discrimination: AI systems trained on historical data may reflect existing biases and inequities in healthcare, leading to biased decisions, such as denying care to certain patients or recommending treatments that are not appropriate for all patients.
- Lack of transparency and accountability: It can be difficult to understand how AI models make decisions, increasing accountability challenges.
In addition to these risks, threat actors use AI to design and execute attacks, such as:
- Development of phishing emails
- Impersonation attacks
- Rapid exploitation of vulnerabilities
- Development of complex malware code
- Deeper target reconnaissance
- Automation of attacks
The speed and complexity of AI-powered cyberattack strategies can easily overwhelm human defenses and make ransomware widespread and more evasive.
AI Risk Management Frameworks
AI risk management is a key component of developing and using AI systems responsibly and of protecting them from potential cyberattacks. By implementing responsible AI practices, your organization can make better-informed decisions about AI design, development, acquisition, and use, and about each system’s intended aim and value. As a healthcare organization, you need trustworthy systems that keep AI safe for all patients, including protecting data privacy and security, but knowing exactly how to do this isn’t easy. This is where an AI risk management framework comes in.
An AI risk management framework can provide a structured approach to identify, assess, and manage AI risks. There are several risk management frameworks available to address AI risk, for example:
- The NIST Artificial Intelligence Risk Management Framework (AI RMF)
- ISO/IEC 27005 Information Security Risk Management
- The Control Objectives for Information and Related Technologies (COBIT 2019) framework
The NIST AI Risk Management Framework addresses risk through four core functions: govern, map, measure, and manage.
- Govern: A cross-cutting function that informs and infuses the other three functions.
- Map: Establishes the context to frame risks related to an AI system; the context is recognized and risks related to it are identified.
- Measure: Employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts.
- Manage: Risks are prioritized and acted upon based on projected impact, with risk resources allocated to mapped and measured risks regularly and as defined by the govern function.
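As a rough illustration, the map, measure, and manage functions can be sketched as a simple risk register. The risk names, likelihood/impact scores, and threshold below are hypothetical, not drawn from the NIST framework itself:

```python
# Hypothetical sketch of a map -> measure -> manage workflow for AI risks.
# Risk names, likelihood/impact scores, and the threshold are illustrative only.

# Map: identify risks in the context of a specific AI system
risks = [
    {"name": "PHI data breach", "likelihood": 4, "impact": 5},
    {"name": "Biased triage model", "likelihood": 3, "impact": 4},
    {"name": "Model downtime", "likelihood": 2, "impact": 2},
]

# Measure: quantify each risk (here, a simple likelihood x impact score)
for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Manage: prioritize by projected impact and flag risks above a threshold
THRESHOLD = 10
prioritized = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in prioritized:
    action = "mitigate" if r["score"] >= THRESHOLD else "monitor"
    print(f'{r["name"]}: score {r["score"]} -> {action}')
```

In practice, the govern function would define who sets the scoring method and threshold, and how often the register is reviewed.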
As a healthcare organization, you should choose an AI risk management framework appropriate to your size and complexity so it’s tailored to your specific needs. This will help you better protect your patients, their data, and your organization’s reputation, and enable you to:
- Identify and assess risks, including security and privacy risks, and the potential for bias and discrimination.
- Develop mitigation strategies to reduce the likelihood and impact of these risks. This may involve implementing security measures, developing policies to prevent bias and discrimination, and training staff on how to use AI systems responsibly.
- Monitor and evaluate risks continuously to identify emerging risks as AI technologies evolve.
An AI risk management framework can also help you look at risk from three core perspectives to better define which risks to prioritize for attention and mitigation.
1. Harm to people
- Individuals: Harm to a person’s civil liberties, rights, physical or psychological safety, or economic opportunity
- Group/community: Harm to a group, such as discrimination against a population sub-group
- Societal: Harm to democratic participation or educational access
2. Harm to your organization
- Harm to an organization’s business operations
- Harm to an organization from security breaches or monetary loss
- Harm to an organization’s reputation
3. Harm to your ecosystem
- Harm to interconnected and interdependent elements and resources
- Harm to the global financial system, supply chain, or interrelated systems
- Harm to natural resources, the environment, and the planet
Assessing Your AI Risk Management Maturity
Once you have an AI risk management framework in place, you should periodically assess your adherence to and degree of adoption of its controls using a maturity scale like the one outlined in the Control Objectives for Information and Related Technologies (COBIT) 2019 framework:
- Rating 0: Incomplete
- The process lacks capability, or the control is not in place.
- Rating 1: Performed
- An incomplete set of activities is applied.
- Rating 2: Managed
- The process achieves its purpose but is largely informal.
- Rating 3: Established
- The process achieves its purpose and is typically well-defined.
- Rating 4: Predictable
- Performance is measured quantitatively.
- Rating 5: Optimized
- Continuous improvement is pursued.
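To make the scale concrete, a self-assessment against these ratings could be sketched as follows; the control names, assigned ratings, and target level are hypothetical examples, not taken from COBIT itself:

```python
# Hypothetical maturity self-assessment against a 0-5 capability scale.
# Control names, assigned ratings, and the target level are illustrative only.

RATINGS = {
    0: "Incomplete", 1: "Performed", 2: "Managed",
    3: "Established", 4: "Predictable", 5: "Optimized",
}

assessment = {
    "AI model inventory": 3,
    "Bias testing before deployment": 1,
    "Patient data access controls": 4,
}

for control, rating in assessment.items():
    print(f"{control}: level {rating} ({RATINGS[rating]})")

# Flag controls below a target maturity level for remediation
TARGET = 3
gaps = [c for c, r in assessment.items() if r < TARGET]
print("Needs attention:", gaps)
```

Repeating an assessment like this periodically shows whether each control is maturing toward its target level or slipping.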
The goal is to answer questions such as:
- Do you have a process in place?
- Is it effective?
- Does it achieve its purpose?
- Is it well-defined?
- Is it reasonable and appropriate?
- Can you measure it?
- Is it a sound practice as outlined in the framework?
Aligning to an AI risk management framework will also help you establish your current AI posture and better understand risk identification and effective risk management practices. This is how you can go beyond implementing controls to better understand business use cases and effective safeguards, identify gaps and potential AI risk drivers, and determine what you can do to decrease that risk.
Need help establishing or maturing an AI risk management framework? Let’s schedule a call; our team would love to help.