Dive Brief:
Risks from artificial intelligence-backed products are the most significant technology hazards in the healthcare sector, according to a report published Thursday by research nonprofit ECRI.
Though AI has the potential to improve care, risks such as bias, inaccurate or misleading responses and performance degradation over time could cause patient harm, the analysis said.
Healthcare organizations need to think carefully when implementing AI tools, clearly define their goals, validate and monitor the tools’ performance, and insist on transparency from model developers, according to the safety and quality research firm.
Dive Insight:
The report, which ranked the top healthcare technology hazards that require “urgent attention” in the upcoming year, chose one of the sector’s most alluring emerging technologies for the top spot.
Healthcare leaders argue AI could help solve some of the industry’s most pernicious labor issues, like provider burnout or staff shortages. The tools can be used for a wide range of healthcare applications, from triaging critical imaging results to helping clinicians take notes or schedule patient appointments.
But if healthcare organizations don’t carefully assess and manage risks from AI, quality care and patient safety could suffer, according to the ECRI report.
For example, AI could perpetuate bias in the underlying data used to train models, potentially exacerbating existing health disparities. Model performance could also suffer when used with a patient population that doesn’t mirror its training data, leading to inaccurate or inappropriate responses.
“We have to be careful that when we are implementing an AI model, the population that it was trained on really matches the characteristics of the population that we want to use it on in the institution,” Francisco Rodriguez-Campos, a principal project officer at ECRI, said during a webinar about the report.
Hallucinations, or when an AI system gives inaccurate or misleading information, could be a risk for healthcare organizations too. Plus, model performance can degrade over time, especially with AI that is continuously ingesting new information or when clinical circumstances change.
Risks can also depend on how an organization implements AI — insufficient oversight, poor data management practices and too much trust in a model could all endanger patient care, according to ECRI.
The regulatory environment for AI in healthcare is a patchwork too, and the federal government is still working on an overarching strategy. Some AI systems for tasks like clinical documentation or scheduling appointments could have a significant impact on patient care, though they likely wouldn’t be regulated as medical devices by the Food and Drug Administration, according to ECRI.
As healthcare organizations adopt AI, they’ll need to set up an effective governance structure and ensure they train staff on the capabilities and limitations of the model. They should also validate the model’s performance, ideally with an outside source, and keep monitoring the system over time.
When buying an AI product, organizations should also demand transparency from technology firms, like what data the system is trained on, a clear explanation of how the AI works and metrics that show its performance under ideal conditions, according to ECRI.
However, AI isn’t the only technology threat to healthcare organizations. Cybersecurity incidents at their vendors — like third parties that provide their electronic health records or scheduling and billing services — could have a serious impact on patient care. ECRI ranked cyberthreats at vendors third on its hazard list.
The healthcare sector saw the potential ramifications of a cyberattack on a key vendor early this year, after an incident at major claims processor Change Healthcare slowed payments to providers for weeks and exposed data from 100 million Americans.
To mitigate potential harm, organizations should conduct vendor risk reviews, create redundancies for critical systems and develop incident response and recovery plans.
Response exercises shouldn’t be limited to cybersecurity teams either, said Kallie Smith, vice president and information security officer at ECRI.
“They really should be done in congruence with your healthcare providers in those particular settings where they’re providing care, so that you can identify what an outage might mean to those that are actually providing that care,” Smith said during the webinar.