Editor’s note: This article includes insights from Healthcare Dive’s recent live event, “AI and the Future of Healthcare.” You can watch the full event here.
Healthcare organizations face a number of barriers to adopting artificial intelligence tools. Providers must address concerns from patients, while payers and life sciences companies struggle to balance promises of efficiency against ethical concerns like bias.
They do it all on the promise that AI will automate rote tasks, cut down on medical spending and waste, free up clinicians to spend more time with patients and transform the healthcare industry.
But more than two years after generative AI became popular with the general public through ChatGPT, healthcare is still catching up on how to regulate, test and implement the tools, experts said during a panel hosted by Healthcare Dive on Nov. 19.
Here are tips from eight healthcare experts on what organizations should consider when implementing AI and how to develop standards and regulations.
How providers can vet AI tools
The first step providers should take when deciding whether to integrate an AI tool is to assess the clinical setting it will be used in. Not every tool is appropriate for every task in the clinic, according to Sonya Makhni, medical director for the Mayo Clinic Platform.
“An AI algorithm that might be really good and appropriate for my clinical setting might not be as appropriate for another and vice versa,” Makhni said. “Health systems need to understand … what to look for, and they need to understand their patient population so they can make an informed decision on their own.”
Although providers must assess AI tools, they face hurdles doing so: the algorithms and underlying models can be difficult to understand, she added.
“Our workforce is strained as it is,” Makhni said. “We can’t really expect everyone to go get a master’s degree in data science or artificial intelligence to be able to interpret the solutions.”
To help solve this problem, providers should turn to public and private consortia — like the nonprofit Coalition for Health AI — for guiding principles when evaluating AI tools.
“We have learned a lot about what types of principles that solutions should adhere to,” Makhni said. “So, safety, fairness, usefulness, transparency, explainability, privacy, you know, all of these things we should use as a lens for when we look at AI solutions.”
Addressing patient concerns
Once providers decide to integrate AI tools, they face another potential stumbling block: their patients.
As AI tools become more popular, patients have expressed reservations about the technology being used at the doctor’s office. Last year, 60% of surveyed American adults told Pew Research they would be uncomfortable if their provider relied on AI for their medical care.
To make patients more comfortable, Maulin Shah, chief medical information officer at health system Providence, said clinicians should emphasize that, for now, AI plays a purely supportive role.
“AI is really just, in a lot of ways, a better way of supporting and providing decision support to your doc[tor], so that they aren’t missing things or so they can be suggesting things,” Shah said.
Although AI tools have only just become popular with the general public, patients may feel better knowing that AI has existed in the medical field for a long time.
Aarti Ravikumar, chief medical information officer at Atlantic Health System, pointed to transformational tools such as the artificial pancreas, or hybrid closed-loop insulin pump, which has become a “game changer” for insulin-dependent patients.
“All of that work is being done using artificial intelligence algorithms,” Ravikumar said. “So we have AI tools that are embedded within our medical devices or within our electronic medical record, and have for a long time.”
“None of these tools are removing the clinician from that interaction or medical decision making,” Ravikumar said. “If we get to the stage that it’s going to automate decisions and remove the clinician from that decision-making process, I think then we’ll have to definitely explain a lot more.”
Tackling errors and biases
Every organization will be forced to address glitches when integrating AI models. But the stakes are higher in healthcare, where bias and hallucinations, instances in which AI tools produce false or misleading information, can disrupt patient care.
Providers aren’t the only healthcare organizations grappling with bias. Payers have faced backlash for using AI tools to deny medical care, and tech firms have been accused of creating tools that compound existing healthcare disparities.
It’s essential for generative AI companies to keep a human in the loop, including by gathering feedback from users such as experts, nurses and clinicians, according to Aashima Gupta, global director of healthcare at Google Cloud.
“To me, that feedback input will make gen AI more effective for a given use case,” Gupta said.
AI companies should also thoroughly test their models. At Google, dedicated teams try to break AI tools through trickery, such as attempting to prompt a model into giving an incorrect answer to a question. Robust development and keeping humans in the loop go “hand in hand” with staying on top of errors, she added.
But while organizations should be cautious about errors and bias, AI tools could also represent an opportunity to mitigate existing bias in healthcare, said Jess Lamb, a partner at consultancy McKinsey.
“There is a ton of bias in the healthcare system before we introduce AI, right? And so we have to remember we are not starting from a perfect place,” Lamb said. “The idea that we can actually use AI and use some of this deliberate monitoring to actually improve some of that in-going position that we’re in when it comes to bias in healthcare, I think is actually a huge opportunity.”
“We always talk about the downside risk of bias when it comes to using AI, but I actually think there’s a pretty big upside here as well to kind of mitigate some of the existing biases that we see in the system,” she added.
Developing regulations, standards for healthcare AI
While healthcare organizations are deciding whether to implement AI, the federal government and private consortia are grappling with how to regulate it.
Although the federal government has made incremental progress toward regulating the tools, including through rulemaking, industry standards are still in their early days.
AI adoption has accelerated since the technology became mainstream roughly two years ago, amplifying pressure on the government to enact regulations, said Micky Tripathi, assistant secretary for technology policy and acting chief AI officer at HHS.
Going forward, partnerships between the government and private industry will be critical, Tripathi said.
“There is a maturation process that’s going to go on here that I think is very much going to be a public, private thing,” he said.
Tripathi also wonders what will compel private industry to adopt its own standards and certifications for AI tools. Elsewhere in the industry, the government already sets standards that electronic health record companies can use to seek voluntary certification.
“For example, what will drive the Providence Healths of the world to feel compelled to either use or get some kind of certification, or some kind of approval … from an organization that is providing certain services for validating AI models?” he said. “Right now, that would just be a pure cost to either a developer who’s developing these solutions or to a provider organization who’s implementing them.”
While consortia can provide high-level frameworks for AI, organizations also need open standards to help address clinical AI use cases at the ground level, said Sara Vaezy, chief strategy and digital officer at Providence.
“We need open standards similar to all of the progress that has been made around interoperability,” Vaezy said.
“The challenge today is that the consortia are far from where the work is happening for us, and we need to close that gap quickly. And I think one of the ways that we can do that is through the creation of open standards,” she added.
Training providers needs to happen alongside creating standards, according to Reid Blackman, founder and CEO of consultancy Virtue. Training can also help fill gaps in AI regulation and governance.
“You can do a lot to educate the average doctor, nurse, etc., to what those risks are,” he added.
“Training is an essential part of, I don’t want to say guardrails, but it’s an essential part of making sure things don’t go sideways,” Blackman said.