As hype grows around artificial intelligence in healthcare, a seemingly paradoxical question is setting off faint warning bells in the minds of skeptics and proponents alike: Could AI tools meant to reduce clinician burden actually increase it?
It’s too soon to say, given the technology’s nascency. But the answer — which has big implications for the future adoption of AI in the industry — depends in part on how intensely clinicians have to fact-check models, along with the financial incentives of the fee-for-service payment model that dominates U.S. healthcare, according to experts.
All told, AI products could deliver weaker time savings than developers promise, and any time saved could quickly be filled by additional patient visits as hospitals and medical groups hustle to increase earnings.
“That’s absolutely a worry,” said Graham Walker, the co-director for advanced development at Kaiser Permanente’s medical group, during a panel at the HIMSS healthcare conference in Las Vegas. “The easiest way to get more revenue out of your healthcare system is by telling the doctors and [nurse practitioners] and pharmacists to go faster and see maybe just one more patient.”
‘The tools aren’t perfect’
The goal of many AI tools in healthcare is to reduce administrative friction by helping with rote tasks like documenting patient visits, looking up medical information or filing paperwork. Such tasks contribute to clinician burnout, which is a major problem in the industry and can lead physicians to exit the field.
About one-third of doctors responding to an American Medical Association survey in 2023 said they were interested in or planning to leave their jobs in the next few years. Many cited work overload as the reason: According to some research, healthcare workers spend the majority of their week on administrative tasks.
Enter AI. Algorithms have been in use by healthcare companies for decades, but excitement around the tech has reached a fever pitch in the past few years with advancements like generative AI, which can create original text, and AI agents, which can perform tasks without human oversight.
A number of companies are folding the technology into sleekly packaged products, touting time-saving metrics that may seem almost too good to be true to clinicians sick of after-hours ‘pajama time’ spent on documentation.
Abridge, a startup that uses ambient listening and generative AI to automate clinical notetaking, and Nabla, another ambient AI assistant, both claim to save providers about two hours each day.
Software giant Oracle has woven AI into its health records platform for providers, including a clinical AI assistant. Physicians using that tool see a 30% decrease in documentation time, according to the company.
Meanwhile, Microsoft says its AI documentation product saves doctors five minutes on average per patient visit.
“The capabilities that AI scribes bring to the practice of care are remarkable,” said Rohit Chandra, the chief digital officer at the Cleveland Clinic, during a panel. (The academic medical system is currently rolling out an AI documentation software from Ambience across its provider network.)
Documentation products tackle an acute pain point for doctors, while being relatively simple to use, easy to implement and safe, because they’re supervised by clinicians, Chandra said.
However, AI performance metrics such as time saved should be taken with a grain of salt, according to experts.
Clinicians still need to review the AI’s output to catch mistakes. That review process, known as keeping a ‘human in the loop’, is key to ensuring accuracy and building trust in AI, experts say.
Letting AI tools operate unsupervised could result in medical records containing made-up symptoms, or missing information that could be key to a patient’s health.
“That’s where I think we can get into trouble. The technology is going to get to the point where, at least in my opinion, I don’t think we’re going to need to worry as much. But as we think about early adoption, we do have to be cognizant that it is going to make mistakes,” Brenton Hill, head of operations at health AI standards group the Coalition of Health AI, said during a panel.
But deputizing clinicians to police what AI generates can cut significantly into time savings or, in some cases, erase them altogether.
“The tools aren’t perfect,” said Deborah Edberg, a family medicine physician with CVS-owned primary care chain Oak Street Health, during a panel. “We use AI to do our documentation and I do spend quite a bit of time going back and editing … It can be a bit of a burden to make sure that what is recorded is accurate.”
To date, there have been no extensive, independent reviews of AI scribes in healthcare. But one recent study published in JAMA Network Open found Microsoft’s ambient scribe received mixed feedback from users.
“A recurring theme was the need for substantial editing and proofreading of the AI-generated notes, which sometimes offset the time saved,” the researchers wrote.
Time savings may not reach clinicians
Fee-for-service incentives could also prove a problem. The payment structure, which underpins the majority of healthcare spending in the U.S., rewards providers based on the volume of visits, tests and procedures they perform.
Meanwhile, despite a recovery in operating margins since the COVID-19 pandemic, costs are continuing to rise for hospital operators.
As a result, hospitals and other physician employers are incentivized to pack their clinicians’ schedules as tightly as possible to bring in as much revenue as possible. Any time freed up for doctors by AI might be filled with another visit, experts say.
“It could happen. And some organizations want to do that. And from our perspective, that’s an organizational decision,” Rachel Wilkes, the corporate lead for generative AI initiatives at EHR vendor Meditech, said. “If they want to schedule more patients with time savings, they can do that.”
Things that improve the productivity of the healthcare system — like reducing patient no-shows — often result in a heavier visit load for physicians, said Seth Howard, executive vice president of research and development at Epic.
“The conversation between the health system and the doctors needs to be, ‘What do we do with those time savings?’ Does it go back into helping the doctor have some flexibility during the day? … Or for doctors to see more patients?” Howard said.
Vendors are aware that the draw of AI products for hospitals includes the potential to increase revenue.
Along with saving five minutes per encounter, Microsoft’s ambient AI frees up “13 additional appointment slots per provider, per month,” according to marketing materials.
The need to bolster flagging finances is one reason hospitals are ravenous for the tools, experts say. Demand has been skyrocketing, with the CEO of one company using AI to automate provider workflows likening it to “a fairy tale.”
“Our ambient product went from zero appointments a year two years ago, to a couple hundred thousand last year, to on track for 10 million this year,” said Tanay Tandon, the CEO of Commure, during a panel. “That sort of expansion and usage of AI in healthcare is incredibly exciting. That’s no longer your nerdy technical doctors playing around with it in the corner. This is meaningful use.”
Experts say that, as with any other technology, AI scribes, assistants and similar tools come with pros and cons. And metrics besides time saved matter to physicians, too — like a lighter mental load from simplified tasks, or greater job satisfaction from having more attention to devote to clinical care.
And anecdotal feedback from clinicians has been glowing, according to the tech giants, AI scribe startups and EHR vendors weaving the technology into their workflows.
Epic, for example, doesn’t have an average figure for how much time its own ambient documentation tool saves. But according to Howard, “physicians say, very commonly, ‘I could never go back to how I did documentation before.’”