Tech companies are trying to get healthcare companies to adopt artificial intelligence tools, hoping they can pull in revenue by appealing to the industry’s need to reduce costs and tackle clinician burden.
Google is one such company. The search and cloud computing giant has unveiled a number of health AI products in the past few years, including a large language model called Med-PaLM trained specifically on medical data, generative AI products for healthcare organizations and a platform to help companies create their own AI agents.
Google also offers a tool that allows clinicians to search for information in patient notes and other medical documentation and answer questions about it. That product, called Vertex AI Search for Healthcare, has ramped up quickly since launching last March. Earlier this month, Google announced that Vertex AI Search is now multimodal, meaning it can understand images as well as text, allowing the tool to comb charts and scans for relevant information.
Google has inked deals with major EHR vendors and leading health systems to integrate its AI into their workflows. But not everyone is gung ho about AI, especially as the models become more advanced and the potential for mistakes grows.
Concerns include hallucinations, when AI makes up a response; omissions, when AI leaves important information out; and model drift, when AI becomes less reliable over time. Meanwhile, fledgling oversight and a lack of regulation are also hampering adoption as the industry confronts weighty questions of accuracy, privacy and bias.
Healthcare Dive sat down with Aashima Gupta, the head of healthcare for Google Cloud, to chat about Google’s work in health AI and the future of the technology — including how AI is evolving from a task-based helper to a collaborator for clinicians, uncertainty from the Trump administration and why she’s excited about agentic AI.
Editor’s Note: This interview has been edited for clarity and brevity.
HEALTHCARE DIVE: Vertex AI Search for Healthcare can now understand images. Why was this a necessary update?
AASHIMA GUPTA: Healthcare information is scattered in different forms and types. During an annual diabetes foot exam, for example, when a physician is looking for ulcers, they mark on a diagram of a foot where there’s a callus, pre-ulcer or ulcer, using different symbols. Healthcare is full of examinations like this. Now, with multimodal, you’re able to see and contextualize that diagram of a foot and extract that information, including any potential ulcers, and put it into the medical record automatically. That saves clinicians time, because they don’t need to interpret this themselves.
Healthcare, as an industry, has a lot of paperwork. We talk about burnout. These are the types of innovations we want to add to Search to reduce it. Last year, we said Search was ‘semantic,’ meaning it knows what people mean when they say ‘diabetes’ or ‘A1C’ — it knows clinical concepts and how they’re related. Now we’re taking that and applying it to forms in an exam room that contain different pictures. So our results are much more accurate and helpful.
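For developers, Vertex AI Search is exposed through Google Cloud’s Discovery Engine API. Below is a minimal sketch of a plain-text query against a clinical document store, assuming the google-cloud-discoveryengine Python client; the project, location and data store identifiers are placeholders, and the newly announced multimodal request options are not shown here.

```python
from google.cloud import discoveryengine_v1 as discoveryengine

# Placeholder identifiers -- substitute your own project and data store.
PROJECT_ID = "my-project"
LOCATION = "us"
DATA_STORE_ID = "clinical-notes-store"

client = discoveryengine.SearchServiceClient()

serving_config = (
    f"projects/{PROJECT_ID}/locations/{LOCATION}"
    f"/collections/default_collection/dataStores/{DATA_STORE_ID}"
    f"/servingConfigs/default_search"
)

# A semantic query: the service resolves clinical concepts such as
# "A1C" rather than relying on exact keyword matches.
request = discoveryengine.SearchRequest(
    serving_config=serving_config,
    query="most recent A1C result and any documented foot ulcers",
    page_size=5,
)

for result in client.search(request):
    print(result.document.id)
```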
What other inputs might Google want to add to the search-and-answer tool — sound?
You’re right. We will continue to add different modalities here, such as sound and video.
What else is Google working on in healthcare that you’re excited about?
We are very excited about agentic AI. The last few years have been a lot about generative AI, which is task-based — ‘Give me a discharge summary. Give me a nurse handoff. Write me a referral.’
AI agents are a leap forward, because they can think multiple steps ahead. They can plan unique steps for the goal in mind.
Imagine this in healthcare workflows. Let’s say I want to figure out, in my revenue cycle, where there’s variability by payer type, market, for a certain CPT code — that’s a multistep process. And that’s what agentic AI offers. Imagine hundreds of agents helping nurses and physicians do their jobs.
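One step of the revenue cycle analysis Gupta sketches can be pictured as a simple group-and-measure operation. Here is a hedged Python illustration of what a single step in such an agent’s plan might compute; the claims table and its column names are invented for this example.

```python
import pandas as pd

# Hypothetical claims extract -- the columns and values are illustrative only.
claims = pd.DataFrame({
    "payer_type": ["commercial"] * 4 + ["medicare"] * 4,
    "market": ["north", "north", "south", "south"] * 2,
    "cpt_code": ["99213"] * 8,
    "allowed_amount": [92.0, 88.5, 74.5, 71.0, 81.0, 80.2, 79.8, 80.5],
})

# One step an agent might plan: group by payer type, market and CPT code,
# then surface the groups with the widest spread in allowed amounts.
variability = (
    claims.groupby(["payer_type", "market", "cpt_code"])["allowed_amount"]
    .agg(["mean", "std", "count"])
    .reset_index()
    .sort_values("std", ascending=False)
)
print(variability)
```

A true agent would chain steps like this, deciding on its own which cuts of the data to examine next.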
Given the number of companies offering AI agents, what’s Google’s elevator pitch?
We believe there’s a need for centralized coordination and management. That’s the platform we provide, and we give agents the tools.
So if a company is building their first-party agent, they have three things: an ability to use [Google’s family of large language models] Gemini, our search functionality based on clinical knowledge and an orchestration layer.
We believe that people have choice. Some really want to build it themselves; some want a partner. How do you organize all of that? That’s where we want to be.
Google doesn’t publicly share the number of healthcare organizations using its AI. How would you characterize uptake?
We have relationships with Bassett Healthcare, Highmark Health, Mayo Clinic, HCA — customers using generative AI to streamline prior authorization submissions, close screening gaps, enhance radiology workflows and more.
Healthcare used to be a laggard in technology adoption. But this AI is different — this is actually going into the back office, the revenue cycle, claims, to tackle burnout. You are seeing this move much, much faster. You are seeing generative AI pilots move from experimentation to scaling.
As for agentic AI, I’d say healthcare companies are definitely adopting it. But it’s a newer space.
Generative AI is becoming more integrated into healthcare workflows, despite the technology’s tendency to make mistakes. Does Google track error rates for its AI products?
Not only us — customers are building guardrails. At HCA, for example, we worked with them on a very robust evaluation framework, not just to catch errors but also to ask: Is this model reliable? Is it hallucinating? Did it omit something? So we provide those tools for an evaluation framework, and for all of this there’s a human in the loop capturing the feedback, and that feedback loop then makes the model more effective. That’s what gives me comfort. That’s best practices being deployed.
But no data around accuracy that you’ll share? Is it accurate enough that you’re comfortable with increasing adoption?
Our health systems have put humans in the loop, and there’s change management around that. Technology by itself is not enough. You need robust change management, and to have a human in the loop and make sure they’re evaluating this. These processes are in place before the AI is brought in.
There are checks like grounding, or citing where the AI pulled information from. That helps combat hallucination. For omission, when the AI leaves something out, this is where we’re seeing evaluation frameworks. We’ll say, ‘Okay, did this result omit something pertinent?’ And after that feedback loop runs a few times with human oversight, the model learns and the next result is better.
Also, use cases matter. This is why AI is being used more in administrative tasks.
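In code, the grounding and omission checks Gupta describes might look something like the toy sketch below. The substring matching is deliberately naive and purely illustrative; production evaluation frameworks use far more sophisticated comparisons, and a human reviewer stays in the loop on anything flagged.

```python
def grounding_check(summary_sentences, source_text):
    """Flag summary sentences with no supporting text in the source document."""
    source = source_text.lower()
    return [s for s in summary_sentences if s.lower().rstrip(".") not in source]

def omission_check(summary_text, required_facts):
    """Flag clinically pertinent facts missing from the summary."""
    summary = summary_text.lower()
    return [f for f in required_facts if f.lower() not in summary]

source = "Exam notes a pre-ulcer callus on the left heel. Most recent A1C: 8.1."
summary = ["Exam notes a pre-ulcer callus on the left heel"]

print(grounding_check(summary, source))            # [] -> everything grounded
print(omission_check(" ".join(summary), ["A1C"]))  # ['A1C'] -> omitted, flag for review
```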
The Biden administration was working to create a federal governance framework for health AI, but Trump has canceled those efforts and is instead focusing on deregulation. What are your thoughts on that approach?
At Google, we have our own AI principles and processes. But while self-regulation is good, we don’t believe that it’s sufficient. We believe AI is too important to not be regulated. And I will stop at that.
Do you think we’ll see anything from the Trump administration around oversight?
It’s hard to say at this point. I think we are trying to figure out how we best work with them, share our best practices with them. Too early to say. I think the entire healthcare community is waiting for that.