Vital Co-founder and CEO Aaron Patzer recently appeared on The Collective Voice of Health IT podcast produced by the Workgroup for Electronic Data Interchange. In this blog post, we feature some highlights from his discussion with host Matthew Albright.
Let’s start with your origin story, including founding Mint.com.
I’m an engineer through and through. And so I care a lot about building technology. Now I’m a CEO, so my engineering team has banned me from programming, but I still do a little bit on the side.
Mint.com was one of those things that took off like a rocket ship. When we sold it to Intuit, we had 3 or 4 million users, which eventually got up to 25 million people using Mint. It was a big data collection problem. We connected to 12,000 different banks, brokerages, and mortgage lenders.
That’s not dissimilar to the data problem that we face in health care—connecting to all those different data silos, insurance information, pricing information, etc. It’s an absolute mess. So, I’ve been fortunate enough to build products that have been used by hundreds of millions of people. Vital now has over a million patients using it every year for patient experience in 100+ hospitals.
I love that story because the thread is certainly the building idea—the engineering aspect—but also the consumer-centric idea.
Yes, to me it is an absolute tragedy that providers hand people a piece of paper that says things like “your BUN is 26 milligrams per deciliter.” And you’re like, great, what do I do with that information? But our software translates that into things like, “This tests your kidney function, and it’s looking a little low.” So we’re putting everything into human terms. And as a result, we get really, really high engagement. About 60% of all patients who are offered our patient experience software actually use it. Others in this space only see about 10% usage because they’re not really patient-centric.
That reminds me of a study in which, in some cases, the AI interacting with the patient was rated as more empathetic than even the physician.
Yes, but we need the human touch. Your doctors and nurses and techs are the ones who are providing the care. They're also rushed. So what you can do with AI is provide a really personalized experience.
The first thing is deciding what to present. The software experience for somebody who is giving birth should be very different than a person who is having hip replacement surgery, which should be very different than someone in the ED for chest pain.
So, we use AI to look at your doctors’ and nurses’ notes, the reason for visit, your medical history, etc., and use that to, for example, assign patient education automatically. And as things change in real time—as lab results come in, as the doctor discovers new things—the AI adapts.
There’s also the 21st Century Cures Act, which requires the immediate release of doctors’ notes through the patient portal. The problem for patients is that they’re getting things like, your mother suffered a “cerebral infarction,” but to the average person, that’s a stroke. Or you get a note that says you’re getting a “hemiarthroplasty in the a.m. NPO at midnight.” NPO is an abbreviation of the Latin phrase nil per os, meaning “nothing by mouth.” AI translates that as, “don’t eat or drink after midnight” prior to your hip replacement.
So, speak to humans as if they're humans. And a way to do that is through artificial intelligence. That’s great.
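For the technically curious: Vital does this translation with large language models, but the shape of the problem can be sketched with a naive glossary lookup. Everything in this toy example—the glossary entries, the function name, the sample note—is illustrative only, not Vital’s implementation.

```python
# Naive glossary-based sketch of clinical-shorthand translation.
# A real system uses language models; a lookup table only illustrates
# the input/output shape of the problem. All terms are examples.
import re

GLOSSARY = {
    "cerebral infarction": "stroke",
    "hemiarthroplasty": "partial hip replacement",
    "NPO at midnight": "don't eat or drink after midnight",
}

def translate(note: str) -> str:
    """Replace known clinical terms with plain-language equivalents."""
    for term, plain in GLOSSARY.items():
        note = re.sub(re.escape(term), plain, note, flags=re.IGNORECASE)
    return note

print(translate("Hemiarthroplasty in the a.m. NPO at midnight."))
```

A lookup table breaks down quickly—context, tone, and ambiguity are exactly why language models are needed—but the example shows what “translation” means here: shorthand in, plain language out.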
I’m reminded of your recent Forbes article where you talked about the areas AI can help in health care. In addition to translating the lingo that doctors use, what else is changing with AI?
“Hyper-relevance” is what AI allows us to do. For example, when you're discharged from the hospital or an emergency room, you often get handed a 15-page packet of paper—your discharge instructions. It has the medications you should pick up at the pharmacy, what you should and shouldn't do, warning signs to look for, etc.
But it also has COVID procedures, which may now be two years out of date. And it says not to smoke or drink. It just sort of all gets put in there. Something like 80% of this is boilerplate stuff. And people just throw this away. If you want to see a tragedy, look in the trash bin outside an emergency room and it’s just full of the instructions that people should follow.
So, AI can parse this content. It knows that, say, 80% of this is boilerplate and can pull out, say, the top five things the patient needs to do. For example, number one, pick up these two medications. Here are the side effects. Then number two, set a follow-up appointment with this specialist. AI can easily parse this big document and help people with each step in patient-friendly language.
AI can help patients with follow up, too. I think the average person makes about five phone calls to schedule an appointment with a specialist. Say your doctor’s instructions are to see a cardiology specialist. You go online to find a doc and there are 200 in the Chicago area. There's interventional cardiology and pediatric cardiology and cardiac stress tests and… It’s mind numbing.
Which one of these am I supposed to go with? Do they take my insurance? Do they have a good cost? Do they have a good quality rating?
AI can actually crunch tens of millions of insurance records to figure out cost and quality and so on, to match people to the right doctor at the right time. It can be hyper-specific to exactly what’s in the doctor’s notes. So those are some of the ways that we're beginning to solve this problem.
Hyper-relevance. That synthesizes exactly what AI brings to the table. That's terrific.
So, we've heard a lot of hype. Of course, AI is in the papers every day and certainly in the industry trades for health care. There’s a real range on this: from near panic to knowing how to use AI and knowing that we need to make it safe. Any comment on that and where you think the industry should go?
First off, health care doesn’t use enough AI. At Vital, we use natural language processing techniques and large language models like those in ChatGPT—using things that were only invented, you know, three months ago, six months ago, or a couple of years ago. But most of the health care industry isn’t using what I’d call AI. It’s machine learning—things like logistic and linear regression.
So at Vital, we’re working on a sepsis prediction model to help doctors detect septic shock hours in advance. Almost everybody who's tried this, including a very large EHR vendor, has used only vital signs, lab results, and other categorical information.
But there’s other information available, such as free-form nurses’ notes: “This patient looks sluggish,” or “This person looks out of it, but they don't have a history of dementia.” So, maybe that's the beginning of septic shock.
It’s these little human things that are observed that can be picked up with advanced AI. If you’re not using the latest AI, you’re not doing enough.
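For readers who want a concrete picture, the idea of folding free-text observations into a prediction model alongside structured vitals can be sketched with off-the-shelf tools. This toy example uses synthetic data, hypothetical column names, and a simple classifier—it illustrates the mechanics, not anything Vital has disclosed about its model.

```python
# Toy sketch: adding free-text nurse-note features to a structured-data
# risk model. Data, columns, and labels are all made up for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical training rows: two vitals plus a free-text observation.
df = pd.DataFrame({
    "heart_rate": [118, 72, 124, 80, 130, 68],
    "temp_c": [38.9, 36.8, 39.2, 37.0, 38.5, 36.6],
    "note": [
        "patient looks sluggish and out of it",
        "alert and oriented, resting comfortably",
        "confused, skin mottled",
        "ambulating without difficulty",
        "lethargic, slow to respond",
        "no acute distress",
    ],
    "septic_shock": [1, 0, 1, 0, 1, 0],
})

# Pass vitals through unchanged; turn the note text into TF-IDF features
# so the classifier can learn from words like "sluggish" or "lethargic".
features = ColumnTransformer([
    ("vitals", "passthrough", ["heart_rate", "temp_c"]),
    ("text", TfidfVectorizer(), "note"),
])
model = Pipeline([
    ("features", features),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(df.drop(columns="septic_shock"), df["septic_shock"])

risk = model.predict_proba(df.drop(columns="septic_shock"))[:, 1]
```

The point of the sketch is the feature step: vitals-only models discard exactly the “little human things” in the notes, while a text vectorizer (or, better, a language model embedding) lets them into the prediction.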
Now, how do you keep it safe? How do you know it's working? First off, you should always have a doctor in the loop. AI is really good at what I would call general cases—the things that typically happen. You know, abdominal pain usually resolves this way. Or hip replacements will usually go that way. But doctors are really good at the exceptions and the rare cases.
I would be very skeptical of AI that diagnoses patients. But if you look at the way large language models work, they’re very good at summarizing things. They’re very good at finding synonyms. Very good at seeing patterns, like these 10 pain medications are related to one another because they co-occur in the same context. If you understand how the AI is actually trained, you can understand where it's safe to use it.
That said, a lot of this stuff can be a black box. So how do you test a black box? Well, you put it through its paces.
Right now, we have a translator for imaging results like CT scans, MRIs, and X-rays. But we’re working on translations for all doctors’ notes. So, we have our own physicians look at literally thousands of translations. Then they grade them on questions like: Is this an accurate translation? Is it missing information? Is the tone off? Is it unsafe? Is there anything in here that would be immediately dangerous? Is there anything that would be dangerous, not today or tomorrow, but a month or a year from now?
So you actually need a grading scale, and then ideally you compare it to what humans do. You have doctors do the same work and compare. AI is never going to be perfect. That shouldn't be the standard that you hold it to. Is it better than humans, or at least close, given that it never gets tired and never gets overloaded? That's the standard.
So, we did that with our own internal team of physicians first as part of quality control. Then, we formed an external panel with physicians from places like Stanford and Emory and a number of other very prestigious institutions. Each of them grades translations with some overlap, because there’s variation between the graders as well. And so we have a really rigorous and scientific way to evaluate these black boxes, and that’s the only thing that makes me feel comfortable.
Like you said, AI may not be perfect, but humans aren’t perfect either. So we have to hold AI to the same standards as we do humans and look for ways to check and regulate AI as you would human beings.
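One way to quantify the grader overlap described above is an inter-rater agreement statistic such as Cohen’s kappa, which measures how often two graders agree beyond what chance would predict. The grades below are made-up illustrations, not real panel data.

```python
# Toy sketch of checking agreement between two overlapping graders.
# The 1-5 grades here are invented for illustration only.
from sklearn.metrics import cohen_kappa_score

# Two physicians grade the same ten translations on a 1-5 scale.
grader_a = [5, 4, 5, 3, 5, 4, 5, 2, 5, 4]
grader_b = [5, 4, 4, 3, 5, 4, 5, 2, 5, 5]

# Kappa near 1 means the graders see the translations the same way;
# near 0 means their agreement is no better than chance.
kappa = cohen_kappa_score(grader_a, grader_b)
```

If kappa is low, the problem may be the rubric rather than the translations—graders who disagree with each other can’t reliably grade a model, which is why overlap between graders matters before comparing AI output to human output.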
Is there anything you’d like to close with?
Patient experience matters. It’s not just a check-the-box sort of thing. These are people who are arguably having one of the worst days of their lives, who are rightfully scared about what's happening, who may have an illness or a surgery or something that affects them for the rest of their life. You owe it to them to do right by them and their family, and to give them the most personalized experience, just as you would a family member.
So if you’re a doctor or a nurse listening to this, everybody around you probably asks you medical questions and relies on you to explain things. But not everybody has that. And oddly, software is about the only way we can do that, given the nurse and doctor shortages. And that's why I think Vital or anybody who's really pushing the edge on patient experience is important.
Listen to the full podcast here or on your favorite platform.