Vital’s Take on the AI Executive Order

Nicholas Sterling
November 6th, 2023

On October 30, 2023, the Biden administration signed an executive order outlining artificial intelligence (AI) standards that aim to seize the promise and manage the risk of AI. While the executive order is broad in its approach, it provides important guidelines and considerations for US lawmakers, business leaders, and the general public as the capabilities and impact of AI continue to unfold.

The executive order directs action in the following areas:

  • New standards for AI safety and security

  • Protecting Americans’ privacy

  • Advancing equity and civil rights

  • Standing up for consumers, patients, and students

  • Supporting workers

  • Promoting innovation and competition

  • Advancing American leadership abroad

  • Ensuring responsible and effective government use of AI

We’ll get to some of these in a minute. But to oversimplify, the order addresses fears around the potential of AI to do harm (e.g., displacing workers, threatening national security, and eroding civil liberties) and sets the stage for responsible innovation (e.g., privacy protections, international cooperation, and safe use).

The U.S. is not alone in calling for a hardened look at the future and impact of AI: the European Parliament adopted its position on the EU’s AI Act in June, and China’s generative AI regulations took effect in August. So how do we, collectively, ensure that AI is used in a secure, safe, and trustworthy manner that benefits humanity? And how do these considerations translate to AI applications in health care?

Vital’s response to the executive order

The executive order sets initial guardrails and is a step forward for the U.S. health care delivery system. But Vital takes it a step further.

Since our founding, Vital’s core systems have been centered on secure, safe, and accurate AI innovation. For years and for millions of patients, we have successfully and safely used AI to enhance the patient experience, improve outcomes, and drive meaningful value for health systems. We believe our approach to security, safety, and accuracy is the gold standard that health systems should expect when adopting AI into their tech stacks.

Let’s look at how we’re already leading in AI best practices

The Biden administration has called for the creation of a Department of Health and Human Services (HHS) safety program to receive reports of — and act to remedy — harms or unsafe health care practices involving AI.

New standards for AI safety and security

Regarding safety, Vital’s AI models:

  1. Are trained based on sound scientific methods, including first principles for rational model design, as well as datasets having diverse patient populations to ensure generalizability.

  2. Use rigorous standards for model validation, including separation of training and testing datasets and performance evaluation across multiple facilities. We partner with physicians to evaluate our AI-based tools and publish in peer-reviewed scientific journals.

  3. Are continuously monitored to evaluate how live models are performing in the real world. 

  4. Utilize internal quality controls. Vital has systems in place to monitor responsiveness and reliability, to keep our models performing safely.

As for security, Vital utilizes the highest standards in the industry to protect personal data. We are HIPAA-, HITRUST-, and SOC 2-compliant. You can learn more about our security systems on our website.

Protecting Americans’ privacy

Vital’s co-founder and CEO Aaron Patzer has a long-standing track record of keeping complex, sensitive, and highly personal data safe and private, with a clean record dating back decades to the #1 personal finance app he founded. He enacted cryptographic systems, security principles, and auditing there, and we’re doing a variation of those at Vital. For Aaron, security and privacy are table stakes: a must before you even start to build software on top.

Advancing equity and civil rights

As Nicol Turner Lee commented, “the tireless efforts of researchers and civil society advocates have finally made it into the nation’s most aggressive proposal to advance equity, privacy, and national security in artificial intelligence (AI) systems.” Vital’s models are trained on diverse patient populations, and we will continue to push for advancing equity within health care delivery systems.

Standing up for consumers, patients, and students

The order calls for advancing the responsible use of AI in healthcare. Vital does what’s best for patients — and that, in turn, delivers tremendous value for our health system clients. We make consumer-grade software that engages patients where they are and in a way that makes sense to them. We summarize complex medical documents at a basic reading level. We send test results safely, without significant delays, and with explanations that are relevant and meaningful to the individual. We set expectations for both patients and families. In short, we’re here to make the patient experience better when patients’ needs for simplicity and clarity are greatest.

Supporting workers 

An October 2023 study found that provider burnout and turnover related to EHR inefficiency ranked among the top five operational burdens cited by its CIO respondents. Healthcare software can and should augment clinician capabilities rather than undermine them. Vital uses AI to make the work lives of those who deliver care easier, not harder. And we do so in a way that is safe and accurate for both providers and patients.

Our software sits on top of existing EHRs. Vital translates data from the EHR and puts it to use in a user-friendly way. We save staff valuable time by automatically handling tasks like clearly communicating with patients and families and routing service requests to the correct department. As one patient commented, “[Vital] was the primary way that I was updated on my treatment throughout the day. Knowing what to expect from the website meant that I didn’t need to bother the nurses for updates.” Amazing, right?! AI can be a powerful tool to support healthcare professionals.

A quick summary

Vital delivers high-quality, safe engagements across the care continuum — and has proven that you can elevate the health and happiness of the communities you serve through the use of AI. It’s important to remember, amid the noise of AI discourse, that it’s AI that enables us to provide content and experience interventions that are highly personalized and relevant throughout each step of the care journey. This leads to higher patient engagement and better outcomes. If that isn’t seizing the promise and managing the risk of AI, I don’t know what is.

To learn more, visit our website or schedule a meeting.
