RamaOnHealthcare June 19, 2021

Andrew Eye, CEO of award-winning ClosedLoop.ai, shares insights into how AI is changing the healthcare landscape.

Andrew Eye, ClosedLoop.ai CEO

In true “David and Goliath” fashion, ClosedLoop.ai recently won top prize in the $1.6 million CMS AI Health Outcomes Challenge, the largest healthcare-focused AI challenge in history, beating out more than 300 of the world’s leading technology, healthcare, and pharmaceutical organizations, including IBM, Mayo Clinic, Geisinger, Merck, Accenture, and Deloitte. ClosedLoop CEO Andrew Eye shares a behind-the-scenes look at what went into this two-year-long challenge. Andrew offers his thoughts on how AI stands to revolutionize the healthcare landscape by using predictive analytics to reduce social and racial disparities in healthcare, surface insights to reduce unplanned hospital readmissions and adverse events, and solve the biggest and most costly problems facing healthcare today.

RamaOnHealthcare Q&A

You recently won the top prize in one of the largest healthcare-focused AI challenges in history – $1M awarded by CMS. Can you give us a look inside the AI Health Outcomes Challenge?

“Born to Win”

Two years ago, the Centers for Medicare & Medicaid Services (CMS) launched the Artificial Intelligence (AI) Health Outcomes Challenge, the largest healthcare-focused AI challenge in history. The $1.6 million contest prioritized creating “explainable artificial intelligence solutions to help front-line clinicians understand and trust AI-driven data feedback.”

From the moment the Challenge was announced, we knew we wanted to enter. It was even more clear when we had five customers reach out. Within 24 hours we had a dozen messages saying “Hey, this sounds right up your alley” and “isn’t this what you already do?” But we also had advisors who said, “You know, those things are hard to win” and cautioned it would be a distraction. But everything we would need to do was already part of our roadmap, which was not the case for everyone.

From day one we said we were “born to win.” This wasn’t because we thought it would be easy or guaranteed, but because building these kinds of models was precisely why ClosedLoop was founded.

A David Among Goliaths

The Challenge had three stages. The initial submission narrowed the field from 300+ entries to 25. We wondered whether we had enough credibility against the other teams, but we felt good about our submission. Stage 1 narrowed the field from 25 to seven. Some of the Stage 1 winners surprised us, especially when we saw that a couple of big names had been left out. We had expected to see IBM, a company we thought of as a front runner, along with others we thought could do it well and had the ammunition to bring it. At times we felt like David among the Goliaths, as though we were playing for second place. When none of the big names made the final seven, we realized we were there to win.

Key Breakthroughs

Every stage pushed us. In Stage 1, we pushed the system to its limits to get the highest accuracy possible, but we were building models at a scale we hadn’t run before. We wrangled data from 56,000 columns, created 193 billion data points, and made 460 million individual predictions. Our full models first ran end-to-end with one week left. In Stage 2, that part was easy. CMS sent us data, and within 48 hours we had models we thought could win, with AUCs already above 0.80.

In Stage 2, we had breakthroughs in our Patient Health Forecast interface and our work on algorithmic bias and fairness. For each area, partnerships were key. We got a boost in accuracy and key clinician feedback from Booz Allen Hamilton. And our work on bias and fairness included feedback from U.S. Sen. Cory Booker’s office as well as Ziad Obermeyer and David Kent, authors of seminal papers on algorithmic bias and fairness in healthcare AI. Our submission on each of these was so comprehensive, so rich, we couldn’t imagine someone else beating it.

Explainable AI and the Patient Health Forecast

For Stage 1, we started with a report we already had, but it didn’t last long; we scrapped it early on. We learned so much in user testing. We had to understand how physicians interpreted the predictions and how they would use them. The work also clarified our thinking around what we see as our responsibility. Was it to just make predictions? Or is it to surface those predictions in a way that physicians trust?

You won’t find these kinds of reports in an EMR, certainly not in today’s systems. Viewed through the lens of Jerry Chen’s The New Moats, the EMR is a “system of record,” a place to record clinical findings. But what we heard physicians say is that they want a “system of engagement,” which aligns with what inspired the Challenge in the first place. When it was first announced, CMS and the AAFP declared their goal: to “not only enhance the predictability of illnesses and diseases, but also enable providers to focus more time with patients.” We succeeded in doing that. One doctor who gave us feedback said, “If physicians can spend more time with this report, we might reinvigorate their love of medicine.”

The sheer power of the Patient Health Forecast became clear when we were reviewing cases. The biggest “aha” moment, even before we had shared it with anybody, came while walking through individual patient profiles and predictions. There was even one “oh, my God” case. I clearly remember this person. The data made it obvious that he was quite sick; you couldn’t miss it. But we were predicting mortality, and we had predicted his situation a full year ahead of time, including what his death would be related to. Sadly, it was a case where someone might have been able to mitigate things, had they known ahead of time.

It was a powerful and humbling example. But it’s why we’re here – to change the nature of what’s possible. We predict the future so people can change it.

What’s Next for ClosedLoop

Since winning the Challenge, it’s become clear we are the market leaders in AI for healthcare. It also shows the talent we have and how much the platform has matured in the last few years. We don’t get here without all the pieces and people that went into it. As for ClosedLoop, we’re at the beginning of something, not the end. We have an opportunity to partner with organizations who share our commitment and who want to take what we’ve created and use it to help their patients. There’s a lot of work to do, and there are people who are equally committed to taking it on. It’s going to be very exciting!

How is ClosedLoop leveraging AI and ML to solve some of healthcare’s biggest challenges?

ClosedLoop.ai is healthcare’s data science platform. We make it easy for healthcare organizations to use AI to improve outcomes and reduce costs. Purpose-built and dedicated to healthcare, we combine an intuitive end-to-end machine learning platform with a comprehensive library of healthcare-specific features and model templates. We address the specific issues healthcare data scientists face as they create AI-based solutions to the growing challenges in the U.S. healthcare industry.

The platform has already deployed hundreds of different predictive algorithms that now impact more than 3 million patients each day at organizations including HealthFirst, New York’s largest nonprofit health plan, and Medical Home Network, the largest Medicaid ACO in the U.S. ClosedLoop’s customers also share a strong commitment to achieving the Triple Aim: improving outcomes, reducing unnecessary costs, and enhancing the experience of care.

The CMS AI Health Outcomes Challenge was not an academic exercise; unplanned hospital admissions and adverse events are a $200 billion problem annually that affects nearly 32% of Medicare beneficiaries. Many of these adverse events can be prevented if they can be predicted ahead of time.

In fact, achieving the Triple Aim has become a national imperative. In 2019, U.S. healthcare spending reached $3.8 trillion, and it is projected to reach $6.2 trillion by 2028, nearly 20% of GDP. Such levels are unsustainable and threaten the entire economy. Research shows that more than 25% of this spending is wasted, meaning it does nothing to improve outcomes. To address the system’s underlying issues, the industry is evolving toward value-based healthcare centered on individuals, prevention, and chronic disease management. Success hinges on several elements, and AI is key among them.

ClosedLoop’s Explainable AI is key because it reimagines the concept of patient risk profiling and shifts from legacy risk “scores” to comprehensive, personalized forecasts delivered directly into a clinical workflow. Each forecast harnesses patient-specific data and surfaces key variables that explain precisely what risks a patient faces and why. It integrates relevant clinical details and links to specific interventions that clinical teams use to prevent adverse events, improve outcomes, and reduce unnecessary costs.
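
To make the idea of a forecast that "surfaces key variables" concrete, here is a minimal, illustrative sketch in Python. The feature names, synthetic data, and the simple linear-attribution method (coefficient times feature value) are all hypothetical; this is not ClosedLoop's actual model or interface, just one common way a per-patient risk prediction can be paired with an explanation of which variables drive it.

```python
# Toy "explainable risk forecast": predict a patient's risk and list the
# variables contributing most to that prediction. All names and data are
# synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["prior_admissions", "a1c", "age", "active_meds"]

# Synthetic training data: 500 patients, binary adverse-event label.
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.2, 0.8, 0.5, 0.3]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# "Forecast" for one patient: overall risk plus per-feature contributions
# (coefficient x feature value, a simple linear attribution).
patient = np.array([2.1, 1.4, -0.3, 0.9])
risk = model.predict_proba(patient.reshape(1, -1))[0, 1]
contrib = model.coef_[0] * patient

for name, c in sorted(zip(features, contrib), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")
print(f"predicted risk: {risk:.0%}")
```

In production systems, richer attribution methods (e.g., Shapley-value approaches) serve the same purpose: turning an opaque score into a ranked list of reasons a clinician can act on.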

Writ large, the exponential increase in healthcare data and the new federal regulations requiring health data sharing, combined with advances in explainable AI, will revolutionize AI-assisted healthcare. Liz Richter, Acting Administrator of CMS, put it best: “Clinicians are eager to use the latest innovations to better help identify patients at risk, provide higher quality care, and improve health outcomes. The use of artificial intelligence has the potential to achieve these aims by providing important information to clinicians that may be helpful in providing higher quality care.”

How can AI, when used responsibly, improve patient safety and care?

“Patient safety” describes the discipline that has emerged as the complexity in healthcare systems has grown and resulted in an increase in patient harm in healthcare facilities. The focus is to prevent and reduce risks, errors and harm that occur to patients during the provision of healthcare.

Unfortunately, more than twenty years after the Institute of Medicine’s To Err Is Human report, problems with safety remain all too common. Adverse events related to unsafe care represent one of the top ten causes of death and disability worldwide, despite the fact that between 33% and 50% appear preventable. The majority of healthcare harms fall into one of the following domains: healthcare-associated infections, adverse drug events, venous thromboembolism, surgical complications, pressure ulcers, falls, decompensation, and diagnostic errors.

AI can be a powerful tool to improve the safety of care. It can be used to identify patients at high risk of harm and guide prevention and early intervention strategies. Similarly, AI can be applied in outpatient, community, and home settings. When coupled with digital approaches, AI can improve communication between patients and providers and reduce the occurrence of preventable harms.

Data-driven ML algorithms have advantages over rule-based approaches for risk prediction: they allow simultaneous consideration of multiple data sources, especially the laboratory information, imaging, and continuous vital sign data available today, to identify predictors and outcomes. In addition, automated detection of safety issues, especially outside the hospital (e.g., drug surveillance), will make routine measurement possible, with data-driven AI playing an increasingly important role.

Can hospital systems be successful under Alternative Payment Models (APMs) without incorporating predictive analytics?

Data and analytics are increasingly at the heart of healthcare.

The disruption of fee-for-service healthcare is causing a massive reshaping of the industry, which is rapidly evolving toward a value-based system centered on individuals, prevention, and the management of chronic disease.

This evolution is not a one-time event. It hinges on several mutually reinforcing elements, including Alternative Payment Models. But by themselves, APMs are not enough. Healthcare’s new payment models will only succeed when organizations combine them with the medical treatments and health services that can achieve better outcomes, and when they are able to systematically improve the health of individuals and populations.

Success will demand new competencies, skills, and infrastructures. AI and analytics will be key among them. The industry’s traditional rules-based analytics and risk stratification methods are simply inadequate to the task. They return little in the way of guidance on the best actions to take and are incapable of systematically learning from data. In a data-driven world, when data becomes too big to know, the limits of rules-based approaches are fatal. Such approaches are only as good as the rules underlying them and only improve when experts come armed with better rules.

In fact, experts say that the competitive battleground is shifting: the moats of competitive advantage are no longer the platforms themselves (e.g., the system of record or engagement platform) but what you do with the data that comes from them. Such strategies, especially when executed recursively and repeatedly, create a composite intelligence that is difficult to replicate and can be an effective barrier against competitors. As healthcare systems use data to learn what works for whom, those that fail to invest in building these capabilities will find it difficult to compete.

How can technology help address the racial and economic disparities in healthcare?

It has been extensively documented that the U.S. healthcare system’s extremely high price tag fails to pay for excellent outcomes. Not only do we spend more to get less; health itself is unevenly distributed. Such disparities are persistent, pervasive, and expensive, creating a large and growing burden for both individuals and the nation’s economy. LaVeist et al. estimated that nearly one third of direct medical care costs for minorities in the U.S. was the result of health disparities, and that the loss of human potential, talent, and productivity, combined with the cost of medical care, exceeded $1.2 trillion between 2003 and 2006. Further, a 2014 report by the Agency for Healthcare Research and Quality (AHRQ) showed that disparities had barely changed, despite improved access to care under the Affordable Care Act and a better understanding of how to reduce them. Not only are health inequities persistent, they could actually increase, as the growth of minority populations in the U.S. raises the prospect of greater or even new disparities.

Experts are beginning to acknowledge that healthcare data, in particular the underlying data used to train AI models, reflects our healthcare system’s historical biases and inequities in terms of access and delivery of healthcare. Healthcare’s increasing reliance on algorithms (e.g. to target interventions, reward performance, and distribute resources) has put algorithmic bias and fairness in the spotlight, which is particularly important in settings where AI models could be used to help prioritize limited resources for early interventions.

In fact, a 2019 study by Ziad Obermeyer of UC Berkeley and colleagues at Brigham and Women’s Hospital demonstrated that using future healthcare costs as a proxy for health needs leads to racially biased models, and proposed methods to avoid this bias. Such findings must be taken into account when evaluating final models for bias based on race, ethnicity, gender, age, and disability status.
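
The mechanism behind that finding can be shown with a toy simulation. The numbers below are invented purely for illustration and are not from the Obermeyer study: two groups have identical underlying illness, but one group generates lower costs for the same level of illness, so a model that targets high predicted cost systematically selects fewer equally sick patients from that group.

```python
# Toy simulation of label-choice bias: predicting cost instead of illness
# disadvantages a group that spends less for the same level of sickness.
# All distributions and coefficients are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
group_b = rng.integers(0, 2, size=n).astype(bool)
illness = rng.gamma(2.0, 1.0, size=n)            # same distribution in both groups
cost = illness * np.where(group_b, 0.6, 1.0)     # Group B: less spending per illness

# "Model" = predict cost perfectly; enroll the top 10% by predicted cost.
cutoff = np.quantile(cost, 0.90)
selected = cost >= cutoff

# Among the sickest 10% of patients, Group B is selected far less often.
sick = illness >= np.quantile(illness, 0.90)
rate_a = selected[sick & ~group_b].mean()
rate_b = selected[sick & group_b].mean()
print(f"selection rate among sickest 10%: A={rate_a:.2f}, B={rate_b:.2f}")
```

Training on a need-based label (here, `illness` itself) instead of cost removes the gap, which is essentially the remedy the study proposed: choose labels that measure health needs rather than spending.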

The ClosedLoop platform has built-in capabilities for helping address bias and fairness. This is crucial for AI systems, particularly when they are used to inform decisions about allocating limited resources. Models that systematically underpredict risk for a particular group can lead to that group being unfairly denied resources.
With respect to algorithmic bias, our AI platform systematically assesses for bias in model design, data, and sampling, and makes sure to use measures (e.g., the Matthews Correlation Coefficient) that are insensitive to differences in disease prevalence between groups. To assess fairness, we developed a new metric for measuring fairness called Group Benefit Equality (GBE). Standard fairness metrics (e.g., Disparate Impact) are unsuited to healthcare situations. They ignore “false negative” errors, which can leave individuals who would benefit from an intervention unable to get it, or they use arbitrary benchmark thresholds that fail to adjust for instances where the alarm rate for the reference group is too low. The GBE metric addresses these shortcomings. It is also easily explained, has transparent procedures, and uses clearly defined thresholds to assess when models are biased.
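
A minimal sketch of these two kinds of checks in Python. The disparate impact ratio follows its standard definition; the `group_benefit` function is only an illustrative reading of the idea behind Group Benefit Equality (alarms relative to each group’s actual outcome rate), not ClosedLoop’s published formula, and the data is synthetic:

```python
# Two group-fairness checks on a binary risk model's "alarms" (intervention
# flags). Synthetic data; group_benefit is an illustrative metric, not GBE's
# official definition.
import numpy as np

def disparate_impact(flag, group):
    """Ratio of alarm rates between groups (note: ignores false negatives)."""
    return flag[group == 1].mean() / flag[group == 0].mean()

def group_benefit(flag, outcome, group, g):
    """Alarm rate relative to actual adverse-outcome rate within one group."""
    mask = group == g
    return flag[mask].mean() / outcome[mask].mean()

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=2000)
outcome = rng.random(2000) < np.where(group == 1, 0.20, 0.10)  # unequal prevalence
flag = rng.random(2000) < 0.15                                 # equal alarm rates

print("disparate impact:", round(disparate_impact(flag, group), 2))
print("benefit, group 0:", round(group_benefit(flag, outcome, group, 0), 2))
print("benefit, group 1:", round(group_benefit(flag, outcome, group, 1), 2))
```

Note what the example exposes: because both groups are flagged at the same rate, disparate impact looks close to 1.0 and signals no problem, yet the higher-prevalence group receives far fewer alarms per person who actually needs an intervention. A benefit-style metric catches exactly the shortfall the text describes.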

The ClosedLoop platform has been specifically engineered to help healthcare organizations identify and address algorithmic bias and fairness. We have integrated methods especially suited to healthcare (e.g., prevalence-agnostic metrics), so the platform identifies when bias sneaks in and helps clinical teams adjust to avoid it. Legacy approaches fail at this, so even well-intentioned teams can perpetuate and exacerbate the very disparities they hope to address.

 