
Health care has AI fever.

According to a report from CB Insights, health care AI companies brought in a record $2.5 billion worth of investments in the first quarter of 2021 across 111 deals, which is a 140 percent increase from the first quarter of 2020. Moreover, a survey of health care leaders from Intel found that 84 percent say their organization is currently, or will be, using AI—up from 37 percent in 2018. The survey found that the top potential uses of AI include predictive analytics for early intervention, clinical decision support and collaboration across multiple specialties.

It’s not just providers who are interested in AI. Payers are increasingly using AI to reduce expenses and identify members whose costs surpass $250,000 in a given year. In a 2020 Deloitte survey of life science companies, more than 50 percent of respondents said their investments in AI would increase. The technology is expected to have “a transformational impact on biopharma research and development (R&D),” Deloitte notes.

Moreover, experts say the COVID-19 pandemic has only sharpened health care executives’ appetite for AI solutions. A report from KPMG found that health care business leaders are overwhelmingly confident in AI’s ability to monitor the spread of COVID-19 cases (91 percent), help with vaccine development (94 percent) and support vaccine distribution (88 percent).

“We’ve seen the near elimination of competitive angst,” says John Halamka, MD, President, Mayo Clinic Platform. “With COVID, we discovered we needed to come together as a coalition, as a society, to deal with COVID response. You saw a whole lot of non-obvious partnerships, collaborations and joint ventures happen during COVID.”

The best example of those kinds of partnerships, Halamka notes, is that Google and Apple came together to create the COVID exposure notification system. These kinds of collaborations, he says, will spur the industry forward in developing and adopting AI.

Of course, Halamka and others acknowledge that AI adoption in health care is still nascent, in particular on the clinical side. Concerns about the ability to integrate into the clinical workflow, data biases and integrity, a lack of an industrywide ethics framework and regulation, and costs and return on investment (ROI), all remain significant barriers to increasing AI adoption.

In part one of a two-part series, Health Evolution will look in-depth at a number of the barriers preventing wider adoption of AI in clinical settings. In part two, we will examine the most promising clinical areas for AI usage.

Barriers to clinical use of AI

Clinical workflow and poorly defined use cases

Michael Matheny, MD, Co-Director of the Center for Improving the Public’s Health through Informatics and Associate Professor in the Departments of Biomedical Informatics, Medicine, and Biostatistics at Vanderbilt University Medical Center, is fairly blunt about the challenges preventing wider adoption of AI.

“Trust in AI from front line clinical communities is really low,” Matheny said. “From the end user perspective, we want to see tools that are relevant and can be integrated into the workflow to help reduce our cognitive burden of the tasks we have to do. We want them to be highly accurate, and thus safe to use, where there’s not a lot of error when using their judgments, and we want them to be unobtrusive.”

Suchi Saria, Founder and CEO of Bayesian Health, an AI-based clinical decision support platform, and John C. Malone Endowed Chair and Director of the Machine Learning and Healthcare Lab at Johns Hopkins, agrees that one of the big issues that has to be solved is trust. “How do we get [clinicians] to adopt and trust it? That means many things, but a big part of that is having a research-first approach, infrastructure to do rigorous evaluations, and scaling up high-quality, validated ideas,” she says.

The data science and developer communities have yet to find common working ground with frontline clinicians, Matheny says, and that disconnect is feeding the lack of trust. Related to this challenge is the fact that many AI use cases are poorly defined, says Steven Lin, MD, Founder and Executive Director of the Stanford Healthcare AI Applied Research Team (HEA3RT). Too often, he says, developers and data scientists are building models opportunistically rather than identifying a problem that needs to be solved.

“We have developers coming to us who are really excited and they tell us their model can do X,Y and Z, only for us to tell them, ‘That’s actually not a problem we have in health care right now.’ They didn’t start with an articulated problem that is aligned with the pressing challenges of clinicians, patients and health systems today,” Lin says.

Greg Albers, MD, co-founder of the Stanford Stroke Center and Chairman and Scientific Lead of RapidAI, an AI company that specializes in stroke care and complex diseases, says that physicians can get inundated with an abundance of clinical alerts related to different AI modules and programs. “It’s important to get the AI to work together so rather than the physician getting blasted with a whole bunch of messages, it sends them a tailored message that makes more sense for an individual patient,” Albers says. “And then figure out how to get that information to them in the most seamless way on an interface that allows them to have optimal workflow.”

Data integrity and biases

There is a reason that clinicians do not fully trust clinical AI yet. The reality is that AI and machine learning algorithms are not foolproof. Researchers from the University of Cambridge in the U.K. found that not a single AI model claiming to detect COVID-19 from chest imaging was “of potential clinical use due to methodological flaws and/or underlying biases.” These problems with the credibility of AI models are pervasive.

“Everyone wants to use these tools, but the literature, the clinical trial data and the bedrock foundation of success is much less solid,” Matheny says. He notes that there have been successes in imaging informatics, particularly with X-rays, CT scans and eye examinations, which have made other clinical specialties understand the potential power of the technology. But he notes, “You don’t see that level of accuracy in some of the other applications of AI yet, and so I think it sort of inflates expectations when you see it get knocked out of the park in a couple of specific areas.”

Saria at Johns Hopkins says that those developing AI models for health care must account for a number of data modalities, many of which are messy. “How do you integrate it all together to drive inferences that frontline experts would find useful? You want to be rigorous and principled in integrating this kind of data,” she notes.

The challenge for health care organizations is to curate this wide swath of information—whether it’s from an EHR or a Fitbit—and separate the signal from the noise, Halamka says. Moreover, an increasingly critical issue is that algorithms can be biased and deepen socioeconomic inequities in health care. The researchers from Cambridge found that 55 of the 62 clinical AI studies they systematically reviewed carried a “high risk of bias in at least one domain.”

Matheny says COVID increased awareness among health care AI experts of the racial biases in high-profile algorithms and models. Lin adds that even models designed to produce positive results can have unintended side effects. For example, when algorithms rely heavily on EHR data, they leave out individuals who do not have access to care.
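
One common way such biases surface is when a model discriminates well overall but markedly worse for particular groups. Below is a minimal sketch of a subgroup performance audit, assuming pandas and scikit-learn and hypothetical column names (risk_score, outcome, race) in a scored patient extract; it is illustrative only, not a complete fairness assessment.

```python
# Minimal sketch of a subgroup performance audit.
# Assumes a hypothetical CSV extract "scored_patients.csv" containing model
# scores ("risk_score"), observed outcomes ("outcome"), and a demographic
# column ("race"). Illustrative only, not a full fairness evaluation.
import pandas as pd
from sklearn.metrics import roc_auc_score

scored = pd.read_csv("scored_patients.csv")  # hypothetical scored patient table

# Compare discrimination of the same model across demographic groups; large
# gaps are one signal that the model may work less well for some populations.
for group, rows in scored.groupby("race"):
    auroc = roc_auc_score(rows["outcome"], rows["risk_score"])
    print(f"{group}: AUROC={auroc:.2f} (n={len(rows)})")
```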

Related to this challenge of data integrity and biases, Lin notes, is the issue of data generalizability. Developers will build a model that performs well, but it is not generalizable: the same model can see its performance drop at a staggering rate when applied to a different system. “Like any data-driven tool, AI is extremely vulnerable to the way the algorithms are built in the first place. They will only work as well as the data that feeds into the initial training set allows them to work. Before they can be implemented, models need to be validated against the local data architecture and the local populations,” he says.
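
In practice, the check Lin describes often takes the form of external validation: a model developed at one site is scored, unchanged, against data from the adopting site before rollout. Below is a minimal sketch of that comparison, assuming pandas and scikit-learn and hypothetical files and column names (site_a.csv, site_b.csv, a binary outcome label, and a small feature set); it is a sketch of the idea, not a production validation pipeline.

```python
# Minimal sketch of internal vs. external validation of a clinical risk model.
# File names, feature names, and the outcome column are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

FEATURES = ["age", "creatinine", "heart_rate"]  # hypothetical feature set
LABEL = "outcome"

site_a = pd.read_csv("site_a.csv")  # data from the system where the model was built
site_b = pd.read_csv("site_b.csv")  # data from the system considering adoption

# Internal validation: hold out part of site A.
X_train, X_test, y_train, y_test = train_test_split(
    site_a[FEATURES], site_a[LABEL], test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Site A (internal) AUROC:",
      roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# External validation: score the unchanged model on site B before any local rollout.
print("Site B (external) AUROC:",
      roc_auc_score(site_b[LABEL], model.predict_proba(site_b[FEATURES])[:, 1]))
```

A large gap between the internal and external numbers is the performance drop Lin warns about, and a cue to recalibrate or retrain against local data before deployment.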

Ethics and regulation

The missing piece of the puzzle in pushing AI to wider adoption could be an industry standard or framework around the ethical and appropriate use of the technology. Halamka likens what is needed to the nutrition label on a can of soup.

“If you were to get a can of soup, you would look at the label on the back and would say, ‘Oh my God, there’s a thousand grams of sodium and fifty grams of fat. I don’t want to eat this soup.’ There is no such nutrition label on an AI algorithm, and they are therefore often black boxes. You don’t know what they are measuring, how effective they are or if they were developed for the patient population you are trying to treat,” Halamka says. “There needs to be increasing transparency on AI algorithms so we pair the right algorithm with the right condition and patient demographics.”
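
Efforts such as “model cards” aim to provide exactly this kind of label. Below is a minimal sketch of what such a label might capture in code; the field names and example values are hypothetical illustrations, not a standard schema.

```python
# A minimal sketch of a "nutrition label" (model card) for a clinical algorithm.
# Field names and example values are hypothetical, not a standardized format.
from dataclasses import dataclass, field

@dataclass
class ModelLabel:
    name: str
    intended_use: str                # the clinical question the model answers
    training_population: str         # who the model was developed on
    outcome_measured: str
    performance: dict = field(default_factory=dict)       # e.g. {"AUROC": 0.81}
    known_limitations: list = field(default_factory=list)

label = ModelLabel(
    name="Sepsis early-warning score",
    intended_use="Flag adult inpatients at elevated risk of sepsis",
    training_population="Adults admitted to a single academic medical center, 2016-2019",
    outcome_measured="Sepsis criteria met within 48 hours",
    performance={"AUROC": 0.81},
    known_limitations=["Not validated in pediatric or obstetric populations"],
)
print(label)
```

The point of such a label is the pairing Halamka describes: a health system can see at a glance whether the population and condition a model was built for match the one it is about to be used on.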

In this spirit, the World Health Organization released “Ethics and Governance of Artificial Intelligence for Health” in June 2021. The report says that while AI has a lot of potential for solving clinical challenges, improving public health interventions and much more, it can also be used to further health inequities, widen the digital divide, and undermine human autonomy in clinical decision making when used incorrectly and unethically. WHO says six core principles can promote the ethical use of AI: 1) protecting human autonomy; 2) promoting human well-being, human safety, and the public interest; 3) ensuring transparency, explainability, and intelligibility; 4) fostering responsibility and accountability; 5) ensuring inclusiveness and equity; and 6) promoting AI that is responsive and sustainable.

Matheny co-authored another widely circulated industry report, Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril, for the National Academy of Medicine in 2019. He says that since that report was written, best practices have emerged around model development, data management, data transparency and reporting. What’s not quite there is a way to bring all of those best practices together so that a health system can reassure patients, clinicians, and other stakeholders that the AI within its system is being used appropriately.

Yet another significant challenge for health systems is de-identifying the large amounts of data needed to run these AI algorithms in a way that protects patient privacy. Just as adoption of AI in health care is still in its early stages, so too is regulation. The U.S. Food and Drug Administration has taken steps to increase oversight of AI-enabled devices, including releasing an action plan in January 2021. However, the pace of AI development has exceeded the rate at which these regulatory guidelines have been established.

Costs/ROI

The biggest barrier to AI adoption among the health care leaders surveyed by Intel was cost. It’s not just a matter of implementing the technology and maintaining it year after year; as researchers writing in Translational Vision Science & Technology note, organizations will also have to recruit an AI-competent physician workforce. A survey of health care executives from KPMG found that high cost and a lack of workforce talent were the biggest barriers to adoption of AI in the industry.

When it comes to yielding a return on AI-related investments, Lin says ROI means different things in a fee-for-service world than in a value-based care health system. Organizations that rely on fee-for-service reimbursements (the vast majority of health care providers) will see ROI come in the form of reduced administrative burden. This allows health systems to bring in more patients at a quicker rate, he says, and it may be a major reason why AI experts say clinical adoption of the technology lags behind administrative adoption.

For those in the value-based world, it’s a different story. “If you’re thinking about cutting costs and cutting utilization…the technologies that are going to be most helpful are the ones that keep patients out of hospitals and the ones that keep patients from having unnecessary utilization or unnecessary visits. This is predictive analytics, which allow you to intervene before hospitalizations. They also may be tools to support continuous primary care vs. episodic care,” Lin says.

Many health systems are approaching AI adoption with improving outcomes in mind rather than cost concerns, Matheny says. However, he acknowledges that health system leaders expect an estimate of the impact of implementing this technology. They want to know which workflow processes will change and who will be needed to oversee the system.

“If you’re bringing on a bunch of extra [full time equivalent staff members] to manage the AI system because it is throwing out all these recommendations to hire nurse managers to manage patient populations and address these additional issues, you might not actually see any real ROI. You have a cost from AI implementation and sustainment standpoint and costs from extra staffing to manage the information coming in. One of the areas of growth for this domain is how to do this smartly so you reduce the cognitive burden and the FTE burden,” Matheny says.

 

Next week: Part two on the promising use cases of AI in health care
