
With Proper Use, Artificial Intelligence Improves Health Plan Operations

Health plans must fully understand artificial intelligence tools before using them to assist with coverage determinations or administrative tasks.


While artificial intelligence is far from a new concept, the healthcare industry has recently latched on to the technology, leading to increased adoption across many sectors. Artificial intelligence has the potential to lower healthcare costs and streamline administrative processes for health plans, but it can also have unintended, negative consequences for consumers.

Artificial intelligence use in healthcare is inevitable, according to Lindsey Fetzer, an attorney at Bass, Berry & Sims.

“It’s well-established that AI will be used,” she told HealthPayerIntelligence. “How it will be used and the guardrails and regulatory environment around that is still developing. With this technology that is developing so rapidly, regulators, lawyers, and people in healthcare are all struggling to keep up with it and understand how it impacts use.”

In the Medicare Advantage space, health plans have commonly used artificial intelligence to evaluate or assist with coverage determinations. The practice has drawn scrutiny, with some complaints escalating to legal action.

For example, Humana is facing a class action lawsuit over its use of naviHealth’s nH Predict AI Model to determine Medicare Advantage beneficiaries’ coverage in post-acute care settings. The lawsuit alleges that the tool is inaccurate and not based on patients’ medical needs, leading to inappropriate denials of necessary care.

The plaintiffs allege that the payer continues to use the artificial intelligence model despite awareness of its shortcomings because it knows that few policyholders will appeal the denied claims and most will pay out of pocket or forgo care.

“We’re trending towards the idea that it’s fine to use AI and other tools for certain decisions about how long you might need to be in a nursing home or how long you might need some type of care, but it’s not the only basis to be used as a way to make an adverse or coverage determination,” Fetzer noted.

However, utilizing artificial intelligence has its positives, too. Medicare Advantage plans have used the technology to improve customer service and respond to beneficiary questions quickly. In addition, artificial intelligence has helped draft summaries of patient interactions, which plans can then use to identify trends and issues that need to be addressed.

“[AI] has tremendous capability to translate formal healthcare language, like disease codes, into natural language and then put it back,” Fetzer explained. “It can also be used to consider social determinants of health and other health equity considerations, but all of that depends on the data we’re using to train the models.”

When artificial intelligence is used correctly, it can help plans improve patient outcomes and reduce administrative costs. With lower administrative costs, health plans can direct savings toward supplemental benefits and other member-oriented solutions, Fetzer shared.

Using artificial intelligence in an ethical way can be tricky, though.

“AI doesn’t work if people don’t trust AI,” Fetzer indicated. “How to trust AI is something that is being considered by all regulators within and outside of the healthcare space. It’s something innately difficult to regulate, given how the technology is changing.”

While there is no one-size-fits-all approach to artificial intelligence, increasing transparency can help build trust in the tools. Health plans should also prioritize preventing bias in artificial intelligence models to ensure equitable outcomes. While regulators and stakeholders are still determining what personally identifiable information and other types of data should be used to train these models, it is clear that more guardrails and regulations are needed.

Before deploying artificial intelligence for internal processes, health plans should fully understand the tool and its capabilities. Taking a cautious approach to the technology can help avoid negative impacts on beneficiaries and care access.

Sentiment toward artificial intelligence use in healthcare has generally been more negative than positive, and better messaging is needed to position the technology as a tool for good, Fetzer said.

“[Medicare Advantage] organizations or any organization developing an AI strategy needs to think through what the corporate governance of AI looks like, what the strategy is, and how to teach that strategy so that humans understand it,” she concluded.

“All companies should be careful not to roll out a tool until they understand what that tool is and have some comfort around whether the information used to train the tool creates negative biases or other negative outcomes.”