The health care community has collectively determined that value-based care (VBC) is essential for achieving our common goals—which include more efficient interaction among patients, providers, and payers; lower, more predictable costs; the transparency necessary for a more consumer-like patient experience and effective regulatory oversight; healthy profitability; and, of course, consistently better patient outcomes.

VBC facilitates achievement of these goals because (at least theoretically) it correlates “facts on the ground” for any given episode with the actions most likely to result in desired health outcomes. Armed with this correlation, payers can make smarter clinical, financial, and engagement decisions—which, by extension, enable them to better optimize risk adjustment for patient health, provider efficiency, and payer profitability.

But while these principles hold true in theory, putting them into practice is another story. And that story, it is now clear, is contingent upon the application of AI to health care data.

Big, clean data

The success of any analytic undertaking depends on the scale, quality, relevance, completeness, and freshness of the data being analyzed. For one thing, as the saying goes: “garbage in, garbage out.” For another, analytics only produce accurate insight when the data they use reflects all the factors that might influence actions and outcomes. Scale matters, too, because only datasets large enough to yield statistically significant results can support accurate conclusions about the world as it really is.

At the heart of health care data is the patient “phenotype.” This phenotype encompasses all key patient attributes at any given moment: clinical and demographic data (including social determinants of health), the precipitating event(s), health impacts (as reflected in lab tests and imaging), procedures and/or pharmaceutical treatments, and the results thereof. This data is key because every VBC episode ultimately begins with a patient phenotype.

The data relevant to risk adjustment in a VBC context, however, goes well beyond the patient. To accurately determine the best course of action for any episode, analytics must also take into account provider histories. After all, different providers often achieve different outcomes with similarly coded courses of treatment—owing to factors such as skill and equipment. Understanding the impact of these factors is essential for accurate risk adjustment, as well as for promulgating best practices across providers.
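To make these inputs concrete, here is a minimal sketch of how a single episode record might pair a patient phenotype with provider history. It assumes Python, and the type names and fields (PatientPhenotype, ProviderHistory, EpisodeRecord) are illustrative assumptions, not a standard schema—just one way to picture the unit of analysis.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PatientPhenotype:
        """Snapshot of key patient attributes at a point in time (fields illustrative)."""
        age: int
        sex: str
        social_determinants: dict[str, str]   # e.g., housing status, food security
        diagnoses: list[str]                  # coded conditions (e.g., ICD-10)
        lab_results: dict[str, float]         # test name -> value
        treatments: list[str]                 # procedures and/or pharmaceuticals
        precipitating_event: Optional[str] = None

    @dataclass
    class ProviderHistory:
        """Provider-level factors that shape outcomes for similarly coded treatments."""
        provider_id: str
        episode_volume: int                   # experience with this episode type
        historical_outcome_score: float       # e.g., a risk-adjusted outcome index

    @dataclass
    class EpisodeRecord:
        """One VBC episode: the unit of analysis for risk adjustment."""
        phenotype: PatientPhenotype
        provider: ProviderHistory
        outcome: Optional[float] = None       # observed outcome, once known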

Obstacles to gathering this big, clean data abound. A lot of health care data is still exchanged non-digitally. Providers use systems from diverse vendors. Coding errors are commonplace. And much of the richest case-related information exists in the form of unstructured data.

Any effort to apply AI to risk adjustment in a VBC context must therefore start with the right strategy and technology for data capture across formats, systems, and providers.
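As a rough illustration of what such a capture strategy might look like in code, the sketch below routes records from heterogeneous sources into one normalized form. The source labels and field names are invented for illustration; a real pipeline would rely on standards such as HL7/FHIR and on NLP for unstructured documents.

    def normalize_record(raw: dict) -> dict:
        """Map a raw record from any source into a common structure (illustrative)."""
        source = raw.get("source")
        if source == "ehr_vendor_a":
            return {"patient_id": raw["pid"], "codes": raw["dx_codes"]}
        if source == "claims_feed":
            return {"patient_id": raw["member_id"], "codes": raw["icd10"]}
        if source == "scanned_chart":
            # Unstructured input: flag for OCR/NLP extraction rather than dropping it.
            return {"patient_id": raw["patient_id"], "codes": [], "needs_nlp": True}
        raise ValueError(f"Unknown source: {source}")

The point of funneling every input through one function (or one schema) is that downstream models always see the same shape of data, no matter which system or format it came from.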

Extracting value from the ore

The other piece of the AI value equation is the set of algorithmic operations performed on the data. These operations span multiple complex, ever-evolving disciplines that health care professionals have neither the time nor the need to understand in depth. Stakeholders in AI-related risk adjustment initiatives should nevertheless keep three things in mind:

  • AI transcends statistical modeling. With statistical modeling, data analysts must choose variables and methods in advance, and the resulting long cycles of trial and error may still never uncover the truly optimal correlation of inputs and actions. Techniques such as machine learning (ML) and predictive analytics, on the other hand, continuously self-optimize. Sufficiently advanced AI can thus generate the best possible insights for all targeted outcomes, and iteratively re-optimize those insights over time.
  • AI outputs can and should be customized to your audiences. All analytics generate probabilistic outputs. One of the beauties of AI is that these probabilities can be adjusted for any given audience or purpose. It may make sense, for example, to boost the confidence of clinicians who are skeptical about AI by aggressively minimizing false positives in exchange for a smaller number of high-certainty predictions. Conversely, for risk adjustment purposes, where tolerance for the occasional outlier is much higher, it’s better to accept more false positives in exchange for broader applicability to population risk (a sketch of this trade-off follows this list). These calls should be made by business decision-makers, even though technical experts will implement them.
  • AI can and should be implemented incrementally. AI is not an all-or-nothing proposition. Some providers use second-tier EHR systems that don’t integrate easily into data analytics engines. And for business reasons you may choose to, say, prioritize chronic illness episodes among Medicaid enrollees over trauma episodes among other members. That’s fine. Focusing on low-hanging fruit is a proven best practice. Plus, learnings from your initial implementations will be useful as you expand AI-driven risk adjustment across your business.
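The trade-off described in the second bullet can be sketched in a few lines: the same model’s probabilistic outputs are simply cut at different thresholds depending on the audience’s tolerance for false positives. The scores and threshold values below are illustrative assumptions, not tuned recommendations.

    def flag_cases(probabilities: list[float], threshold: float) -> list[int]:
        """Return indices of cases whose predicted probability clears the threshold."""
        return [i for i, p in enumerate(probabilities) if p >= threshold]

    scores = [0.15, 0.42, 0.78, 0.91, 0.55, 0.88]

    clinician_flags = flag_cases(scores, threshold=0.85)  # few, high-certainty flags
    risk_adj_flags = flag_cases(scores, threshold=0.40)   # broader net, more false positives

    print(clinician_flags)  # [3, 5]
    print(risk_adj_flags)   # [1, 2, 3, 4, 5]

Which threshold to use for which audience is exactly the kind of business call described above: technical teams implement it, but decision-makers set it.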

AI is already being used to great effect in many other industries, and early adopters in health care have reaped significant, quantifiable benefits. Every payer can profit from these benefits, and few can afford to forgo them. So it’s time to get started: do your research, build institutional consensus, and target some early use cases. You don’t want to fall too far behind the AI curve, especially as pressure keeps mounting to deliver better results at lower cost with greater predictability and transparency.

About the author

Greg Ficklin is vice president of provider operations for the government services division of Change Healthcare. In this role, he leads a global team (continental U.S., Manila, Puerto Rico, India), providing strategic and operational direction for all chart retrieval services and related functions, including call centers, scheduling, document management, field review, release of information (ROI) vendor management, training, and resource planning.

In addition to client, program, and project execution, he is responsible for driving continuous improvement, innovation, and transformation of Change Healthcare’s chart retrieval operations, and is keenly focused on tech-enablement opportunities leveraging artificial intelligence (AI), robotics, and intelligent process automation (IPA).

Greg has been with Change Healthcare for almost four years. He brings more than 25 years of diverse professional and business expertise gained at Fortune 100 corporations including Coca-Cola, AT&T, Kimberly-Clark, The Home Depot, General Electric, and Wells Fargo. His experience also includes roles with the consulting firms Accenture and BC Forward.

Greg received his MBA from Harvard Business School in Boston. He completed his undergraduate studies at Howard University in Washington, D.C., where he earned a BBA in Finance and the distinction of National Competitive Scholar.