Recent reports by the Office of Inspector General (OIG) have sounded the alarm on billions of dollars in potentially improper payments to Medicare Advantage organizations (MAOs), primarily due to unsubstantiated or non-compliant diagnoses. The Centers for Medicare & Medicaid Services (CMS) estimates that such practices could account for nearly 10 percent of payments made to these organizations. This comes at a time when Medicare is facing growing solvency and affordability challenges as enrollment and spending continue to rise.
These factors combined have prompted a major crackdown on inaccurate or fraudulent risk-adjustment scores. In addition to CMS increasing the number of Risk Adjustment Data Validation (RADV) audits it performs each year, OIG has begun its own targeted audits aimed at diagnoses that fail to comply with federal regulations. Preparing for and undergoing an audit is an enormous task with significant consequences. Health plans may see reductions in their monthly CMS payments, damages of up to three times the government's losses, and a civil monetary penalty of $5,500 to $11,000 for each false claim. Lawsuits and negative media attention can also damage an organization's reputation and brand and hurt its ability to attract and retain members.
With this increased level of scrutiny, using analytics for the proactive review of and oversight into coding and submission processes has become more critical than ever. But instead of only looking for undercoding or gaps, health care organizations need to look for overcoding as well. In this landscape, even plans that did not think they were on the radar for RADV may now be at risk—and all plans should prepare for some kind of audit each year.
OIG audit techniques: What you need to know
OIG is increasingly using data analysis to audit noncompliant codes, and recently demonstrated in small pilots that it can easily uncover these codes using data alone. The graph below represents one such audit in which the agency targeted seven key areas. The first three—acute stroke, acute heart attack, and acute stroke and heart attack combined—were geared toward finding acute conditions documented in a provider’s office without record of an inpatient stay.
The next three—embolism, vascular claudication, and major depressive disorder—pertained to diagnoses documented by providers in the absence of medication, or in the presence of a different medication than would have been expected. For embolism, OIG was looking for this condition without an anticoagulant. For vascular disease, it was looking for this condition with a medication that suggested it should have been a different condition. For major depression, it was looking for this condition without a prescribed antidepressant.
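As a rough illustration, the medication-consistency checks described above can be sketched as a simple rule set. The condition names, drug classes, and member records below are assumptions for illustration, not OIG's actual criteria:

```python
# Hypothetical sketch of OIG-style medication-consistency checks: flag members
# whose submitted diagnoses lack the pharmacy fills you would expect to see.
# All condition names, drug classes, and member data here are illustrative.

# Expected drug classes per flagged condition (assumed, not OIG's lists)
EXPECTED_RX = {
    "embolism": {"anticoagulant"},
    "major_depressive_disorder": {"antidepressant"},
}

def flag_missing_rx(members):
    """Return (member_id, condition) pairs where a diagnosis was submitted
    but no expected drug class appears in the member's pharmacy history."""
    flags = []
    for m in members:
        for condition, expected in EXPECTED_RX.items():
            if condition in m["diagnoses"] and not expected & m["rx_classes"]:
                flags.append((m["id"], condition))
    return flags

members = [
    {"id": "M001", "diagnoses": {"embolism"}, "rx_classes": {"anticoagulant"}},
    {"id": "M002", "diagnoses": {"major_depressive_disorder"}, "rx_classes": set()},
]
print(flag_missing_rx(members))  # [('M002', 'major_depressive_disorder')]
```

M001 is not flagged because the expected anticoagulant is present; M002 is flagged because the depression diagnosis appears without any antidepressant fill.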
The final analysis was around ‘fat finger’ diagnoses, which OIG was able to find very easily simply by looking for common ‘flip flops’ (transposed digits) and then comparing them against other sources of clinical information to determine whether the potentially transposed code was actually valid. Across the board, these studies showed a 60 percent hit rate (86 percent for some codes). Going forward, we expect OIG to scale this pilot program and use it as a model for how it will assess health plans in the future.
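The ‘flip flop’ check lends itself to a similar sketch: generate adjacent-character transpositions of each submitted code and see whether a variant, rather than the code itself, is supported by the rest of the clinical record. The code strings here are arbitrary examples, not a claim about specific diagnoses:

```python
# Illustrative sketch of the 'fat finger' check: generate adjacent-character
# transpositions ("flip flops") of a submitted code and see whether a variant
# is better supported by other clinical data. Code strings are made up.

def transpositions(code):
    """All strings reachable by swapping two adjacent characters."""
    variants = set()
    for i in range(len(code) - 1):
        swapped = code[:i] + code[i + 1] + code[i] + code[i + 2:]
        if swapped != code:
            variants.add(swapped)
    return variants

def flag_possible_transpositions(submitted_codes, clinically_supported):
    """Flag submitted codes that are unsupported themselves but sit one
    transposition away from a code the rest of the record supports."""
    flags = []
    for code in submitted_codes:
        if code not in clinically_supported:
            matches = transpositions(code) & clinically_supported
            if matches:
                flags.append((code, sorted(matches)))
    return flags

# A submitted "I469" whose record only supports "I649" (digits transposed)
print(flag_possible_transpositions({"I469"}, {"I649"}))  # [('I469', ['I649'])]
```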
Compliance-focused chase analytics
The good news is that OIG has published the results of this study. When building chase lists, health plans should be sure to include these compliance red flags, retrieving charts from providers that have coded these unlikely conditions and then having coders validate the diagnoses in question. To this end, we at Episource have built our own Compliance Pack using these rules along with additional rules we’ve gathered from other OIG publications.
For the example of acute stroke, we found this condition flagged as non-compliant at a rate of 21 per thousand, telling us we should be pulling those charts specifically and assessing them for overcoding. There is also a list of other high-risk HCCs, like heart attack, where you wouldn’t expect to see the condition without an inpatient stay. While some codes with very high hit rates, like acute stroke, may be filtered at the point of submission, others need to be checked and validated by coders. To enable this, analytics tools need to include charts containing suspected non-compliant codes when building chase lists.
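A chase list that includes both traditional suspect chases and these compliance red flags might look something like the following sketch; the field names and flag rules are illustrative assumptions, not Episource's actual data model:

```python
# Hedged sketch of a chase list that 'looks both ways': traditional
# suspected-gap chases (upside) plus compliance-flag chases (downside).
# Field names and flag rules are assumptions for illustration.

def build_chase_list(members):
    """Prioritize charts to retrieve for both gaps and compliance flags."""
    chases = []
    for m in members:
        if m.get("suspected_gaps"):
            chases.append({"member": m["id"], "reason": "gap",
                           "codes": m["suspected_gaps"]})
        if m.get("compliance_flags"):  # e.g. acute stroke w/o inpatient stay
            chases.append({"member": m["id"], "reason": "compliance",
                           "codes": m["compliance_flags"]})
    return chases

members = [
    {"id": "M001", "suspected_gaps": ["diabetes_hcc"], "compliance_flags": []},
    {"id": "M002", "suspected_gaps": [], "compliance_flags": ["acute_stroke"]},
]
for chase in build_chase_list(members):
    print(chase)
```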
Retrospective reviews: Using tools to ‘look both ways’
When we talk about analytics, we’re generally referring to upside analytics. How is member RAF trending? Where are there documentation gaps for chronic conditions? Where do we see clinical suspects? Which charts do we need to chase, and which in-home assessments do we need to complete? These are all important questions, but that same analytics process can and should be done for compliance as well—and this means ‘looking both ways’ to remove unsubstantiated codes.
Traditionally, to do a legitimate two-way review, you had to code everything in the chart, which was much slower and more expensive than year-wise capture. The problem with year-wise capture is that if there are multiple instances of overcoding in a single PDF or for a single member, it may indicate one of those codes as a delete, but not necessarily every single one. Encounter-wise coding, on the other hand, catches them all, but those two-way reviews are slow and much more expensive because they take a great deal more effort.
NLP has sped up coding, but often by taking an upside-only view, i.e., only having coders review new codes. This does not allow you to catch any deletes, nor does it address multiple instances of a code that should have been deleted. In this new landscape, NLP can and should be used to look both ways, allowing coders to see which claims codes are in the chart to assess for overcoding, showing the results of work both upside and downside.
The way we solve this issue in our NLP SaaS Coding Tool at Episource is by showing the coder which codes are in claims and which codes are not.
This allows coders to:
- Elect to skip over codes that are in claims and substantiated in a chart
- Capture codes that are not in claims that are substantiated in the chart
- Delete codes that are in claims but are not substantiated in the chart
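The three outcomes above reduce to simple set logic over the codes on claims versus the codes substantiated in the chart. This is only a minimal sketch of the comparison under that assumption, not Episource's actual tool:

```python
# Minimal sketch of the two-way comparison: bucket each code as skip,
# add, or delete given the claims codes and the chart-substantiated codes.

def two_way_review(claims_codes, chart_codes):
    return {
        "skip":   sorted(claims_codes & chart_codes),  # in claims, substantiated
        "add":    sorted(chart_codes - claims_codes),  # substantiated, not on claims
        "delete": sorted(claims_codes - chart_codes),  # on claims, unsubstantiated
    }

result = two_way_review(
    claims_codes={"E11.9", "I63.9", "I21.9"},
    chart_codes={"E11.9", "I10"},
)
print(result)  # {'skip': ['E11.9'], 'add': ['I10'], 'delete': ['I21.9', 'I63.9']}
```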
Retrospective reviews: Add/delete outcome results
Episource does two-way reviews across many different plans and tends to find more adds than deletes across the board. Even so, the deletes translate to 2.5 to 3 codes that need to be removed for every hundred members. When you multiply that across a large program, you can see that there is a lot of inaccurate provider coding that needs to be removed, and OIG can find it just as quickly.
So, although we’re seeing a fairly large net overall increase in HCC capture—and the net RAF impact is positive—those deletes are frequent enough that it has to be part of a compliant risk-adjustment program.
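A quick back-of-the-envelope calculation shows the scale; the 200,000-member program size below is an assumed figure for illustration:

```python
# Back-of-the-envelope arithmetic from the figures above: 2.5 to 3 deletes
# per hundred members, scaled to a hypothetical 200,000-member program.
deletes_per_100 = (2.5, 3.0)
members = 200_000  # assumed program size for illustration
low, high = (rate / 100 * members for rate in deletes_per_100)
print(f"Expected deletes: {low:,.0f} to {high:,.0f}")  # Expected deletes: 5,000 to 6,000
```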
Closing the loop: Improving future documentation
To close the loop, we need to take it from coding back to the provider. Many of our clients have provider education and clinical documentation improvement (CDI) programs. However, most of those are ‘risk-adjustment 101’: for example, something as simple as reminding providers to write the word ‘morbid’ in front of obesity. But what we really need to be doing is training them on that handful of codes that shouldn’t be in the chart at all.
The same data and tech processes that help shape provider education programs can also be used to identify the specific codes and providers that need the most help. It’s very easy to find the frequency of deletes by provider and HCC code and deliver custom, practice-specific training for provider groups. But if we don’t provide that training and build programs around documentation completeness alone, we may actually be adding to the problem, as providers may think the risk is only about missing codes.
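Finding delete frequency by provider and HCC is straightforward once delete outcomes are logged; here is a minimal sketch, with made-up provider and HCC identifiers:

```python
# Sketch of targeting provider education: count deletes by provider and
# HCC to see where training will help most. Data shape is an assumption.
from collections import Counter

deletes = [  # one (provider_id, hcc) pair per deleted code, illustrative
    ("provA", "HCC100"), ("provA", "HCC100"), ("provA", "HCC96"),
    ("provB", "HCC100"),
]
by_provider_hcc = Counter(deletes)
for (provider, hcc), n in by_provider_hcc.most_common():
    print(provider, hcc, n)
```

Sorting by count surfaces the provider/HCC combinations to prioritize in practice-specific training.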
Reducing the risk of RADV audit
Ensuring compliance requires a data-integrity mindset across the entire risk-adjustment lifecycle, from prospective to retrospective. With this approach, the focus should be on submitting fully accurate documentation, not just looking for gaps. To do this, we need to employ the same tools that are making risk adjustment faster and better to ensure data integrity.
As OIG and CMS continue to increase their focus on MAOs, these organizations will need to shift their processes to look both ways and get ahead of compliance audits before they happen—and partnering with the right vendor can be a key part of this process.
Episource arms clients with data, tools, and insights to navigate the chaos of the health care system. To learn more, join Erik Simonsen, chief operating officer, Episource, at the RISE 18th Risk Adjustment Forum, Nov. 15-17, live at Caesars Palace, Las Vegas for a presentation on the NLP-targeted second level review. Click here for the full agenda, roster of speakers, health and safety protocols, and how to register.
About the author
As chief operating officer of Episource, Erik Simonsen is responsible for the delivery of onshore and offshore coding services, record retrieval, IT infrastructure, and compliance. His knowledge in engineering, finance, and operations enables him to continuously improve performance and operational transparency for clients. He has over 10 years of experience managing outsourced centers, and substantive backgrounds in investment banking and technology. Erik has a BS in Biomedical Engineering from Johns Hopkins University and an MBA from NYU Stern.