When operations outgrow oversight: A lesson from call centers that risk adjustment leaders now face

Early in my career, I spent several years running large call center operations at Capital One. Like many financial services organizations in the early 2000s, we relied heavily on outsourced partners to scale quickly. On paper, the model worked. Costs were predictable, capacity was flexible, and vendors specialized in execution.

In practice, something became clear very quickly: outsourcing execution without operational visibility creates risk.

We tracked performance metrics relentlessly. We monitored service-level agreements, average handle time, and first-call resolution. Yet despite all this reporting, quality standards were often not met, and the metrics alone could not tell us why.

If first‑call resolution declined, for example, the dashboard did not reveal whether agents were missing required steps, providing inconsistent guidance, or rushing calls to meet time targets. To understand where quality was breaking down, leaders had to listen to calls, review workflows, and see the work itself. Without that visibility, intervention was delayed and, often, misdirected.

The solution was not to fire vendors. It was to change how we managed the operation.

Once we implemented call-center management platforms, systems that standardized workflows, enabled call listening and quality scoring, and gave leadership real-time visibility, vendors stopped being black boxes. They became manageable extensions of the organization. Control did not come from more oversight. It came from seeing how work was performed, not just how it scored after the fact.

Two decades later, risk adjustment is facing a very similar moment.

Risk adjustment has become a live operation

For many years, risk adjustment could be treated as a periodic effort. Coding surged, vendors mobilized, results were reviewed, and organizations reset until the next cycle.

That model no longer holds.

Expanded Risk Adjustment Data Validation (RADV) audit cadence, compressed timelines, and heightened scrutiny have fundamentally changed the nature of the work. Risk adjustment is no longer something organizations prepare for intermittently. It is something they operate under continuously.

Yet many programs are still being managed primarily through outputs: score movement, submissions, and audit results. Leaders are often asked to answer critical questions using lagging indicators. Where is our risk coming from right now? Which partners consistently meet quality standards? Where are documentation or evidence issues emerging? Which diagnoses would hold up under RADV audit scrutiny today? How exposed are we in this moment?

When those answers arrive weeks or months later, the opportunity to intervene has already passed. That is not a vendor problem. It is an operating‑model problem.

A familiar structural pattern

The parallel to call centers is hard to miss. In both environments, work is distributed across internal teams and external partners. Outcomes depend on consistent quality and defensibility. Leadership is accountable for results but removed from execution. Metrics signal problems without explaining them.

In call centers, leaders learned that tracking first-call resolution, handle time, or QA scores was necessary but insufficient. When quality standards were not met, the only way to understand why was to see and listen to the work itself. Without that insight, decisions were based on assumptions rather than evidence.

Risk adjustment today faces the same constraint.

RADV audits changed the most important variable: Time

Once an organization is pulled into an audit, the opportunity to correct coding has largely passed. At that point, the focus shifts from improvement to defense: explaining decisions that are already locked in.

That reality changes where the real work needs to happen. Accuracy must be established during normal submission cycles, not after the fact. Retrospective reviews, analytics, and deletion projects are no longer fallback activities; they are essential parts of maintaining a defensible position.

At the same time, leaders need the ability to evaluate partner performance against consistent standards. When vendors can be compared side by side, based on quality, defensibility, and outcomes, organizations are better positioned to make informed operational and financial decisions.

Why this requires a true operational system

In the call-center era, spreadsheets and vendor reports could not scale. Leaders needed systems that sat above the labor: systems that governed workflow, surfaced quality issues early, and allowed intervention while work was still in progress.

Risk adjustment has reached the same point. Health plans need operational infrastructure: a central command center that enables vendor oversight and early identification of coding quality issues, so that RADV audit readiness becomes continuous rather than episodic work.

From managing results to managing the work

The organizations that will succeed under sustained RADV audit pressure are not those with the most vendors or the highest activity levels. They are the ones that treat risk adjustment as a governed, day‑to‑day operation, manage partner performance while work is underway, and can explain not just their outcomes, but how those outcomes were produced.

This was the lesson from call centers years ago. RADV audits do not reward effort; they reward discipline, quality, and transparency.

The question leaders now face is the same one operations executives faced then:

Are we managing results, or are we finally managing the work that produces them?

If this article resonates with you, we’d welcome the opportunity to connect and explore further. Visit our website to see how Wolters Kluwer, Health Language is tackling these challenges, and reach out to speak with one of our experts today.