The stakes around risk adjustment accuracy in Medicare Advantage have never been higher. With CMS confirming approximately $17 billion in annual overpayments linked to unsupported diagnosis data, health plans that treat risk adjustment purely as a revenue function are increasingly finding themselves on the wrong side of federal scrutiny.
The shift underway is not just operational. It is cultural. Coding programs that once focused on maximising diagnosis capture are being rebuilt around a different priority: defensible accuracy.
The Regulatory Pressure Is Real and Accelerating
CMS announced in January 2026 that Payment Year 2020 RADV audits would begin as early as the following month, with sample sizes ranging from 35 to 200 enrollees per contract. Earlier completed audits covering Payment Years 2011 through 2013 found overpayment rates between five and eight percent, and CMS has confirmed that recoveries will begin soon.
The Office of Inspector General added further pressure with its February 2026 Medicare Advantage Industry Compliance Program Guidance, which identified specific risk adjustment practices that have drawn federal investigations. These include chart reviews conducted solely to add diagnoses without removing unsupported codes, and health risk assessments that generate diagnoses not reflected in actual patient care.
For health plans managing large member populations, the message is clear. Programs built around one-directional code addition are now regulatory red flags, not best practices.
What Retrospective Risk Adjustment Actually Involves
Understanding what the process is supposed to look like in practice helps explain why so many programs fall short of where they need to be.
Retrospective risk adjustment is the systematic review of medical records after patient encounters to identify, validate, and correct diagnosis codes that reflect a member’s true health status. Coding teams examine clinical documentation, lab results, specialist consultations, and medication histories to find chronic conditions that were missed or miscoded during initial claims processing.
Every validated HCC code must be supported by MEAT criteria, meaning documented evidence that the condition was Monitored, Evaluated, Assessed, or Treated during a qualified face-to-face encounter. This requirement is the compliance foundation that most audit disputes ultimately hinge on.
What distinguishes a defensible program from a problematic one is two-way coding. Reviewers must identify and flag unsupported codes for deletion with the same rigour they apply to identifying missed diagnoses. Programs that only add codes, regardless of how carefully those additions are documented, create the kind of statistical pattern that now attracts OIG attention.
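The two-way standard described above can be sketched in code. The data shapes below (a `CodeReview` record with a set of MEAT evidence flags and a face-to-face indicator) are illustrative assumptions, not any real system's schema; actual programs work from far richer structured chart-review output.

```python
from dataclasses import dataclass, field

# Hypothetical record shape for illustration only; real chart-review
# output is considerably richer than these three fields.
@dataclass
class CodeReview:
    hcc_code: str
    meat_evidence: set = field(default_factory=set)  # subset of {"monitored", "evaluated", "assessed", "treated"}
    face_to_face: bool = False

def review_outcome(review: CodeReview) -> str:
    """Two-way decision: a code is retained only when at least one MEAT
    element is documented during a qualified face-to-face encounter;
    otherwise it is flagged for deletion, never silently kept."""
    if review.face_to_face and review.meat_evidence:
        return "validated"
    return "flag_for_deletion"
```

The point of the sketch is the second branch: a one-directional program simply never emits `flag_for_deletion`, and that asymmetry is exactly the statistical pattern the OIG guidance calls out.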
RAAPID’s detailed resource on retrospective risk adjustment covers how health plans can build programs that meet this standard, including the specific documentation requirements and process controls that create audit-ready evidence trails.
How AI Is Changing the Chart Review Process
Manual chart review has well-documented limitations. Complex reviews can take between 30 and 45 minutes per chart. Accuracy tends to decline during extended sessions, and coding teams working through large volumes of unstructured clinical data face significant burnout risk. Improving accuracy through manual processes alone is not a sustainable path forward.
AI-powered tools, and specifically Neuro-Symbolic AI that combines deep learning with structured clinical reasoning, are addressing these limitations in ways that earlier technology could not.
The clinical context problem was a significant barrier for older automated tools. A system that cannot distinguish between a family history of diabetes and active diabetes with peripheral neuropathy will produce suggestions that create more work rather than less. Neuro-Symbolic AI approaches this differently, recognising patterns across prescriptions, lab results, and clinical notes that indicate active management of a condition rather than simply the presence of a code.
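A toy example makes the clinical context problem concrete. The rule-based filter below is a deliberately simplified sketch; real Neuro-Symbolic systems combine learned language models with clinical ontologies, and the phrase lists here are invented for illustration.

```python
import re

# Illustrative phrase patterns only; a production system would use
# clinical NLP and ontology-backed reasoning, not two regexes.
FAMILY_HISTORY = re.compile(r"\b(family history of|FH of|mother had|father had)\b", re.I)
ACTIVE_MARKERS = re.compile(r"\b(on metformin|A1c|insulin|with peripheral neuropathy)\b", re.I)

def suggests_active_condition(sentence: str, condition: str) -> bool:
    """Return True only when the condition is mentioned without a
    family-history qualifier and with evidence of active management."""
    if condition.lower() not in sentence.lower():
        return False
    if FAMILY_HISTORY.search(sentence):
        return False
    return bool(ACTIVE_MARKERS.search(sentence))
```

A system without even this crude distinction would surface "family history of diabetes" as a coding opportunity, generating exactly the kind of unsupported suggestion that creates more review work rather than less.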
Explainability is the other critical requirement in a compliance environment. Every suggested diagnosis needs to link to specific evidence in the clinical record, creating a transparent trail that can withstand audit scrutiny. Opacity in AI-generated coding suggestions is not acceptable when the stakes include federal overpayment recovery.
Organisations deploying well-designed AI tools for retrospective review report meaningful reductions in chart review time and improvements in first-pass accuracy, alongside the productivity gains that allow coding teams to work sustainably at scale. The compliance value, however, comes from building programs where accuracy is genuinely bidirectional rather than volume-focused.
The Relationship Between Retrospective and Prospective Approaches
Retrospective review does not operate in isolation. Health plans that achieve the strongest compliance outcomes integrate retrospective, prospective, and concurrent approaches into a coordinated risk adjustment strategy.
Prospective risk adjustment captures diagnoses during or before the patient visit, using pre-visit planning and clinical decision support to document relevant conditions at the point of care. CMS considers this the most defensible method of risk capture because the documentation is generated in direct connection with the clinical encounter.
Concurrent risk adjustment sits between the two, reviewing documentation during or shortly after an episode of care. This allows health plans to address gaps faster than traditional retrospective timelines permit.
Retrospective review then functions as the final quality layer, identifying what the earlier stages missed and, critically, removing codes that lack adequate support before they reach CMS. Each approach reduces the burden on the ones that follow, and together they produce the kind of comprehensive, accurate data that supports both financial performance and regulatory compliance.
Building Programs That Hold Up Under Scrutiny
Preparing for the current RADV audit environment means treating every chart review as potential audit evidence rather than a routine operational task.
The OIG’s 2026 compliance guidance offers a practical framework. Health plans should implement data filtering logic to identify anomalies in diagnosis data, benchmark HCC prevalence rates across years to identify unusual coding patterns, and conduct follow-up education where provider coding intensity suggests potential overcoding. Reporting of unsupported codes to CMS and appropriate handling of overpayments is also specifically addressed in the guidance.
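The cross-year benchmarking step can be sketched as a simple year-over-year comparison. The data shape and the 50 percent growth threshold below are illustrative assumptions; the OIG guidance does not prescribe specific cutoffs, and real anomaly detection would account for population changes and model-version shifts.

```python
def flag_prevalence_anomalies(
    prevalence_by_year: dict,          # hcc_code -> {year: prevalence rate}
    max_relative_jump: float = 0.5,    # flag >50% year-over-year growth (assumed threshold)
) -> list:
    """Flag (hcc_code, year) pairs where prevalence grew faster than the
    threshold relative to the prior year, marking them for follow-up review."""
    flags = []
    for hcc, rates in prevalence_by_year.items():
        years = sorted(rates)
        for prev, curr in zip(years, years[1:]):
            if rates[prev] > 0 and (rates[curr] - rates[prev]) / rates[prev] > max_relative_jump:
                flags.append((hcc, curr))
    return flags
```

Flagged pairs would feed the follow-up steps the guidance describes: targeted provider education where coding intensity looks anomalous, and reporting of any codes found to be unsupported.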
Vendor relationships require scrutiny as well. The OIG explicitly recommends that health plans review any software used in risk adjustment to ensure it is not designed primarily to increase risk scores without supporting clinical validity.
Our healthcare compliance resources provide additional context on how payers are navigating the broader regulatory environment alongside these risk adjustment requirements.