Coding

Raising Red Flags for Coder Quality in ICD-10

May 12, 2016 9:29 am

Data reveals particular concerns with coder knowledge ramp-up on medical/surgical, neoplasm, and nervous system cases.

Are coders rushing through cases to keep the discharged not final billed (DNFB) metric in check, only to discover that the majority of these cases result in denials or decreased payment? Payers are quickly coming up to speed with ICD-10 coding rules. Even if you haven’t seen an onslaught of denials yet, it may not take long for payers to catch up. Alternatively, payers may already be up to speed on ICD-10 but have found that denying cases over faulty coding is not in their financial interest.

Likewise, if it’s taking longer for coders to code each record, are you at least reaping the benefits of accurately coded claims? CFOs must be able to analyze coder performance to pinpoint specific revenue vulnerabilities and mitigate risk for the organization.

Based on early ICD-10 coder performance data, red flags are rising for inpatient, ambulatory, and emergency services coding accuracy and productivity. ICD-10’s inherent specificity demands more attention to detail, and it simply takes longer to select the correct, most specific diagnosis and procedure codes. Knowledge gaps are particularly evident for medical/surgical cases in ICD-10.

A recent analysis provides a high-level snapshot of coder knowledge data collected across 150 coders in January 2016. While no one expected coder knowledge ramp-up in ICD-10 to be quick or easy, the data reveals particular concerns with medical/surgical, neoplasm, and nervous system cases. By analyzing early coder performance data, healthcare organizations can target coding workforce education, mitigate claims denial risk, and brace for latent, unexpected audit events.

Coding Accuracy and Productivity by the Numbers

The most frequently cited study on coding accuracy was reported in 2013 by the HIMSS/WEDI (Healthcare Information and Management Systems Society/Workgroup for Electronic Data Interchange) ICD-10 National Pilot Program. Coders participating in this pilot saw a 50 percent decrease in productivity and an average accuracy rate of only 63 percent. So far in ICD-10, most organizations are reporting better results than those cited by HIMSS/WEDI in 2013. Even the latest WEDI news on May 9, 2016, reported minimal disruption from a productivity perspective, with only a “slight decrease in productivity for providers.” That’s the good news.

The bad news, however, is that coding accuracy still lags far behind ICD-9 gold standards. There are several reasons to consider and questions to ask.

  • Are coders rushing through cases to meet DNFB metrics?
  • Are payers foregoing denials due to parallel gaps in ICD-10 expertise or because it is in their best monetary interest?
  • Are there hidden areas of coder knowledge deficits that will cause latent denials, recovery audits, and lost revenue?

In ICD-9, coders maintained a 95 percent or greater accuracy rate. The same rate, however, may not be feasible in ICD-10 in the short term. During January 2016, approximately 150 coders using a web-based ICD-10 coder assessment tool produced the following performance results as compared with ICD-9.

  • Inpatient coding: 86 percent average accuracy, 54 percent average productivity
  • Ambulatory coding: 80 percent average accuracy, 64 percent average productivity
  • Emergency services coding: 84 percent average accuracy, 41 percent average productivity

This overall coder performance data is based on actual medical record assessments and a coder population that included experienced coders as well as coders solely trained in ICD-10. This type of blended coding team is the most common workforce scenario in healthcare provider settings today.
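
As a rough illustration of how figures like these can be produced, the sketch below rolls hypothetical coder assessment records up into average accuracy and productivity by care setting. The record layout, field names, and sample values are assumptions made for illustration; they are not the export format of any particular assessment tool.

```python
# Minimal sketch: summarize coder assessment results by care setting.
# All field names and sample values below are hypothetical.
from collections import defaultdict

# Each record: (setting, codes_correct, codes_assigned,
#               minutes_per_case_icd10, baseline_minutes_per_case_icd9)
assessments = [
    ("inpatient",  43, 50, 55.0, 30.0),
    ("ambulatory", 24, 30, 12.0,  8.0),
    ("emergency",  21, 25, 17.0,  7.0),
    # ... one row per assessed case
]

totals = defaultdict(lambda: {"correct": 0, "assigned": 0, "icd10_min": 0.0, "icd9_min": 0.0})
for setting, correct, assigned, icd10_min, icd9_min in assessments:
    t = totals[setting]
    t["correct"] += correct
    t["assigned"] += assigned
    t["icd10_min"] += icd10_min
    t["icd9_min"] += icd9_min

for setting, t in totals.items():
    accuracy = t["correct"] / t["assigned"]        # share of assigned codes judged correct
    productivity = t["icd9_min"] / t["icd10_min"]  # ICD-10 throughput relative to the ICD-9 baseline
    print(f"{setting}: {accuracy:.0%} accuracy, {productivity:.0%} of ICD-9 productivity")
```

Expressed this way, a productivity figure of 54 percent simply means that each case takes roughly twice as long to code as it did under ICD-9.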

When drilling down into the data, it becomes clear that some hospitals’ coding teams are more advanced in their knowledge of ICD-10 than others. Testing of 37 coders across 4,000 cases at one health system revealed the following accuracy results.

  • Inpatient diagnosis coding: 67 percent average accuracy
  • Ambulatory diagnosis coding: 60 percent average accuracy
  • Emergency department diagnosis coding: 53 percent average accuracy

In addition, these coders tended to assign between two and eight extra ICD-10 codes per case that did not belong on the claim.

How is this data helpful to financial directors and revenue cycle managers within an organization?

Translating Data into Service-Line Predictions

Productivity data is a good predictor of cash flow, and accuracy data is a good predictor of overall financial performance. By drilling down even further into the data (i.e., into specific categories as well as procedure/diagnosis codes), organizations can make financial predictions and take steps to mitigate risk through education and documentation improvement strategies.

From a diagnosis coding accuracy standpoint, the web-based ICD-10 coder assessment tool identified these five areas of service line concern.

  • Neoplasms
  • Parasitic diseases
  • Conditions in the perinatal period
  • Injuries and poisonings
  • External causes

From a procedure coding accuracy standpoint, the assessment tool identified these five areas of concern:

  • Peripheral nervous system
  • Lymphatic systems
  • Central nervous system
  • Gastrointestinal system
  • General anatomical regions

Using this data as a benchmark, organizations should assess coder knowledge through existing learning systems or coding accuracy audits. We expect that most organizations will recognize that the same, or similar, diagnoses and procedures are at risk for coding denials and compromised revenue in 2016. Specific service line improvements are the next practical step.
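
A minimal sketch of that kind of benchmark comparison, assuming a simple audit export of (category, correct/incorrect) pairs, might look like the following. The category labels and record shape are illustrative assumptions; the 95 percent threshold echoes the ICD-9 accuracy standard cited earlier.

```python
# Minimal sketch: flag diagnosis/procedure categories whose audited accuracy
# falls below the ICD-9-era benchmark. The audit record shape is hypothetical.
from collections import Counter

# Each audited code: (category, was_correct)
audited_codes = [
    ("Neoplasms",               False),
    ("Neoplasms",               True),
    ("Injuries and poisonings", False),
    ("External causes",         True),
    # ... one row per code reviewed in the accuracy audit
]

correct = Counter()
total = Counter()
for category, was_correct in audited_codes:
    total[category] += 1
    correct[category] += int(was_correct)

BENCHMARK = 0.95  # historical ICD-9 accuracy rate cited above

# Lowest-accuracy categories first: candidates for targeted education and CDI focus
at_risk = sorted(
    (correct[c] / total[c], c) for c in total if correct[c] / total[c] < BENCHMARK
)
for accuracy, category in at_risk:
    print(f"{category}: {accuracy:.0%} audited accuracy")
```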

Five Questions to Ask Your Data

Revenue cycle SWAT teams that include clinical documentation improvement (CDI) specialists, coders, and physician advisors are suggested for each service line at risk. The purpose of each team is to ensure coding integrity that ultimately yields accurate reimbursement. Once the team is assembled and a coder knowledge assessment has been conducted, it should have enough data on hand to answer the following five questions by specific code category and/or service line.

  • Is coding up to par for high-risk service lines?
  • Within each category/service line, where are the specific coder knowledge gaps by diagnosis, procedure, coder, and case?
  • Is clinical documentation complete and accurate for these problem areas?
  • Are payers paying these high-risk service lines correctly? What is payer feedback so far?
  • How much revenue is potentially lost due to incorrect coding? How will the DRG shift? (See the sketch following this list.)
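
To make the last question concrete, a back-of-the-envelope calculation like the one sketched below can estimate revenue at risk from DRG shifts identified during an audit. The case data, DRG weights, and base rate are hypothetical values used only to show the arithmetic.

```python
# Minimal sketch: estimate revenue exposure from audit-identified DRG shifts.
# All identifiers, weights, and the base rate are hypothetical.
BASE_RATE = 6000.0  # assumed blended payment per unit of DRG relative weight

# Each audited case: (case_id, billed_drg_weight, audit_supported_drg_weight)
audited_cases = [
    ("case-001", 1.45, 1.10),  # audit supports a lower-weighted DRG
    ("case-002", 0.92, 0.92),  # no change
    ("case-003", 2.10, 1.75),
]

revenue_at_risk = 0.0
for case_id, billed_w, audited_w in audited_cases:
    shift = (billed_w - audited_w) * BASE_RATE
    if shift > 0:  # only downward DRG shifts create denial/recoupment exposure
        revenue_at_risk += shift
        print(f"{case_id}: weight {billed_w:.2f} -> {audited_w:.2f}, exposure ${shift:,.2f}")

print(f"Total estimated revenue at risk: ${revenue_at_risk:,.2f}")
```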

Without close coding oversight and remediation, your organization could see a dip in revenue for certain service lines. That’s why it’s important to identify coder strengths and weaknesses—drilling down into the data and cases as much as possible.

Detailed coder performance data provides extraordinary insight into current and future financial performance in ICD-10. By analyzing coder performance data, organizations can pinpoint revenue vulnerabilities and mitigate reimbursement risk.


Manny Peña, RHIA, is chairman and CEO of H.I.M. ON CALL and AVIANCE Suite, and is a member of HFMA’s Metropolitan New York Chapter.
