This practice brief is made available for historical purposes only.
Collecting Root Cause to Improve Coding Quality Measurement
Each healthcare organization has its own coding quality review and reporting program. Most programs aim to improve the quality of coded data by improving the accuracy of codes assigned by the coding staff. However, without a reference standard for accuracy or standardized coding performance measurement, it is impossible for organizations to compare their coding measurements against industry benchmarks and determine if the quality of administrative coded data is indeed improving.
Organizations must begin measuring and collecting coding quality review data consistently throughout the profession. Measuring coding quality performance by tracking, trending, and reporting by the root cause of coding errors is an important first step in standardizing coding quality measurement.
This practice brief explores methods for reporting coding quality and shares highlights from a survey on coding quality measurement. It provides a model for reporting coding errors by root cause.
Methods for Reporting Coding Accuracy
It is important for organizations to report individual coder accuracy uniformly. However, it is also important that organizations understand and correct the root causes for coding errors, more appropriately characterized as coding variances.1
An important component of coding quality reviews is evaluating the quality of coding at the code level, reviewing for consistency among a team of coders. Coder variation, also known as inter-rater reliability, is the extent to which the same sets of codes are applied to the same source documentation by independent coders (i.e., “raters”).
For example, if four coders assign codes to the same documentation, ideally all four coders would produce the same set of codes. This would reflect 100-percent inter-rater agreement; anything less than 100 percent reflects coder variation.2 One obvious root cause for coder variation is the quality of documentation, including the degree of specificity, consistency, completeness, and timeliness of documentation.
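Inter-rater agreement of this kind can be computed mechanically. The sketch below is illustrative only — the brief prescribes no formula — and assumes a simple set-overlap measure averaged over coder pairs; the ICD-9-CM codes shown are hypothetical examples.

```python
from itertools import combinations

def set_agreement(a, b):
    """Overlap between two coders' code sets: shared codes
    divided by all codes either coder assigned (Jaccard index)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def inter_rater_agreement(code_sets):
    """Average pairwise agreement across all coders for one record."""
    pairs = list(combinations(code_sets, 2))
    return sum(set_agreement(a, b) for a, b in pairs) / len(pairs)

# Four coders code the same documentation (illustrative codes).
coders = [
    ["038.9", "995.91", "401.9"],  # coder 1
    ["038.9", "995.91", "401.9"],  # coder 2
    ["038.9", "995.91", "401.9"],  # coder 3
    ["038.9", "401.9"],            # coder 4 omitted one code
]
print(f"{inter_rater_agreement(coders):.0%}")  # less than 100% = variation
```

If all four coders had produced identical code sets, the function would return 1.0, i.e., 100-percent inter-rater agreement.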
Facility-specific policies and procedures such as medical record completion and query policies, though often necessary for consistency within the facility, ultimately contribute to coding variance nationally, as do payer-specific coding policies.
A common method for reporting coding accuracy is to report it as a percent accuracy; for example, MS-DRG accuracy, principal diagnosis accuracy, or overall coding accuracy. Many facilities set performance goals and report coding quality performance results based on accuracy targets (e.g., 95 percent or greater, 90–94 percent, 85–89 percent, etc.). If the results of the review miss the target, then action steps are developed, follow-up reviews are performed, and incremental improvements in percent accuracy are monitored.
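This reporting pattern can be sketched as follows. The band cut-offs mirror the example targets in the text; the function names, the action wording, and the record counts are illustrative assumptions, not a prescribed method.

```python
def percent_accuracy(records_correct, records_reviewed):
    """Percent accuracy as commonly reported (e.g., MS-DRG accuracy)."""
    return 100.0 * records_correct / records_reviewed

def performance_band(accuracy):
    """Map an accuracy result to example target bands (illustrative)."""
    if accuracy >= 95:
        return "95% or greater: target met"
    if accuracy >= 90:
        return "90-94%: develop action steps, perform follow-up review"
    if accuracy >= 85:
        return "85-89%: develop action steps, perform follow-up review"
    return "below 85%: focused review and education"

# Example: 47 of 50 reviewed records received the correct MS-DRG.
rate = percent_accuracy(47, 50)
print(f"{rate:.1f}% -> {performance_band(rate)}")  # 94.0% misses a 95% target
```

As the brief notes, a number like 94 percent says nothing on its own about what went wrong in the remaining records, which is the limitation the root-cause approach addresses.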
Kim Streit, vice president of healthcare research and information for the Florida Hospital Association, states:
It is time to eliminate the vagueness about what is counted as coding accuracy. Reporting an accuracy rate of 97 percent is not meaningful if there is no clear understanding of what was wrong with the 3 percent of cases that failed to achieve the “accuracy” stamp of approval. The root causes contributing to coding variance must be collected in order to improve the quality of coded data. Without that information, there is no clear understanding of what comprises the error rate.
At face value, it may appear that a variance is due to mistakes made in applying coding convention and guidelines; however, the variance may instead be due to the fact that the coder did not have the discharge summary at the time of coding. This would not be a “coder error” but an unintended consequence of a process that some hospitals have instituted in an effort to meet unbilled targets.
Some organizations adjust the coder accuracy rate, removing these cases from the final accuracy rate reported, while others do not. This illustrates why accuracy rates cannot be compared across organizations. The coding accuracy rate, commonly reported today, is incomplete at best and often misleading.
Root Cause: A More Reliable Measure
Coder accuracy, although important for internal coder performance tracking, is not the ideal measure for reporting the overall quality of coded data. Tracking and trending the underlying root causes for code variation is a more reliable method for measuring the consistency of coded data that results from an entire healthcare system. Once the underlying causes for code variations are understood, then system-wide improvements can be made to the process, technology, education, or training.
This measurement method is more consistent with the Hospital Payment Monitoring Program (HPMP), which was created by the Department of Health and Human Services and the Office of Inspector General (OIG) to monitor the accuracy of payments made in the Medicare fee-for-service program.
In a recent report reflecting Medicare discharges in the 12-month period ending December 31, 2006, the national error rate, reported as 3.9 percent, is broken down into five root causes for errors: no documentation, insufficient documentation, medically unnecessary, incorrect coding, and other.4 The paid claims error rate data are further refined and reported by incorrect coding; for example, a 5 percent error rate due to DRG 551, Permanent cardiac pacemaker implant with major cardiovascular diagnosis or AICD lead or generator, and 2.6 percent due to DRG 416, Septicemia (FY 2006). The paid claims error rate for no documentation is 9.9 percent on DRG 217, Wound debridement and skin graft except hand for musculoskeletal and connective tissue disease (FY 2006). This is an effective reporting method for HPMP’s focus on reducing paid claims errors.
The performance target, the national improper payment error rate, is adjusted from year to year. In 2007 it was 4.3 percent; the following year’s target is 3.8 percent. The national paid claims error rate has dropped significantly, from 14.2 percent reported in 1996 to 3.9 percent in 2007.5
OIG has been tracking and trending underlying root causes for error rates for many years. A 1988 report breaks down the 20.8 percent DRG total error rate by root cause.6 It found that 39 percent of the total error rate was related to principal diagnosis assignment not supported by the physician documentation in the medical record; 27 percent was related to incorrect sequencing of the principal diagnosis; 12 percent was related to incorrect code selection based on coding conventions and rules; and 13 percent was attributed to vague or nonspecific diagnostic language in physician documentation, patient status issues, data entry errors, and coding from an incomplete record.
As these examples illustrate, understanding the cause for coding errors provides insight that can lead to improvements in the processes that support code assignment. Research conducted by Pavani Rangachari on coding accuracy supports this as well:
The report shows that more detailed analysis of this gap between coding and documentation is required at the individual hospital level to reduce inconsistency and variation in coding and their impact on quality reporting.
Rangachari cites studies that have demonstrated that nearly 20 percent of payments based on codes on hospital bills are incorrect and that there is considerable variation in coding accuracy by geographic location and bed size. Without having detailed information about the root causes for coding errors, it is difficult to respond to such statistics and make improvements.
Survey Results on Underlying Causes
Healthcare organizations must disseminate and adopt a standardized terminology for measuring coding quality performance, standardized definitions for how to count coding variance, and a standardized method for classifying and reporting coding variance. HIM and coding professionals must be educators and advocates concerning the use of standards.
In an effort to begin evaluating the current state of coding performance measurement, the AHIMA Work Group on Benchmark Standards for Clinical Coding Performance Measurement convened in 2007. A subgroup was charged with addressing coding quality measurement.
One aspect of the subgroup’s work was to search for a standardized metric for measuring the quality of coded data or a regulatory requirement for consistency in reporting data quality, which it did not identify. The subgroup also conducted a survey on coding quality measurement to collect the contributing causes to coding error and variation. “Survey on Coding Quality Measurement: Hospital Inpatient Acute Care” further aimed to collect data on coding quality review outcomes to understand how coding review data are used to improve the coding quality, reduce coder variation, and develop best practices for national coding consistency.8 A total of 668 surveys were e-mailed to AHIMA members, with 68 responses returned.
Survey responses indicated that coding errors are reported by error type, from the most common to the least common.
Coder and provider (physician documentation) comprise the two main reasons for coding error. The survey results identify complication/comorbidity code assignment as the leading reason for coder error, followed by principal diagnosis code assignment and secondary diagnosis code assignment.
The top three reasons for coder error related to query policies are lack of a clear understanding of clinical indicators for the condition being queried, writing unnecessary queries, and lack of follow-up on inappropriate queries initiated by clinical documentation specialists.
The number-one reason for missing coding quality performance targets (accuracy) is the challenge of meeting productivity standards while balancing quality expectations.
Survey results related to coding errors due to the provider (physician documentation) find the leading reason for coding errors is vague documentation that leads to nonspecific code assignment or the need to query. This is followed by a lack of documentation to support a cause-and-effect relationship between two conditions, the attending physician not concluding with a definitive diagnosis (after study) as the reason for admission, and conflicting or inconsistent documentation.
The top two reasons for coding errors related to physician response to queries are a delayed response followed by no response to queries.
Systems, policies, and procedures are another cause for coding errors. The survey asked members to list reasons for insurance denials due to coding and noncoding reasons; for example, codes not crossing to the UB-04, codes assigned by chargemaster incorrectly, payers who do not follow official coding guidelines or payer contract, or insurance companies that do not have updated codes or use old groupers. These types of coding errors should not be attributed to the coder or physician.
The survey also collected data on contributing root causes for coding errors. These are factors that indirectly impact the quality of coded data. Contributing root causes as reported in the responses include coding leadership, HIM leadership, and administrative support.
The primary reason coding leadership contributes to coding error is that the coding lead is consumed with day-to-day operations, followed by prioritizing productivity over quality. HIM leadership contributions to coding errors relate to ineffective workflow processes to support quality coding and a lack of electronic or computerized tools to support coders.
Other respondents reported that administration does not hold the medical staff accountable to medical record completion requirements, including query requirements. Administrative staff are also focused on the financial aspects of coded data, which can itself be a contributing root cause.
Based on the survey results, the work group created a framework for reporting root causes for coding variation that may be universally applied across all organizations in collecting and reporting quality measurement data. The root causes for each error type are outlined in the table above.
Coding variance may be classified and reported in one of six categories in an effort to eliminate or reduce coding variation. The total coding variance may be reported as one number; for example, 10 percent. However, the real value is in the reporting by root cause to explain what comprises the 10 percent coding variance—3 percent due to coder (misapplication of a coding guideline), 4 percent due to physician documentation (conflicting documentation led to inaccurate code assignment), 2 percent due to administrative (coding without a discharge summary), and 1 percent due to a system issue (code did not transfer from abstracting to billing), for example. For reporting purposes, a pie chart may be an excellent illustration of the data.
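The breakdown above can be sketched as a simple tally. The review counts below are hypothetical, chosen only to reproduce the 10 percent example; the category labels follow the framework described in the text.

```python
from collections import Counter

records_reviewed = 100

# One root-cause label per variance found on review (hypothetical data).
variances = (
    ["coder"] * 3                      # misapplication of a coding guideline
    + ["physician documentation"] * 4  # conflicting documentation
    + ["administrative"] * 2           # coding without a discharge summary
    + ["system"] * 1                   # code did not transfer to billing
)

total = len(variances) / records_reviewed
print(f"Total coding variance: {total:.0%}")
for cause, count in Counter(variances).most_common():
    print(f"  {cause}: {count / records_reviewed:.0%}")
```

The per-category rates (4, 3, 2, and 1 percent) sum to the 10 percent total, and the same counts could feed the pie chart suggested for reporting.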
The top three departments in support of coding quality efforts, according to the survey, are medical staff, HIM leadership, and the quality department. “Survey on Coding Quality Measurement: Hospital Inpatient Acute Care” is available in the FORE Library: HIM Body of Knowledge at www.ahima.org.
The Goal: Consistent, Reliable Coded Data
There are many independent variables that affect the selection of administrative code sets linked to a health service encounter. Coding professionals rarely work with perfect documentation correctly aligned with the language used in the classification systems. Code set conventions and official coding guidelines must be applied to achieve the highest level of reliability and consistency for coded clinical data.
The translation of a case into a correctly coded data set is a matter of professional and ethical pride to those who perform this service. As healthcare moves into an era of increased scrutiny of coded data, accompanied by an expectation that administrative coded data reflect quality of care, it is in the profession’s best interest to adopt standardized coding processes and use best practices for quality measurement.
It is important to classify coding variance by root cause and measure improvement over time. HIM professionals may need to stretch beyond reporting individual coder accuracy rates to reporting coding variance reflective of the entire system. They may also begin to focus on reducing coder variation (improving inter-rater reliability) among a team of coders by understanding the reasons for code variation.
Reporting root cause consistently is a first step in moving to performance metrics that reflect the quality and consistency of administrative coded data. Moving from measuring individual coder accuracy to measuring organizational coding consistency of diagnostic and procedural coded data using standardized performance measures is a critical next step for HIM professionals.
Resources
Hanna, Joette. “Constructing a Coding Compliance Plan.” Journal of AHIMA 73, no. 7 (July–Aug. 2002): 48–56.
Olson, Jack E. Data Quality: The Accuracy Dimension. San Francisco, CA: Morgan Kaufmann Publishers, 2003.
O’Malley, Kimberly J., et al. “Measuring Diagnoses: ICD Code Accuracy.” Health Services Research 40, no. 5, part 2 (Oct. 2005): 1620–39.
Schedel, Ellen, and Beth Parker. “Coding Compliance and Operations: Can the Marriage Survive?” AHIMA’s 78th National Convention and Exhibit Proceedings, October 2006.
e-HIM Work Group on Benchmark Standards for Clinical Coding Performance Measurement quality subgroup:
Cheryl D’Amato, RHIT, CCS
Sue Malone, MA, MBA, RHIA, FAHIMA
This work is sponsored in part by a grant to FORE from 3M Health Information Systems.