Background:

Consistent and accurate documentation, with proper coding of physician services, is the fiscal foundation of a successful medical practice. However, there may be significant variability in how the Centers for Medicare and Medicaid Services (CMS) Evaluation and Management (E/M) guidelines are interpreted and applied in clinical practice. In 2000, Zuber et al. reviewed 1069 patient charts from physician offices and found significant interobserver differences. In a recent unpublished study, 1 of the investigators (T.E.B.) used mock inpatient documents to assess resident understanding of the CMS E/M guidelines and found variability in the responses of certified coders who were asked to assign a “correct” code to the documents in question. Based on these observations, we believe that the E/M guidelines may be subject to significant variability in interpretation, which in turn could lead to variability in reimbursement for similar work and documentation.

Methods:

We delivered a demographic survey, 3 mock inpatient admission documents, and 3 mock inpatient subsequent encounter documents to approximately 250 physicians and coding specialists. Subjects were asked to review the 6 documents and determine which CMS E/M code most accurately described the amount, complexity, and appropriateness of the work documented.

Results:

Forty‐one completed surveys (15 physicians and 26 coding specialists) were returned (16% response rate). Coding assignments by participants are shown in Figure 1. Concordance of code assignment was evaluated using Randolph's free‐marginal multirater kappa, for which a value of 0 indicates agreement no better than chance and a value of 0.6 or higher is considered substantial agreement (Landis and Koch, 1977). Overall kappa among all respondents was 0.016 for admission documents and 0.18 for subsequent encounter documents. Concordance among all respondents, coding specialists alone, and physicians alone for each document is shown in Table 1. For most documents, there was little agreement in code assignments.
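For reference, a minimal formulation of Randolph's free‐marginal multirater kappa is given below; the notation is ours and assumes each document is rated by the same number of raters, with N the number of documents, n the number of raters per document, k the number of available codes, and n_ij the number of raters assigning document i to code j:

\[
\kappa_{\text{free}} = \frac{P_o - P_e}{1 - P_e}, \qquad
P_o = \frac{\sum_{i=1}^{N}\sum_{j=1}^{k} n_{ij}^{2} - Nn}{N\,n\,(n-1)}, \qquad
P_e = \frac{1}{k}
\]

Because the expected agreement P_e depends only on the number of available codes rather than on observed marginal frequencies, a kappa near 0, such as the 0.016 obtained for admission documents, indicates agreement at essentially the chance rate; for example, if raters chose among k = 3 candidate codes, P_e = 1/3, and a kappa of 0.016 would correspond to observed agreement of roughly 34%.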

FIGURE 1. Overview of Code Assignments by Document

TABLE 1 Kappa Coefficients for Concordance Among Raters (1 = perfect agreement; ≥0.6 = substantial agreement)

Conclusions:

In this small study, there was significant variability in code assignments by participants. The lack of concordance among both physicians and coding specialists suggests that the inconsistency may not be correctable by education alone but may instead reflect a more fundamental problem with the code assignment system. If our results can be extrapolated to real‐world practice, some of the hundreds of billions of dollars spent on physician services nationally may be dispensed inconsistently.

Disclosures:

T. Bell ‐ none; J. Aldinger ‐ none; H. Richey ‐ none