Hospital performance measurement efforts often use risk-adjusted mortality data. Most existing risk models use administrative data and are disease specific. Administrative-data models increase the chance of measurement bias, whereas disease-specific models limit applicability to hospitals with small sample sizes. To address both shortcomings, we sought to develop a hybrid administrative/clinical risk model applicable broadly across hospitals' medical/surgical populations.
We performed a retrospective cohort study among 131 Veterans Administration (VA) hospitals. Patients were included if admitted to an acute care VA hospital from 2006 to 2008. We excluded readmissions occurring between the index admission and a 30-day mortality event so as not to "double count" outcomes. The final cohort was split into development (60%) and validation (40%) cohorts. The primary outcome was mortality within 30 days of hospital admission. Independent predictors, obtained exclusively from VA electronic databases, included age, acute medical/operative diagnosis, comorbidities (modified Elixhauser Index), admission source (e.g., emergency room), and laboratory values from around the first 24 h of admission. We developed a case-mix-adjusted logistic regression model relating 30-day mortality to these predictors, accounting for non-linear relationships with physiologic predictors using cubic splines. Model performance was evaluated with discrimination (C statistic), calibration (Hosmer-Lemeshow goodness-of-fit statistic), and the Brier score. Hospital standardized mortality ratios (SMRs) and their 99% CIs were plotted to assess the ability to differentiate performance across hospitals.
Among the 131 VA hospitals, half were medium or small and 86% were teaching hospitals. Fifty-nine hospitals had no ICU or a small ICU with limited services, and more than 70% utilized hospitalists. The 1,114,327 patients were overwhelmingly male, older (64.2% were older than 60 years of age), and non-operative (83.2%). The most common specific non-operative and operative diagnoses were angina and orthopedic procedures, respectively. Mortality was 2.8% at hospital discharge and 5.6% at 30 days. In the validation cohort, the 30-day mortality model with fixed coefficients had a C statistic of 0.873, a Brier score of 0.043, and excellent calibration across risk and major diagnostic groups. When applied across the group of hospitals, 20/128 (16%) had SMR 99% confidence intervals entirely below 1 (better than expected).
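The hospital-level comparison above rests on the SMR, the ratio of observed to model-expected deaths. The abstract does not state which interval method was used; the sketch below uses the standard exact Poisson (chi-square) confidence limits for the observed count as one common choice, and the hospital counts shown are hypothetical.

```python
# Sketch of the SMR calculation: SMR = observed / expected deaths, with an
# exact Poisson 99% CI for the observed count (chi-square quantile method).
from scipy.stats import chi2

def smr_with_ci(observed, expected, conf=0.99):
    """Standardized mortality ratio with an exact Poisson CI for observed."""
    alpha = 1 - conf
    lo = chi2.ppf(alpha / 2, 2 * observed) / 2 / expected if observed > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2 / expected
    return observed / expected, lo, hi

# Hypothetical hospital: 40 observed deaths vs. 60 expected from the model
smr, lo, hi = smr_with_ci(40, 60)
print(f"SMR = {smr:.2f}, 99% CI ({lo:.2f}, {hi:.2f})")
```

A hospital whose entire 99% CI lies below 1, as in this example, is performing better than the case-mix-adjusted model predicts; this is the criterion met by 20 of the 128 hospitals in the abstract.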
A general model with predictors from administrative and laboratory data yields highly accurate 30-day mortality prediction. Advantages of a general model include avoiding the need for multiyear cohorts and providing a more complete portrayal of hospital performance.