Background: The diagnostic process is fraught with uncertainty and potential failures, which increase the risk of diagnostic error (DE) during hospitalization. One way to improve awareness of DE is to use electronic health record (EHR) data to calculate and display the risk of DE for individual patients. As part of our AHRQ-funded Patient Safety Learning Laboratory, we developed a prototype of a real-time predictive algorithm (a DE risk score) that uses selected EHR data corresponding to pre-specified clinical factors to flag patients at elevated risk for DE. We simultaneously implemented a structured Diagnostic Time-Out (DTO), introduced during educational sessions, to encourage clinicians to acknowledge diagnostic uncertainty or DE risk in hospitalized patients.

Purpose: We report our experience testing a prototype of a DE predictive algorithm with clinicians. During implementation, we sent patient-specific polls to care team members asking whether they would take a DTO on patients at the varying DE risk states predicted by the algorithm.

Description: A preliminary list of DE risk factors (Figure 1, right) was established based on literature review and expert opinion. We modeled our algorithm, embedded in our EHR, using the following expression: DE Risk Score = a1x1 + a2x2 + a3x3 + … + anxn, where each xi is a dichotomous predictor corresponding to a specific EHR data element, and each coefficient ai is a configurable parameter that weights the contribution of that individual risk factor to the overall DE risk score. Calculated scores less than a configurable value X indicated low risk (green flag); scores between X and a second configurable value Y indicated moderate risk (yellow flag); and scores greater than Y indicated high risk (red flag). A “Yes”/“No” email poll (“Would you take a Diagnostic Time-Out [on this patient]?”) that communicated the calculated risk state (Figure 1) was administered weekly over 33 weeks to clinicians caring for selected patients. Ninety-nine patients were randomly chosen from three general medicine teams (one patient per team per week). A total of 393 polls were sent to the attending physician, resident(s), and responding clinician for the 99 unique patients (3.96 per patient), and 51 (13%) responses were collected. We conducted our analysis on the 43 patients (43%) with at least one response, defaulting to the “Yes” response in instances of conflicting responses from different care team members. The distribution of “Yes” responses (Figure 2) was 7%, 25%, and 54% for low-, moderate-, and high-risk cases, respectively. In a bivariate analysis comparing individual risk factors with poll responses, high-risk diagnoses (e.g., altered mental status) were significantly associated with “Yes” responses (p < 0.01). The calculated sensitivity and specificity of moderate- or high-risk flags with respect to eliciting a “Yes” response were 92% and 42%, respectively.
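To make the scoring and flag logic concrete, the following is a minimal sketch in Python. The risk-factor names, weights (ai), and thresholds (X, Y) shown here are hypothetical placeholders, since the actual factors and values are configurable parameters in the EHR; the handling of scores exactly equal to X or Y is also our assumption, as the boundary behavior is not specified above.

```python
# Illustrative sketch of the DE risk score and flag logic described above.
# The factor names, weights (a_i), and thresholds (X, Y) are hypothetical
# placeholders; in the actual system they are configurable EHR parameters.
from typing import Dict

WEIGHTS: Dict[str, float] = {
    "high_risk_diagnosis": 3.0,    # e.g., altered mental status
    "abnormal_vital_signs": 2.0,   # hypothetical factor
    "recent_icu_transfer": 2.5,    # hypothetical factor
}

THRESHOLD_X = 2.0  # scores below X -> low risk (green)
THRESHOLD_Y = 4.0  # scores above Y -> high risk (red)

def de_risk_score(predictors: Dict[str, bool]) -> float:
    """Weighted sum a1*x1 + a2*x2 + ... + an*xn over dichotomous predictors."""
    return sum(w for factor, w in WEIGHTS.items() if predictors.get(factor, False))

def risk_flag(score: float) -> str:
    """Map a score to the green/yellow/red flag states; boundary handling
    at exactly X or Y is our assumption (not specified in the abstract)."""
    if score < THRESHOLD_X:
        return "green"   # low risk
    if score <= THRESHOLD_Y:
        return "yellow"  # moderate risk
    return "red"         # high risk

# Example: a patient with two of the hypothetical risk factors present.
patient = {"high_risk_diagnosis": True, "abnormal_vital_signs": True}
score = de_risk_score(patient)
print(score, risk_flag(score))  # 5.0 red
```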

Conclusions: Our prototype of the DE predictive algorithm had high sensitivity but low specificity for identifying patients on whom clinicians would take a DTO. Our next steps are to review false-positive and false-negative cases of DE confirmed by our rigorous DE chart review process. We will use our regression model outputs to tune the algorithm by adjusting the risk-factor weights.
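As a rough sketch of what such regression-based tuning might look like, the snippet below fits a logistic regression of poll responses on the dichotomous risk factors and reads off the fitted coefficients as candidate weights. The scikit-learn usage and the toy data are our assumptions for illustration, not the study's actual analysis pipeline.

```python
# Hypothetical sketch of weight tuning via logistic regression, assuming
# scikit-learn is available. X is an (n_patients x n_factors) 0/1 matrix of
# the dichotomous predictors; y is 1 for a "Yes" poll response, 0 for "No".
# The data below are illustrative, not study data.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([
    [1, 0, 1],  # each row: one patient's risk-factor indicators
    [0, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
])
y = np.array([1, 0, 1, 1])  # 1 = "Yes", 0 = "No"

model = LogisticRegression().fit(X, y)

# Fitted coefficients could serve as candidate weights a_i for the risk score.
factor_names = ["factor_1", "factor_2", "factor_3"]  # hypothetical names
candidate_weights = dict(zip(factor_names, model.coef_[0]))
print(candidate_weights)
```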

Figure 1. Diagnostic Time-Out Poll Email & Diagnostic Error Risk Algorithm

Figure 2. Distribution and Analysis of “Yes” and “No” Responses (n=43) per Flag Risk Status