Background:

Teaching during bedside rounds is necessary to educate students and housestaff, but attendings rarely receive structured feedback on their teaching and rounding practices. We aimed to evaluate a Peer Observation Program and a structured bedside rounds observation tool. We hypothesized that participants would value observing their colleagues, prepare their teams more effectively for rounds, have increased confidence in bedside teaching, improve their performance of key bedside teaching skills, and find both the observation tool and the program worthwhile.

Methods:

We developed the Peer Observation of Ward Rounding Tool using literature-based best practices organized around four salient themes: set the stage, role-model patient-centered care, practice learner-centered teaching, and wrap up / give feedback. We developed a workshop to train participants to use the tool and to give constructive and reinforcing feedback. In the 2014-2015 academic year, hospitalists on teaching services were invited to observe each other; pairings were based on availability. Immediately after an observation, the observer was surveyed on the utility of observing rounds. All participants completed pre- and post-program surveys to assess satisfaction with the process. We estimated means and standard deviations to describe responses to Likert-scale questions (1 = strongly disagree, 5 = strongly agree).
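For illustration, a minimal sketch of how such pre/post Likert comparisons might be computed; the response data below are hypothetical, and the paired t-test is an assumption, since the abstract reports p-values but does not specify the test used.

```python
# Illustrative sketch only: hypothetical 1-5 Likert responses for one survey
# item, pre- and post-program; the paired t-test is an assumption.
import numpy as np
from scipy import stats

pre = np.array([3, 4, 3, 2, 4, 3, 3, 4, 3, 3])   # hypothetical pre-program responses
post = np.array([4, 4, 4, 3, 5, 4, 3, 4, 4, 4])  # hypothetical post-program responses

# Descriptive statistics, as in the Methods (mean and sample standard deviation)
print(f"pre:  {pre.mean():.1f} ± {pre.std(ddof=1):.1f}")
print(f"post: {post.mean():.1f} ± {post.std(ddof=1):.1f}")

# Paired pre/post comparison (test choice is assumed, not stated in the abstract)
t, p = stats.ttest_rel(post, pre)
print(f"p = {p:.2f}")
```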

Results:

Thirty hospitalists completed the training and pre-program survey; 28 completed the post-program survey. Twenty observed a colleague and 20 were observed; 17 did both. Average attending experience was 3.5 years (±2.8), and observations averaged 95 minutes (±29). Immediately after the observation, observers agreed that “the experience was valuable” (mean = 4.5 ±0.8). Post-program, participants reported preparing more effectively for rounds (e.g., “There is a rounding plan before my team starts rounds,” pre = 3.7 ±1.0 vs. post = 4.1 ±0.7, p = 0.05), increased confidence in teaching at the bedside (e.g., “I am confident in my ability to teach effectively at the bedside,” pre = 3.2 ±0.7 vs. post = 3.7 ±0.6, p < 0.01), and improved performance of key skills (e.g., “I give specific feedback to learners after bedside rounding,” pre = 3.2 ±1.2 vs. post = 3.9 ±1.0, p = 0.04). Of the 17 participants who both observed and were observed, 11 felt that observing was the more useful role. The Peer Observation of Ward Rounding Tool was regarded as an excellent framework for both observing (4.5 ±0.6) and giving feedback (4.5 ±0.5). Overall, the program was valued (“Participating was worthwhile,” mean = 4.7 ±0.5), although the perceived effect on teaching skills was moderate (“My bedside teaching has improved as a result of the program,” mean = 3.9 ±0.6).

Conclusions:

Participants reported constructive changes in rounds preparation, confidence in teaching, and performance of bedside teaching skills. Peer observation of bedside teaching and rounding using a structured tool is valuable for hospitalists.