Step Five. In this step, a change in practice is implemented and evaluated
(Larrabee, 2009). A pilot study was conducted in May 2011 to evaluate the tool in a
large-scale military training event involving local community assets. The checklist was
evaluated in two phases: in the first phase, the tool was introduced through training of
the ASF and AE personnel involved in the event; in the second phase, use of the checklist
was evaluated within the mission flow of receiving and transporting patients. Each phase
was evaluated using the same questionnaire. Interrater reliability was assessed during the
first phase. Participants filled out the SBAR checklist based on the information they
were given on three standardized patients. Three evaluators from the EBP team
independently matched four participants’ data points with the master generated by the
team for each standardized patient (Appendix D). The number annotated in each block of
Tables 11 through 14 represents the number of evaluators in agreement when comparing
the participant’s SBAR tool to the master SBAR. Only Participants 1, 2, and 3 were
included in the calculation of interrater reliability. Kappa coefficients were calculated
for three pairs: Participant 1 and Participant 2, .77; Participant 1 and Participant 3,
.86; and Participant 2 and Participant 3, .84, yielding a mean of .82. The reason
Participant 4 was excluded is explained in the Limitations section of Chapter V.
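The pairwise agreement procedure described above can be sketched computationally. The following is a minimal illustration, not the EBP team's actual analysis: it implements the standard Cohen's kappa formula (observed agreement corrected for chance agreement) for two raters over the same items, and shows that the reported mean of the three pairwise coefficients (.77, .86, .84) is .82. The `cohens_kappa` function and the example ratings are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected),
    where p_expected is the chance agreement implied by each
    rater's marginal category frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# The three pairwise kappas reported in the pilot; the overall figure
# is their simple average.
pairwise = {"P1-P2": 0.77, "P1-P3": 0.86, "P2-P3": 0.84}
mean_kappa = sum(pairwise.values()) / len(pairwise)
print(round(mean_kappa, 2))  # 0.82
```

A mean kappa of .82 falls in the range conventionally interpreted as strong agreement, supporting the checklist's consistency across raters.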