After all three raters had coded independently, inter-rater reliability was assessed using percentage agreement and the kappa statistic. In the first round of coding, the raters agreed on 82% of all codes. The three raters then met to review the areas of agreement and disagreement and to make final coding decisions. In the second round of coding, agreement rose to 90.1% of coding instances. The kappa value of .727 indicated substantial agreement among the raters, an acceptable level of reliability for the study.
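For reference, kappa corrects the observed agreement for agreement expected by chance; the section does not specify whether the pairwise (Cohen) or multi-rater (Fleiss) form was computed, but both follow the same general expression:

\[
\kappa = \frac{P_o - P_e}{1 - P_e}
\]

where \(P_o\) is the observed proportion of agreement among the raters and \(P_e\) is the proportion of agreement expected by chance given the marginal distribution of codes.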