Providing credibility and reliability checks.
Three additional checks were used. First, as an initial
reliability check, the two coders conducted a consensus
review and appraisal of the themes for the first transcript.
Second, to assess inter-rater reliability, the same two
coders independently allocated 50 sample quotations from
the transcripts to the list of themes, yielding a kappa
value of 0.82, which indicates a good level of
inter-rater agreement. Third, the credibility of the final
themes was checked using "respondent validation,"
in which the themes were presented to six participants
for feedback. All six recognized and endorsed
the themes, and no participant suggested any significant
omission.
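For reference, the agreement statistic reported above is presumably Cohen's kappa for two raters; as a brief sketch, it is defined as

\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\]

where $p_o$ is the observed proportion of quotations allocated to the same theme by both coders and $p_e$ is the proportion of agreement expected by chance, computed from each coder's marginal theme frequencies. Values above about 0.8 are conventionally interpreted as indicating strong agreement.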