Another important piece of the qualitative data analysis was the calculation
of inter-rater reliability. Two coders independently coded a random selection
of 300 lines from the transcripts using the final draft of the codebook, as
recommended by Lombard, Snyder-Duch, and Bracken (2010). The resulting kappa
value fell in the "substantial" range (0.61 to 0.80) according to the benchmarks
set by Landis and Koch (1977).
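The inter-rater reliability check described above can be sketched in code. The following is a minimal illustration of Cohen's kappa for two coders, together with the Landis and Koch (1977) benchmark bands; the coder labels in the usage comments are invented for illustration and are not drawn from the study's transcripts.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters who coded the same items."""
    assert len(coder_a) == len(coder_b), "both coders must rate every item"
    n = len(coder_a)
    # Observed agreement: proportion of items given the same code by both coders.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: derived from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

def landis_koch_label(kappa):
    """Interpret kappa using the Landis & Koch (1977) benchmarks."""
    bands = [(0.00, "poor"), (0.20, "slight"), (0.40, "fair"),
             (0.60, "moderate"), (0.80, "substantial"), (1.00, "almost perfect")]
    for upper, label in bands:
        if kappa <= upper:
            return label

# Illustrative usage with made-up codes for four transcript lines:
a = ["support", "support", "barrier", "barrier"]
b = ["support", "barrier", "barrier", "barrier"]
k = cohens_kappa(a, b)  # observed 0.75, chance 0.50 -> kappa 0.50
```

A kappa of, say, 0.70 would map to the "substantial" band reported in the study.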
The validity and reliability of the qualitative data were further established through several strategies. For example, ontological appropriateness
and contingent validity were strengthened by drawing on a diverse range
of participant perspectives to describe the reality of teen pregnancy among
AIs (Healy & Perry, 2000). Descriptive validity was strengthened through the
use of verbatim responses and investigator triangulation, achieved by
cross-checking coding schemes to ensure that the investigators
agreed on the categorization of the data (Johnson, 1997; Maxwell, 1992).
Interpretive validity, which refers to the accuracy with which the researchers
portrayed the meaning participants attached to the data
(Johnson, 1997; Maxwell, 1992), was likewise strengthened through the use of verbatim responses: little was left open to interpretation beyond the creation of
the categories into which the verbatim responses were coded.
The results and discussion relate closely to the participants' actual written
responses and can therefore be regarded as having strong theoretical
validity.