There are several general classes of reliability estimates:
Inter-rater reliability assesses the degree of agreement between two or more raters in their appraisals.
Test-retest reliability assesses the degree to which test scores are consistent from one test administration to the next. Measurements are gathered from a single rater who uses the same methods or instruments and the same testing conditions.[3] This includes intra-rater reliability.
Inter-method reliability assesses the degree to which test scores are consistent when there is a variation in the methods or instruments used. This allows inter-rater variability to be ruled out as a source of error. When dealing with forms, it may be termed parallel-forms reliability.[4]
Internal consistency reliability assesses the consistency of results across items within a test.[4]