Test-Retest Reliability
One basic premise of reliability is the stability of the measuring instrument; that is, a reliable instrument will obtain the same results with repeated administrations of the test. Test-retest reliability assessment is used to establish that an instrument is capable of measuring a variable with consistency. In a test-retest study, one sample of individuals is subjected to the identical test on two separate occasions, keeping all testing conditions as constant as possible. The coefficient derived from this type of analysis is called a test-retest reliability coefficient. This estimate can be obtained for a variety of testing tools, and is generally indicative of reliability in situations where raters are not involved, such as self-report survey instruments and physical and physiological measures with mechanical or digital readouts. If the test is reliable, the subject's score should be similar on multiple trials. In terms of reliability theory, the extent to which the scores vary is interpreted as measurement error. Because variation in measurement must be considered within the context of the total measurement system, errors may actually be attributed to many sources. Therefore, to assess the reliability of an instrument, the researcher must be able to assume stability in the response variable. Unfortunately, many variables do change over time. For example, a patient's self-assessment of pain may change between two testing sessions. We must also consider the inconsistency with which many clinical variables naturally respond over time. When responses are labile, test-retest reliability may be impossible to assess.
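To make the idea of a test-retest reliability coefficient concrete, the sketch below computes one common form, the intraclass correlation coefficient ICC(2,1) (two-way random effects, absolute agreement, single measure), from scores collected on two occasions. The text does not prescribe a particular coefficient, and the function name and sample scores here are assumptions for illustration only.

    import numpy as np

    def icc_2_1(scores):
        """ICC(2,1): two-way random effects, absolute agreement, single measure.
        scores: (n subjects x k trials) array, e.g., test and retest columns."""
        scores = np.asarray(scores, dtype=float)
        n, k = scores.shape
        grand = scores.mean()
        row_means = scores.mean(axis=1)   # each subject's mean across trials
        col_means = scores.mean(axis=0)   # each trial's mean across subjects
        ss_rows = k * np.sum((row_means - grand) ** 2)   # between-subjects SS
        ss_cols = n * np.sum((col_means - grand) ** 2)   # between-trials SS
        ss_total = np.sum((scores - grand) ** 2)
        ss_error = ss_total - ss_rows - ss_cols          # residual SS
        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_error = ss_error / ((n - 1) * (k - 1))
        return (ms_rows - ms_error) / (
            ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
        )

    # Hypothetical data: self-reported pain scores for 6 subjects on two occasions.
    test = [4, 7, 5, 8, 6, 3]
    retest = [5, 7, 4, 8, 6, 4]
    print(f"Test-retest ICC(2,1) = {icc_2_1(np.column_stack([test, retest])):.3f}")

A coefficient near 1.0 indicates stable scores across the two sessions; lower values reflect the measurement error described above. A simple Pearson correlation between the two trials is another common, though less stringent, choice, because it ignores systematic shifts between sessions.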
Carryover and Testing Effects
With two or more measures, reliability can be influenced by the effect of the first test on the outcome of the second test. For example, practice or carryover effects can occur with repeated measurements, changing performance on subsequent trials. A test of dexterity may improve because of motor learning. Strength measurements can improve following warm-up trials. Sometimes subjects are given a series of pretest trials to neutralize this effect.
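One way to check whether a carryover or practice effect is present, not described in the text but sketched here as an illustration under assumed data, is to test for a systematic mean difference between the two sessions with a paired comparison: random measurement error should average out across subjects, whereas a practice effect produces a consistent shift in one direction.

    from scipy.stats import ttest_rel

    # Hypothetical dexterity scores (higher = better) for 6 subjects.
    trial_1 = [22, 25, 19, 30, 27, 24]
    trial_2 = [25, 27, 22, 32, 29, 27]  # consistently higher: possible practice effect

    # Paired t-test: a significant mean difference suggests a systematic
    # carryover/practice effect rather than random measurement error.
    t_stat, p_value = ttest_rel(trial_2, trial_1)
    mean_change = sum(t2 - t1 for t1, t2 in zip(trial_1, trial_2)) / len(trial_1)
    print(f"Mean change = {mean_change:.2f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

If scores rise steadily across administrations, the remedy noted in the paragraph above applies: pretest trials can be used to neutralize the effect before the reliability data are collected.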