Intrarater reliability
Intrarater reliability refers to the stability of data recorded by one individual across two or more trials. When carryover or practice effects are not an issue, intrarater reliability is usually assessed using trials that follow each other with short intervals. Reliability is best established with multiple trials (more than two), although the number of trials needed depends on the expected variability in the response. In a situation where a rater's skill is relevant to the accuracy of the measurement, intrarater reliability and test-retest reliability are essentially the same; the effects of the rater and the measurement itself cannot be separated out. Researchers may assume that intrarater reliability is achieved simply by having one experienced individual perform all measurements; however, the objective nature of scientific inquiry demands that rater reliability be evaluated even under expert conditions. Expert clinical standards may not always match the level of precision needed for research documentation. By establishing statistical reliability, those who critique the research cannot question the measurement accuracy of the data, and the research conclusions will be strengthened.

Rater Bias

We must also consider the potential for bias when one rater takes two measurements. Raters can be influenced by their memory of the first score. This is most relevant in cases where human observers use subjective criteria to rate responses, but it can operate in any situation where a tester must read a score from an instrument. The most effective way to control for this type of error is to blind the tester in some way, so that the first score remains unknown until after the second trial is completed; however, because most clinical measurements are observational, such a technique is often unreasonable. For instance, we could not blind a clinician to measures of balance, function, muscle testing, or gait, where the tester is an integral part of the measurement system. The major protections against tester bias are to develop grading criteria that are as objective as possible, to train the testers in the use of the instrument, and to document reliability across raters.
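
As an illustration of how intrarater reliability across repeated trials can be quantified (this sketch is not part of the measurement procedure described above), one common summary is an intraclass correlation coefficient. The code below assumes one rater's scores arranged as a subjects-by-trials array and uses the ICC(3,1) form (two-way mixed effects, single measurement, consistency); the range-of-motion values are hypothetical.

```python
import numpy as np

def icc_3_1(scores: np.ndarray) -> float:
    """ICC(3,1): two-way mixed effects, single measurement, consistency.

    scores: n_subjects x k_trials array of measurements taken by one rater.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand_mean = scores.mean()

    # Sums of squares from a two-way ANOVA (subjects x trials, no replication).
    ss_subjects = k * ((scores.mean(axis=1) - grand_mean) ** 2).sum()
    ss_trials = n * ((scores.mean(axis=0) - grand_mean) ** 2).sum()
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_subjects - ss_trials

    ms_subjects = ss_subjects / (n - 1)          # between-subjects mean square
    ms_error = ss_error / ((n - 1) * (k - 1))    # residual mean square

    return (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)


# Hypothetical example: one rater measures knee flexion (degrees)
# on 5 subjects, 3 trials each, with short intervals between trials.
trials = np.array([
    [132, 130, 131],
    [118, 120, 119],
    [141, 139, 142],
    [125, 126, 124],
    [110, 112, 111],
])
print(f"Intrarater ICC(3,1) = {icc_3_1(trials):.3f}")
```

Values close to 1.0 indicate that the rater's scores are stable across trials relative to the differences among subjects; how high is "high enough" depends on the expected variability in the response and the precision the study requires.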
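
As a rough screen for the kind of systematic influence described under Rater Bias, one can also inspect the mean difference between a rater's first and second trials: a mean difference consistently away from zero suggests a drift, such as scores being pulled toward the remembered first value. This is only an illustrative check, not a substitute for blinding, objective grading criteria, or rater training, and the data below are hypothetical.

```python
import numpy as np

# Hypothetical paired scores: one rater, two trials on the same 5 subjects.
trial_1 = np.array([132, 118, 141, 125, 110], dtype=float)
trial_2 = np.array([131, 120, 140, 126, 112], dtype=float)

# Per-subject differences (trial 2 minus trial 1).
diffs = trial_2 - trial_1

# A mean difference far from zero relative to its SD points to a systematic
# shift between trials rather than random trial-to-trial error.
print(f"Mean shift = {diffs.mean():.2f}, SD of differences = {diffs.std(ddof=1):.2f}")
```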