Category of Assessment (adopted from "An OT Approach to Evaluation of Cognition/Perception", Vancouver Coastal Health, April 2011)

Level of task performance (ICF: activity & participation)
• Screening assessment: provides screening in the context of occupation (e.g. Cognitive Performance Test, Kettle Test); may provide higher ecological & predictive validity than impairment-based screening
• In-depth assessment: provides in-depth understanding of the impact of cognitive deficits on occupation (e.g. AMPS, EFPT, ILS); may provide higher ecological & predictive validity than in-depth assessment at the level of impairment

Level of impairment (ICF: body-structure)
• Screening assessment: augments screening at the level of task performance (e.g. SMMSE, MoCA, Cognistat); be aware of limitations (e.g. predictive validity, depth of assessment)
• In-depth assessment: provides some in-depth understanding of specific cognitive components such as memory and attention (e.g. Rivermead Behavioural Memory Test, Test of Everyday Attention)

Statistical Evaluation Criteria (from StrokEngine, accessed Dec 2012, http://www.medicine.mcgill.ca/strokengine-assess/statistics-en.html)

Reliability
• Test-retest or inter-rater reliability (ICC or kappa statistics): Excellent ≥ 0.75; Adequate 0.40-0.74; Poor < 0.40
• Internal consistency (Cronbach's α or split-half statistics): Excellent ≥ 0.80; Adequate 0.70-0.79; Poor < 0.70

Validity
• Concurrent and construct/convergent correlations: Excellent ≥ 0.60; Adequate 0.31-0.59; Poor ≤ 0.30

Purpose: This inventory was developed to complement the algorithm entitled "An OT Approach to Evaluation of Cognition/Perception". It is an inventory of cognitive (but not perceptual) assessment tools identified by OTs within VCH and PHC. These tools are not meant to be used in isolation during the process of cognitive assessment but, instead, during Steps 4 & 5 of the assessment process (as per the algorithm).
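The StrokEngine cut-offs above can be expressed as a simple lookup. This is a minimal illustrative sketch only (not part of the inventory); the dictionary keys and function name are assumptions, while the threshold values are copied from the criteria table.

```python
# StrokEngine evaluation criteria: (excellent cut-off, adequate cut-off).
# Values below the adequate cut-off are rated "Poor".
CRITERIA = {
    "reliability": (0.75, 0.40),            # test-retest/inter-rater (ICC or kappa)
    "internal_consistency": (0.80, 0.70),   # Cronbach's alpha or split-half
    "validity": (0.60, 0.31),               # concurrent/construct correlations
}

def rate(kind: str, value: float) -> str:
    """Classify a coefficient against the StrokEngine cut-offs."""
    excellent, adequate = CRITERIA[kind]
    if value >= excellent:
        return "Excellent"
    if value >= adequate:
        return "Adequate"
    return "Poor"
```

For example, an ICC of 0.80 rates "Excellent" for reliability, while a concurrent-validity correlation of 0.50 rates only "Adequate".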
Although this inventory provides a broad list of standardized tools available to OTs to measure cognition, it is not exhaustive.

DEFINITIONS

In deciding whether an assessment tool is precise, it is important to consider both reliability and validity.

Reliability: "Does the test provide a consistent measure?"
• Internal consistency = the extent to which the items of a test measure various aspects of a common characteristic (e.g., "memory"). Do the items/subtests of the measure consistently measure the same aspect of cognition as each other?
• Test-retest reliability = the extent to which the measure consistently provides the same results when used a second time (re-test). Parallel-form reliability would involve two different/alternate versions of the same test.
• Inter-rater reliability = the extent to which two or more raters (assessors) obtain the same result when using the same instrument. Do they produce consistent results?

Validity: "Does the test measure what it is supposed to measure?"
• Criterion validity = the extent to which a new measure is consistent with a gold-standard criterion (i.e., a previously validated measure). For concurrent validity, the measures are administered at approximately the same time. For predictive validity, one measure is typically administered some time before the criterion measure (to examine whether the measure can predict, or correlate with, the outcome of a subsequent criterion event). Note: poor concurrent validity would suggest that the tests being compared measure different constructs; adequate concurrent validity suggests some shared variance in the constructs being measured; and excellent concurrent validity suggests that the tests measure very similar constructs.
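The internal-consistency statistic named above, Cronbach's α, can be sketched in a few lines. This is an illustrative implementation using only the standard library; the function name and data layout (one list of scores per item, respondents in the same order) are assumptions for the example.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

    item_scores: one list of scores per item, with respondents in the same
    order in every list. Higher alpha means the items more consistently
    measure the same underlying characteristic.
    """
    k = len(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]
    item_var = sum(pvariance(scores) for scores in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```

Two items that rank respondents identically yield α = 1.0; as items diverge, α falls, and a value below 0.70 would rate "Poor" on the StrokEngine criteria.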
If two tests are highly correlated with each other, one would want to question the need for both tests, and then to determine other ways in which one test might be superior to the other (for example, one takes less time to administer).
• Construct validity = the extent to which a test can be shown to measure a construct, e.g. "memory" or "cognition for everyday function". The construct validation process may be used when a gold standard (previously validated criterion) does not exist, and thus when one cannot test for concurrent validity. Convergent validity is the extent to which a test agrees with another test (or tests) believed to measure the same attribute. Discriminant validity is the extent to which tests that are supposed to be unrelated are, in fact, unrelated (i.e., measure different things).
• Group differences refers to: "Does the measure allow you to differentiate between two or more populations?", for example as determined by analyzing for statistically significant differences between the groups on the measure.
• Ecological validity refers to: "Does the measure reflect behaviours/function that actually occur in natural/everyday settings?"

Vancouver Coastal Health and Providence Health Care, Occupational Therapy Practice: Occupational Therapy Cognitive Assessment Inventory & References, last updated March 2012.