Assumption #1: Your dependent variable should be measured at the continuous level (i.e., it is an interval or ratio variable). Examples of continuous variables include revision time (measured in hours), intelligence (measured using IQ score), exam performance (measured from 0 to 100), weight (measured in kg), and so forth. You can learn more about interval and ratio variables in our article: Types of Variable.
Assumption #2: Your independent variable should consist of at least two categorical, "related groups" or "matched pairs". "Related groups" indicates that the same subjects are present in both groups. The reason that it is possible to have the same subjects in each group is because each subject has been measured on two occasions on the same dependent variable. For example, you might have measured 10 individuals' performance in a spelling test (the dependent variable) before and after they underwent a new form of computerized teaching method to improve spelling. You would like to know if the computer training improved their spelling performance. The first related group consists of the subjects prior to the computerized spelling training and the second related group consists of the same subjects, but now at the end of the computerized training. The repeated measures ANOVA can also be used to compare different subjects, but this does not happen very often. Nonetheless, to learn more about the different study designs you can use with a repeated measures ANOVA, see our enhanced repeated measures ANOVA guide.
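To make the "related groups" idea concrete outside of SPSS Statistics, here is a small Python sketch of the underlying calculation. It extends the spelling example to three measurement occasions (the scores are invented for illustration) and computes the one-way repeated measures ANOVA F-statistic by hand: because every row is the same subject, variability between subjects can be partitioned out before testing the effect of occasion.

```python
import numpy as np

# Hypothetical spelling-test scores for 5 subjects, each measured on
# 3 occasions (before, during, and after the computerized training).
# Each row is one subject -- the same people appear in every column,
# which is what makes the columns "related groups".
scores = np.array([
    [70, 74, 80],
    [62, 66, 69],
    [80, 83, 88],
    [55, 60, 64],
    [68, 70, 75],
], dtype=float)

n_subjects, n_conditions = scores.shape
grand_mean = scores.mean()

# Partition the total variability, as a one-way repeated measures ANOVA does:
ss_conditions = n_subjects * ((scores.mean(axis=0) - grand_mean) ** 2).sum()
ss_subjects = n_conditions * ((scores.mean(axis=1) - grand_mean) ** 2).sum()
ss_total = ((scores - grand_mean) ** 2).sum()
ss_error = ss_total - ss_conditions - ss_subjects  # within-subject residual

df_conditions = n_conditions - 1
df_error = (n_subjects - 1) * (n_conditions - 1)

F = (ss_conditions / df_conditions) / (ss_error / df_error)
print(f"F({df_conditions}, {df_error}) = {F:.2f}")
```

Note how subject-to-subject variability (ss_subjects) is removed from the error term; this is exactly the advantage of measuring the same subjects repeatedly rather than comparing different subjects.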
Assumption #3: There should be no significant outliers in the related groups. Outliers are simply single data points within your data that do not follow the usual pattern (e.g., in a study of 100 students' IQ scores, where the mean score was 108 with only a small variation between students, one student had a score of 156, which is very unusual, and may even put her in the top 1% of IQ scores globally). The problem with outliers is that they can have a negative effect on the repeated measures ANOVA, distorting the differences between the related groups (whether increasing or decreasing the scores on the dependent variable), and can reduce the accuracy of your results. Fortunately, when using SPSS Statistics to run a repeated measures ANOVA on your data, you can easily detect possible outliers. In our enhanced repeated measures ANOVA guide, we: (a) show you how to detect outliers using SPSS Statistics; and (b) discuss some of the options you have in order to deal with outliers.
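As a rough illustration of outlier screening (SPSS Statistics does this for you via boxplots), the following Python sketch applies the common 1.5 × IQR boxplot rule to invented IQ scores matching the example above: most scores cluster near 108, and one student scored 156.

```python
import numpy as np

# Hypothetical IQ scores: most cluster around 108, one student scored 156.
iq = np.array([104, 106, 107, 108, 108, 109, 110, 111, 112, 156])

# The 1.5 * IQR boxplot rule: any point beyond the lower or upper
# "fence" is flagged as a potential outlier.
q1, q3 = np.percentile(iq, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = iq[(iq < lower) | (iq > upper)]
print(outliers)  # the score of 156 is flagged
```

Flagging a point is only the first step; whether to keep, transform, or remove it is a judgment call that depends on why the value is unusual.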
Assumption #4: The distribution of the dependent variable in the two or more related groups should be approximately normally distributed. We talk about the repeated measures ANOVA only requiring approximately normal data because it is quite "robust" to violations of normality, meaning that the assumption can be a little violated and the test can still provide valid results. You can test for normality using the Shapiro-Wilk test of normality, which can be run easily in SPSS Statistics. In addition to showing you how to do this in our enhanced repeated measures ANOVA guide, we also explain what you can do if your data fails this assumption (i.e., if it fails it more than a little bit).
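If you prefer to check this outside SPSS Statistics, the Shapiro-Wilk test is also available in Python via scipy.stats.shapiro. The sketch below runs it on invented before/after spelling scores for the same 10 subjects; a p-value above .05 is conventionally taken to mean the normality assumption is tenable for that group.

```python
import numpy as np
from scipy import stats

# Hypothetical before/after spelling scores for the same 10 subjects.
before = np.array([58, 62, 65, 67, 70, 71, 73, 74, 76, 80])
after = np.array([66, 69, 70, 73, 75, 77, 78, 80, 83, 86])

# Shapiro-Wilk test on each related group: a small p-value suggests
# the scores depart from a normal distribution.
for label, group in [("before", before), ("after", after)]:
    w, p = stats.shapiro(group)
    print(f"{label}: W = {w:.3f}, p = {p:.3f}")
```

With samples this small the test has limited power, so it is worth pairing it with a visual check (e.g., a Q-Q plot) rather than relying on the p-value alone.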
Assumption #5: Known as sphericity, the variances of the differences between all combinations of related groups must be equal. Unfortunately, repeated measures ANOVAs are particularly susceptible to violating the assumption of sphericity, which causes the test to become too liberal (i.e., leads to an increase in the Type I error rate; that is, the likelihood of detecting a statistically significant result when there isn't one). Fortunately, SPSS Statistics makes it easy to test whether your data has met or failed this assumption. Therefore, in our enhanced repeated measures ANOVA guide, we (a) show you how to perform Mauchly's test of sphericity in SPSS Statistics, (b) explain some of the things you will need to consider when interpreting your data, and (c) present possible ways to continue with your analysis if your data fails to meet this assumption.
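The quantity sphericity refers to is easy to inspect directly. This Python sketch (invented scores for 5 subjects under 3 conditions) computes the variance of the differences for every pair of related groups; sphericity holds when these variances are roughly equal. Note this is only an informal look at the assumption, not Mauchly's test itself, which SPSS Statistics computes for you.

```python
from itertools import combinations

import numpy as np

# Hypothetical design: 5 subjects each measured under 3 conditions.
scores = np.array([
    [70, 74, 80],
    [62, 66, 69],
    [80, 83, 88],
    [55, 60, 64],
    [68, 70, 75],
], dtype=float)

# Sphericity: the variances of the pairwise difference scores
# should all be (roughly) equal.
for i, j in combinations(range(scores.shape[1]), 2):
    diff = scores[:, i] - scores[:, j]
    print(f"var(cond{i} - cond{j}) = {np.var(diff, ddof=1):.2f}")
```

If these variances diverge substantially, a common remedy is to apply a correction to the degrees of freedom (e.g., Greenhouse-Geisser), which SPSS Statistics reports alongside Mauchly's test.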