The third case is a further simplification of the first two. The hypothesis
is that a single β_j is zero; H_0 : β_j = 0. For this hypothesis, K' is a row
vector of zeros except for a one in the column corresponding to the β_j
being tested. As described in case 2, the sum of squares for this hypothesis
is the contribution of X_j adjusted for all other independent variables in the
model. This sum of squares is called the partial sum of squares for the
jth independent variable.
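As a sketch of this computation (using simulated data, not an example from the text), the quadratic form Q = (K'b)'[K'(X'X)^{-1}K]^{-1}(K'b) with K' a unit row vector reduces to the partial sum of squares for β_j, which can be checked against the equivalent difference in residual sums of squares between the full model and the model with X_j deleted:

```python
import numpy as np

# Simulated data (assumption: any full-rank design works here)
rng = np.random.default_rng(0)
n = 30
X = np.column_stack([np.ones(n),
                     rng.normal(size=n),
                     rng.normal(size=n)])   # intercept plus two predictors
beta_true = np.array([1.0, 2.0, 0.5])
y = X @ beta_true + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                       # least-squares estimates

j = 2                                       # test H_0: beta_j = 0
K = np.zeros((1, X.shape[1]))
K[0, j] = 1.0                               # K' is all zeros except a one for beta_j

# Sum of squares for the hypothesis: Q = (K'b)'[K'(X'X)^(-1)K]^(-1)(K'b)
Q = float((K @ b).T @ np.linalg.inv(K @ XtX_inv @ K.T) @ (K @ b))

# Equivalent computation: SSE(reduced) - SSE(full), dropping column j
X_red = np.delete(X, j, axis=1)
b_red = np.linalg.lstsq(X_red, y, rcond=None)[0]
sse_full = float(((y - X @ b) ** 2).sum())
sse_red = float(((y - X_red @ b_red) ** 2).sum())

print(np.isclose(Q, sse_red - sse_full))    # the two routes give the same partial SS
```

The agreement of the two computations reflects the fact that, for a single-row K', Q simplifies to b_j^2 divided by the jth diagonal element of (X'X)^{-1}, which is exactly the reduction in residual sum of squares attributable to X_j after all other variables.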