According to a sum-of-squared-error criterion, the prediction error is simply the squared error $[P_G(t) - p_G(t)]^2$. However, there is a problem with this measure of error: it penalizes all errors equally, regardless of the uncertainty of our prediction. A sample proportion $P$ based on $N$ observations has a binomial distribution, with mean $p = E[P]$ (the expected value of $P$) and variance $V(P) = p \cdot (1-p)/N$. The variance is minimal when the true probability is close to zero or one, and it is at its maximum when the true probability is close to .50. Therefore, errors that occur at the extremes should be penalized more than errors that occur in the middle of the probability range, because the variance is larger in the middle. For this reason, it is statistically superior (with respect to the variance of the estimated parameters) to weight each squared error by the reciprocal of its variance, producing a weighted squared error:
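The weighting scheme described above can be sketched as follows; this is an illustrative implementation, and the names `observed_props`, `predicted_probs`, and `n_obs` are assumptions, not identifiers from the source:

```python
def weighted_sse(observed_props, predicted_probs, n_obs):
    """Sum of squared errors, each weighted by the reciprocal of the
    binomial variance V(P) = p * (1 - p) / N of the sample proportion."""
    total = 0.0
    for P, p in zip(observed_props, predicted_probs):
        variance = p * (1.0 - p) / n_obs   # binomial variance of proportion
        total += (P - p) ** 2 / variance   # reciprocal-variance weighting
    return total
```

Because the variance in the denominator shrinks as the predicted probability approaches 0 or 1, the same absolute error contributes more to the criterion at the extremes than near .50, exactly as the argument above requires.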