Generalized linear models (GLMs) extend the linear modeling capability of R to scenarios that involve non-normal error distributions or heteroscedasticity. All other classic assumptions (particularly independent observations) still apply. The idea is that the expected value of the response, rather than the response itself, is related to a linear function of the predictors through a link function: applying the link to the mean puts the model on a scale where it is linear in the predictors. The model is then fitted on this transformed (link) scale, using an iterative routine based on weighted least squares (iteratively reweighted least squares), but the variance is modeled on the original scale of the response, according to the chosen error distribution. Simple examples of link functions (applied to the expected response, here written mu) are log(mu) [which linearizes exp(x)], sqrt(mu) [x^2], and 1/mu [1/x]. More particularly, GLMs work for the so-called 'exponential' family of error models: Poisson, binomial, Gamma, and normal (Gaussian).
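As a minimal sketch of how this looks in R (the data below are simulated, and the variable names are illustrative rather than taken from any particular data set), a Poisson GLM with its default log link is specified through the family argument of glm():

set.seed(1)
x <- runif(100, 0, 2)
y <- rpois(100, lambda = exp(0.5 + 1.2 * x))        # counts whose log-mean is linear in x

fit <- glm(y ~ x, family = poisson(link = "log"))    # "log" is the default link and could be omitted
summary(fit)

A non-default link can be requested in the same way, e.g. family = Gamma(link = "log") in place of the Gamma family's default inverse link. Each of these error families has a standard (default) link function: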