sometimes called the deviation between f and g at x0 (Figure 9.4.1). However, we are concerned with approximation over the entire interval [a, b], not at a single point. Consequently, in one part of the interval an approximation g1 to f may have a smaller deviation from f than an approximation g2 to f, while in another part it may be the other way around. How do we decide which is the better overall approximation? What we need is some way of measuring the overall error in an approximation g(x). One possible measure of overall error is obtained by integrating the deviation |f(x) - g(x)| over the entire interval [a, b]; that is,