precise; to do this we need a precise way of measuring the error that results when one continuous function is approximated by another over [a, b]. If we were concerned only with approximating f(x) at a single point x₀, then the error at x₀ by an approximation g(x) would be simply

|f(x₀) − g(x₀)|

sometimes called the deviation between f and g at x₀ (Figure 9.4.1). However, we are concerned with approximation over the entire interval [a, b], not at a single point. Consequently, in one part of the interval an approximation g₁ to f may have smaller deviations from f than an approximation g₂ to f, and in another part it might be the other way around. How do we decide which is the better overall approximation? What we need is some way of measuring the overall error in an approximation g(x). One possible measure of overall error is obtained by integrating the deviation |f(x) − g(x)| over the entire interval [a, b]; that is,
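The situation described above can be sketched numerically: the following is a minimal illustration (not from the text) of computing the overall error ∫ₐᵇ |f(x) − g(x)| dx with a midpoint sum, for two illustrative approximations of sin x. The choices of f, g₁, g₂, and the interval are assumptions made for the example.

```python
import math

def overall_error(f, g, a, b, n=10_000):
    """Estimate the integral of |f(x) - g(x)| over [a, b] with n midpoint samples."""
    h = (b - a) / n
    return sum(abs(f(a + (i + 0.5) * h) - g(a + (i + 0.5) * h))
               for i in range(n)) * h

f  = math.sin                    # function being approximated
g1 = lambda x: x                 # linear Taylor approximation at x = 0
g2 = lambda x: x - x**3 / 6      # cubic Taylor approximation at x = 0

a, b = 0.0, math.pi / 2
e1 = overall_error(f, g1, a, b)  # about 0.234
e2 = overall_error(f, g2, a, b)  # about 0.020
```

Here g₂ has the smaller overall error on [0, π/2], even though both approximations have zero deviation at the single point x = 0; this is exactly why a pointwise comparison is not enough and an interval-wide measure is needed.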