Assessing goodness of fit is a necessary component of any sound procedure for modeling data, and the importance of such tests cannot be stressed enough, given that fitted thresholds and slopes, as well as estimates of variability (Wichmann & Hill, 2001), are usually of very limited use if the data do not appear to have come from the hypothesized model. A common method of goodness-of-fit assessment is to calculate an error term or summary statistic that can be shown to be asymptotically distributed according to χ² (for example, Pearson X²) and to compare that error term against the appropriate χ² distribution. A problem arises, however, because psychophysical data sets tend to consist of small numbers of points, and it is hence by no means certain that such tests are accurate.

A promising technique that offers a possible solution is Monte Carlo simulation, which, being computationally intensive, has become practicable only in recent years with the dramatic increase in desktop computing speeds. It is potentially well suited to the analysis of psychophysical data, because its accuracy does not rely on large numbers of trials, as do methods derived from asymptotic theory (Hinkley, 1988). We show that, for the typically small K and N used in psychophysical experiments, assessing goodness of fit by comparing an empirically obtained statistic against its asymptotic distribution is not always reliable: The true small-sample distribution of the statistic is often insufficiently well approximated by its asymptotic distribution. Thus, we advocate generation of the necessary distributions by Monte Carlo simulation.
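To illustrate the approach, the following is a minimal sketch (not the authors' code) of a Monte Carlo goodness-of-fit test for binomial psychophysical data. The arrays n_trials, n_correct, and p_fit are hypothetical placeholders for the number of trials per block, the observed correct responses, and the predictions of an already-fitted psychometric function; for brevity, each simulated data set is scored against the generating probabilities rather than refitted, which is a simplification of the full procedure.

import numpy as np

def pearson_x2(n_correct, n_trials, p):
    """Pearson X^2 summary statistic for K binomial blocks."""
    expected = n_trials * p
    variance = n_trials * p * (1.0 - p)
    return np.sum((n_correct - expected) ** 2 / variance)

def monte_carlo_pvalue(n_correct, n_trials, p_fit, n_sim=10000, rng=None):
    """Compare the observed X^2 with its simulated small-sample
    distribution rather than the asymptotic chi-square distribution."""
    rng = np.random.default_rng() if rng is None else rng
    observed = pearson_x2(n_correct, n_trials, p_fit)
    # Draw n_sim synthetic data sets from the fitted model and score each one.
    simulated = np.array([
        pearson_x2(rng.binomial(n_trials, p_fit), n_trials, p_fit)
        for _ in range(n_sim)
    ])
    # Monte Carlo p value: proportion of simulated statistics at least as
    # extreme as the observed one.
    return observed, np.mean(simulated >= observed)

# Example with the small K and N typical of psychophysical experiments
# (hypothetical numbers, for illustration only).
n_trials = np.array([40, 40, 40, 40, 40, 40])
n_correct = np.array([21, 24, 29, 34, 38, 39])
p_fit = np.array([0.52, 0.61, 0.74, 0.86, 0.94, 0.98])
x2, p_value = monte_carlo_pvalue(n_correct, n_trials, p_fit)
print(f"X^2 = {x2:.2f}, Monte Carlo p = {p_value:.3f}")

The key point of the sketch is that the reference distribution is built from simulated data sets of exactly the same size and structure as the experiment, so its accuracy does not depend on the asymptotic χ² approximation holding for small K and N.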