The limit of determination is a very simple concept: it is the smallest amount or concentration of an analyte that can be estimated with acceptable reliability. But this statement contains an inherent contradiction: the smaller the amount of analyte measured, the greater the unreliability of the estimate. As we go down the concentration scale, the relative standard deviation increases to the point where a substantial fraction of the distribution of results overlaps zero and false negatives appear. Therefore the definition of the limit comes down to the question of what fraction of values we are willing to tolerate as false negatives.
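To illustrate how false negatives arise as the relative standard deviation grows, the sketch below (an illustrative assumption of this text, not part of the cited work) treats a single result as normally distributed about the true concentration with standard deviation RSD × concentration; the fraction of results falling at or below zero is then Φ(−1/RSD), independent of the concentration itself.

```python
from statistics import NormalDist

def false_negative_fraction(rsd: float) -> float:
    """Fraction of results, assumed normally distributed about the true
    concentration with relative standard deviation `rsd`, that fall at or
    below zero and would therefore be reported as false negatives."""
    # P(X <= 0) for X ~ N(c, (rsd*c)^2) is Phi(-1/rsd), whatever c is.
    return NormalDist().cdf(-1.0 / rsd)

for rsd in (0.10, 0.20, 0.33, 0.50):
    print(f"RSD = {rsd:.0%}: false negatives ~ {false_negative_fraction(rsd):.2%}")
```

Under this assumption the false-negative fraction is negligible at RSD = 10–20%, still only about 0.1% at RSD = 33%, but rises to roughly 2% at RSD = 50%, which is the sense in which results become unreliable as the limit is approached.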
Thompson and Lowthian (loc. cit.) consider the point defined by RSDR = 33% as the upper bound for useful data, derived from the fact that an interval of ±3 × RSDR contains essentially all (about 99.7%) of the values of a normal distribution, so that at RSDR = 33% the lower limit of that interval just reaches zero. This is equivalent to a concentration of about 8 × 10⁻⁹ (as a mass fraction), i.e. 8 ng/g (ppb). Below this level false negatives appear and the data go “out of control”. From the formula, this value also corresponds to an RSDr of approximately 20%. The penalty for operating below the equivalent concentration level is the generation of false-negative values. Such signals are generally accepted as negative and are not repeated.
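Assuming the formula referred to is the Horwitz function, RSDR(%) = 2^(1 − 0.5 log10 C) with C expressed as a mass fraction, the equivalence of RSDR = 33% and C ≈ 8 × 10⁻⁹ can be checked as in the sketch below; the ratio RSDr ≈ (2/3) RSDR used for the repeatability figure is likewise an assumption of this sketch, not a statement from the cited work.

```python
import math

def horwitz_rsdr_percent(c: float) -> float:
    """Predicted reproducibility RSD (%) from the Horwitz function,
    with c expressed as a mass fraction (g/g)."""
    return 2 ** (1 - 0.5 * math.log10(c))

c = 8e-9                                   # ~8 ng/g as a mass fraction
rsdr = horwitz_rsdr_percent(c)
print(f"RSDR at {c:.0e} g/g  : {rsdr:.1f}%")          # ~33%
print(f"RSDr (~2/3 of RSDR) : {2 * rsdr / 3:.1f}%")   # ~22%, i.e. roughly 20%

# Inverting 2**(1 - 0.5*log10(C)) = 33 gives the concentration at which
# the predicted RSDR reaches the 33% bound:
c_33 = 10 ** (2 * (1 - math.log2(33)))
print(f"C where RSDR = 33%  : {c_33:.1e} (mass fraction)")  # ~8e-9
```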