There are many ways to define and quantify the stability of a learning algorithm. A natural way of making such a definition is to start from the goal: we want bounds on the generalization error of a specific learning algorithm, and we want these bounds to be tight when the algorithm satisfies the stability criterion.
As one may expect, the more restrictive a stability criterion is, the tighter the corresponding
bound will be.
In the learning model we consider, the randomness comes from the sampling of the training set.
We will thus consider stability with respect to changes in the training set. Moreover, since we need
an easy-to-check criterion, we will consider only restricted changes such as the removal or
replacement of a single example in the training set.
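The idea of measuring stability under the removal of a single training example can be sketched numerically. The following is a minimal illustration, not a definition from this text: the "learning algorithm" is a toy estimator (the mean of the training labels), and `removal_stability` is a hypothetical helper that reports the largest change in the learned output when any one example is deleted.

```python
import numpy as np

def fit_mean(S):
    # Toy learning algorithm: the learned "hypothesis" is just
    # the mean of the training labels.
    return float(np.mean(S))

def removal_stability(S):
    # Largest change in the learned output caused by removing
    # a single example from the training set S.
    full = fit_mean(S)
    changes = []
    for i in range(len(S)):
        S_i = np.delete(S, i)  # training set with example i removed
        changes.append(abs(fit_mean(S_i) - full))
    return max(changes)

rng = np.random.default_rng(0)
S = rng.normal(size=20)
print(removal_stability(S))
```

For the mean estimator, removing example i shifts the output by |x_i - mean(S)| / (n - 1), so the perturbation shrinks as the training set grows, which is the qualitative behavior a stability criterion is meant to capture.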
Although not explicitly mentioned in their work, the first such notion was used by Devroye and
Wagner (1979a) to obtain bounds on the variance of the error of local learning algorithms.
Later, Kearns and Ron (1999) stated it as a definition and gave it a name. We give here a slightly
modified version of Kearns and Ron’s definition that suits our needs.