Analysis of (2,1)-Shell Sort. Our goal in this paper is to discuss the stochastic
behavior of Shell Sort when it operates on an array of n raw data. The usual probability
model is the random permutation model, whereby the ranks of the data are equally likely to be any of the permutations of {1, ..., n}, each occurring with probability 1/n!. This
model is natural and is used as the standard for the analysis of sorting algorithms. The
model covers a wide variety of real-world data; for instance, data drawn from any
continuous probability distribution follow the random permutation model. Henceforth
the term random will refer specifically to data from this model. In
this section we consider only (2,1)-Shell Sort.
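As a concrete point of reference, the algorithm under discussion can be written as two passes of gapped insertion sort, first with stride 2 and then with stride 1. The following is a minimal sketch (names are ours, not the paper's):

```python
def shell_sort_21(a):
    """Sketch of (2,1)-Shell sort: a 2-stage (insertion sort on each of
    the two stride-2 subarrays) followed by a 1-stage (ordinary
    insertion sort on the whole array). Sorts a in place and returns it."""
    for gap in (2, 1):
        for i in range(gap, len(a)):
            x = a[i]
            j = i
            # shift larger elements of the same stride-gap subarray right
            while j >= gap and a[j - gap] > x:
                a[j] = a[j - gap]
                j -= gap
            a[j] = x
    return a
```

Both passes share the same inner loop; only the stride changes, which is why the final 1-stage is exactly insertion sort, albeit applied to an input that is no longer random.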
The difficulty in the stochastic analysis of Shell Sort lies in the fact that after the first
stage the resulting data are no longer random. Instead, t_k sorted subarrays are interleaved.
The second and subsequent stages therefore cannot appeal to the results known for insertion
sort. For example, the second stage of (2,1)-Shell sort does not sort a random array
of size n. The first stage somewhat orders the array as a whole, and many inversions are
removed (some new ones may appear, though; see for example the positions of 5 and 6
before and after the 2-stage of our example).
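The effect just described can be checked on a small array of our own choosing (the paper's running example is not reproduced in this excerpt): applying only the 2-stage leaves the two stride-2 subarrays sorted and lowers the total inversion count, yet it can newly invert a pair, here the elements 5 and 6, that was in order beforehand.

```python
def two_stage(a):
    """Apply only the 2-stage: insertion-sort each of the two
    stride-2 subarrays (odd and even positions). Returns a new list."""
    a = list(a)
    for i in range(2, len(a)):
        x, j = a[i], i
        while j >= 2 and a[j - 2] > x:
            a[j] = a[j - 2]
            j -= 2
        a[j] = x
    return a

def inversions(a):
    """Count pairs i < j with a[i] > a[j]."""
    n = len(a)
    return sum(1 for i in range(n) for j in range(i + 1, n) if a[i] > a[j])

before = [5, 6, 1, 7, 2, 8]          # hypothetical input, not the paper's example
after = two_stage(before)            # -> [1, 6, 2, 7, 5, 8]
# inversions(before) == 5, inversions(after) == 3: the total drops,
# but 6 now precedes 5, an inversion that did not exist before the 2-stage.
```

Note that the output interleaves the sorted subarrays [1, 2, 5] (odd positions) and [6, 7, 8] (even positions), which is precisely the structure the subsequent 1-stage must handle.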
Suppose our data are n real numbers from a continuous probability distribution.
Because ordering is preserved by the probability integral transform, we may (and do)
assume that the probability distribution is uniform on (0, 1). We call the elements in odd
positions X's and those in even positions Y's. Thus, if n is odd, initially our raw array
prior to any sorting is