As n increases, the algorithm described above quickly becomes computationally infeasible. When X = 50 and n = 100,
for instance, step 1 in the algorithm amounts to computing roughly 10^29 permutations, which is extremely time-consuming. A
computationally feasible alternative is to use a smaller number of randomly chosen permutations and to assign a value ν1 to
the observed permutation based on the proportion of random permutations that have smaller base-2 decimal expansions
than the observed permutation. In the comparison below, we refer to this as the random-permutations version of the Korn
and U(−1/2, 1/2) intervals. These intervals are computer-intensive, but not more so than other random permutation and
bootstrap methods. A caveat, however, is that some external randomness is introduced into the interval. The split sample
interval has the advantage that it is no more difficult to compute when n is large, and is preferable if computational power
is limited or if no external randomness is allowed.
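The Monte Carlo idea above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: it assumes that a permutation is encoded as a 0/1 sequence whose base-2 decimal expansion is 0.b1b2...bn read in base 2, and the function name, the seed handling, and the default number of permutations are all assumptions for the example.

```python
import random

def random_permutation_rank(observed, n_perm=10_000, seed=0):
    """Estimate the proportion of random permutations whose base-2 decimal
    expansion is smaller than that of the observed permutation.

    `observed` is a 0/1 sequence; random permutations are drawn by shuffling
    it, so each draw has the same number of ones and zeros.
    """
    def base2_value(bits):
        # Interpret the sequence b1, b2, ..., bn as 0.b1b2...bn in base 2.
        return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

    rng = random.Random(seed)
    v_obs = base2_value(observed)
    smaller = 0
    for _ in range(n_perm):
        perm = list(observed)   # copy with the same multiset of labels
        rng.shuffle(perm)       # a uniformly random permutation of the labels
        if base2_value(perm) < v_obs:
            smaller += 1
    return smaller / n_perm
```

Because only n_perm permutations are evaluated rather than all of them, the cost grows linearly in n_perm and n instead of combinatorially in n, at the price of the external randomness noted above.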