Recursion. A method can call itself (if you are not comfortable with this idea, known
as recursion, you are encouraged to work Exercises 1.1.16 through 1.1.22). For example,
the code at the end of this section gives an alternate implementation of the
rank() method in BinarySearch. We often use recursive implementations of methods
because they can lead to compact, elegant code that is easier to understand than a corresponding
implementation that does not use recursion. For example, the comment
in the implementation below provides a succinct description of what the code is supposed
to do. We can use this comment to convince ourselves that it operates correctly,
by mathematical induction. We will expand on this topic and provide such a proof for
binary search in Section 3.1. There are three important rules of thumb in developing
recursive programs:
■ The recursion has a base case: we always include a conditional statement as the
first statement in the program that has a return.
■ Recursive calls must address subproblems that are smaller in some sense, so
that recursive calls converge to the base case. In the code below, the difference
between the values of the fourth and the third arguments always decreases.
■ Recursive calls should not address subproblems that overlap. In the code below,
the portions of the array referenced by the two subproblems are disjoint.
Violating any of these guidelines is likely to lead to incorrect results or a spectacularly
inefficient program (see Exercises 1.1.19 and 1.1.27). Adhering to them is likely to
lead to a clear and correct program whose performance is easy to understand. Another
reason to use recursive methods is that they lead to mathematical models that we can
use to understand performance. We address this issue for binary search in Section 3.2
and in several other instances throughout the book.
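For reference, here is a sketch of a recursive rank() consistent with the description above. The surrounding class name and the small test client in main() are illustrative additions, not part of BinarySearch itself; as with the original, the array is assumed to be sorted in increasing order.

public class BinarySearchRecursive
{
    public static int rank(int key, int[] a)
    {  return rank(key, a, 0, a.length - 1);  }

    public static int rank(int key, int[] a, int lo, int hi)
    {  // Index of key in a[lo..hi], if present, is not smaller than lo
       //                                        and not larger than hi.
       if (lo > hi) return -1;                    // base case: empty subarray
       int mid = lo + (hi - lo) / 2;
       if      (key < a[mid]) return rank(key, a, lo, mid - 1);   // search left half
       else if (key > a[mid]) return rank(key, a, mid + 1, hi);   // search right half
       else                   return mid;                         // found at mid
    }

    public static void main(String[] args)
    {  // Illustrative test client (not in the original).
       int[] a = { 10, 20, 30, 40, 50 };
       System.out.println(rank(30, a));           // prints 2
       System.out.println(rank(35, a));           // prints -1 (not found)
    }
}

Note how the code exhibits the three rules of thumb: the first statement returns when the subarray is empty (the base case), each recursive call is on a subarray whose hi - lo is strictly smaller, and the two possible subproblems a[lo..mid-1] and a[mid+1..hi] are disjoint.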