where s is the sibling set of x_s in the original tree, m is the original regressor, and m′ is the regressor refined by (x_s, y_s), with y_s = m(x_s). Following the cotraining paradigm, COAL uses two base regression trees, each of which labels the PLEs for the other tree during the learning process. Notice that COAL does not require two views. Similar to COREG [Zhou
and Li 2005a] and other single-view disagreement-based SSL approaches, the validity of COAL can be justified by recent theoretical results [Wang and Zhou 2007, 2010b]. More information on cotraining and other disagreement-based SSL techniques can be found in a recent survey [Zhou and Li 2010].