Experiments
To evaluate SLIPPER, we used two sets of benchmark
problems, each containing 16 two-class classification
problems. The first set, the development set, was used
in debugging SLIPPER and evaluating certain variations
of it. The second set, the prospective set, was
used as a secondary evaluation of the SLIPPER algorithm,
after development was complete. This two-stage
procedure was intended as a guard against the possibility
of "overfitting" the benchmark problems themselves;
however, since the experimental results are qualitatively
similar on both the development and prospective sets,
we will focus on results across all 32 benchmark problems
in the discussion below. These results are summarized
in Table 2 and Figure 3, and presented in more
detail in Table 1.
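
The two-stage protocol can be made concrete with a short sketch. The outline below, in Python, is a hypothetical illustration only: the benchmark names, variant names, and helper functions are assumptions introduced here, not part of the paper. It shows the essential discipline, comparing algorithm variants on the development benchmarks alone, then running the frozen final algorithm once on the held-out prospective benchmarks.

    import random

    def cross_validated_error(learner, dataset):
        # Placeholder standing in for the cross-validated error of `learner`
        # on `dataset`; a real harness would train and test the rule learner.
        random.seed(hash((learner, dataset)) % (2 ** 32))
        return round(random.uniform(0.05, 0.30), 3)

    def evaluate_on(learner, benchmarks):
        # Run one learner over a set of benchmark problems, collecting errors.
        return {name: cross_validated_error(learner, name) for name in benchmarks}

    # Hypothetical problem names; the paper uses 16 + 16 two-class problems.
    DEVELOPMENT = [f"dev-problem-{i}" for i in range(16)]
    PROSPECTIVE = [f"prospective-problem-{i}" for i in range(16)]

    # Stage 1: debug and compare variants using only the development set.
    variants = ["slipper-variant-a", "slipper-variant-b"]
    dev_results = {v: evaluate_on(v, DEVELOPMENT) for v in variants}

    # Freeze the algorithm chosen on development data alone.
    final = min(variants,
                key=lambda v: sum(dev_results[v].values()) / len(DEVELOPMENT))

    # Stage 2: a single evaluation of the frozen algorithm on the
    # prospective set, guarding against overfitting the benchmarks.
    final_results = evaluate_on(final, PROSPECTIVE)

The key design point is that the prospective set influences no development decision; it is consulted exactly once, after the algorithm is fixed.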
