As discussed in [45], training points may be selected to obtain a more accurate dividing hyperplane (Figure 23.4 (b)), or, if the direction of the hyperplane is already certain, input points may be selected to reduce the size of the margin (Figure 23.4 (c)). While it may seem obvious to sample training points closest to the decision boundary [55, 15], there are also methods that select the items furthest away [15]; these have potential advantages in scenarios involving several candidate classifiers, which are discussed in Section 23.7. The reasoning is that a classifier should be quite certain about any items far from its decision boundary, so if newly acquired training data reveals the classifier to be inaccurate on such items, the classifier likely does not fit the user's preferences well and should be removed from the pool of candidate classifiers.
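The two selection strategies can be contrasted in a minimal sketch. Assume a linear classifier f(x) = w·x + b has already been trained; the names below (`pool`, `w`, `b`, the helper functions) are illustrative, not from the chapter, and a real SVM would supply `w` and `b` from its learned model.

```python
import numpy as np

def margin_distances(pool, w, b):
    """Unsigned distance of each pool point to the hyperplane w.x + b = 0."""
    return np.abs(pool @ w + b) / np.linalg.norm(w)

def select_closest(pool, w, b, k=1):
    """Uncertainty sampling: query the points nearest the decision boundary,
    where the classifier is least certain."""
    d = margin_distances(pool, w, b)
    return np.argsort(d)[:k]

def select_furthest(pool, w, b, k=1):
    """Query the points the classifier is most certain about; a wrong label
    here is strong evidence against keeping this candidate classifier."""
    d = margin_distances(pool, w, b)
    return np.argsort(d)[-k:]

# Illustrative unlabeled pool and a toy hyperplane x1 + x2 = 0.
pool = np.array([[0.1, 0.0], [2.0, 2.0], [-0.2, 0.1], [3.0, -1.0]])
w, b = np.array([1.0, 1.0]), 0.0

closest = select_closest(pool, w, b, k=1)    # near-boundary point
furthest = select_furthest(pool, w, b, k=1)  # high-confidence point
```

Querying `closest` refines the hyperplane's position, while an unexpected label on `furthest` justifies discarding the classifier outright, matching the candidate-elimination use case above.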