The ε-insensitive loss function defines an insensitive tube around the
residuals, and the small errors that fall within the tube are discarded (Fig. 2).
We will adopt the popular Gaussian kernel, which has fewer
parameters than other kernels (e.g. polynomial) [31]: K(x, x′) =
exp(−γ||x−x′||²), γ > 0. Under this setup, the SVM performance is
affected by three parameters: γ, ε and C (a trade-off between fitting
the errors and the flatness of the mapping). To reduce the search space,
the first two values will be set using the heuristics [5]: C=3 (for a
standardized output) and ε = σ̂/√N, where σ̂ = 1.5/N × Σᵢ₌₁ᴺ (yᵢ − ŷᵢ)²
and ŷᵢ is the value predicted by a 3-nearest neighbor algorithm. The
kernel parameter (γ) has the greatest impact on SVM performance,
with values that are too large or too small leading to poor
predictions. A practical method to set γ is to start the search from
one of the extremes of the range and move toward the middle while
the predictive estimate keeps increasing.
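As an illustration, the Gaussian kernel above can be written directly; this is a minimal Python sketch (the original work does not prescribe an implementation):

```python
import numpy as np

def gaussian_kernel(x, x_prime, gamma):
    """K(x, x') = exp(-gamma * ||x - x'||^2), with gamma > 0."""
    diff = np.asarray(x, dtype=float) - np.asarray(x_prime, dtype=float)
    return float(np.exp(-gamma * np.dot(diff, diff)))
```

Identical inputs give K = 1, and the kernel decays toward 0 as the squared distance grows, at a rate controlled by γ.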
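The ε heuristic of [5] can be sketched as follows; the brute-force 3-NN implementation and the use of in-sample neighbor predictions are assumptions made for illustration:

```python
import numpy as np

def knn3_predict(X, y):
    """For each point, average the targets of its 3 closest points
    (a brute-force, in-sample 3-nearest-neighbor predictor)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :3]  # indices of the 3 nearest points
    return y[idx].mean(axis=1)

def epsilon_heuristic(X, y):
    """epsilon = sigma_hat / sqrt(N), where sigma_hat is 1.5/N times the
    sum of squared errors of the 3-nearest-neighbor predictions [5]."""
    N = len(y)
    y_hat = knn3_predict(X, y)
    sigma_hat = 1.5 / N * np.sum((y - y_hat) ** 2)
    return sigma_hat / np.sqrt(N)
```

The returned ε shrinks as N grows, so the insensitive tube tightens when more data are available.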
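A minimal sketch of this γ search strategy, assuming a caller-supplied predictive estimate (higher is better) and a hypothetical grid of candidate values (neither is specified in the text):

```python
def search_gamma(score, gammas):
    """Scan gamma candidates from one extreme of the range toward the
    middle, stopping once the predictive estimate stops increasing."""
    best_gamma, best_score = gammas[0], score(gammas[0])
    for g in gammas[1:]:
        s = score(g)
        if s <= best_score:
            break  # estimate no longer improves: stop the search
        best_gamma, best_score = g, s
    return best_gamma, best_score

# Hypothetical candidate grid, ordered from the small-gamma extreme
# toward the middle of the range (an assumption, not from the text):
grid = [2.0 ** k for k in range(-15, 4)]
```

When the estimate is unimodal in γ (too-small values underfit, too-large values overfit, as the text notes), this early-stopping scan visits only the candidates up to one step past the peak.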
