In this paper, the effects of over-sampling, under-sampling, threshold-moving, hard-ensemble, soft-ensemble, and SMOTE on training cost-sensitive neural networks are studied empirically on twenty-one UCI data sets with three types of cost matrices and a real-world cost-sensitive data set.
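For concreteness, the sketch below (not taken from the paper) illustrates one common form of threshold-moving: converting a network's class-probability outputs and a cost matrix into cost-sensitive predictions by choosing the class with minimum expected cost. The function name `threshold_moving` and the cost-matrix convention (row = true class, column = predicted class) are assumptions made for this illustration.

```python
import numpy as np

def threshold_moving(probs, cost_matrix):
    """Pick, for each example, the class with the minimum expected cost.

    probs:        (n_samples, n_classes) predicted class probabilities
    cost_matrix:  (n_classes, n_classes), where cost_matrix[i, j] is the
                  cost of predicting class j when the true class is i
                  (assumed convention for this sketch)
    """
    # expected_cost[n, j] = sum_i probs[n, i] * cost_matrix[i, j]
    expected_cost = probs @ cost_matrix
    return expected_cost.argmin(axis=1)

# Toy usage: two classes where missing class 1 costs five times as much
probs = np.array([[0.7, 0.3],
                  [0.9, 0.1]])
cost = np.array([[0.0, 1.0],
                 [5.0, 0.0]])
print(threshold_moving(probs, cost))  # -> [1 0]
```

In the toy example, the first instance is assigned to the minority class 1 even though its probability is only 0.3, because the expected cost of predicting class 0 (0.3 × 5 = 1.5) exceeds that of predicting class 1 (0.7 × 1 = 0.7).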