The approach is called “naïve” because it assumes
independence among the attribute values. Naïve
Bayes classification can be viewed as both a descriptive
and a predictive type of algorithm. The probabilities are
descriptive and are then used to predict the class
membership for a target tuple. The naïve Bayes approach
has several advantages: it is easy to use; unlike other
classification approaches, only one scan of the training data
is required; and it easily handles missing values by simply
omitting the corresponding probability [10]. An advantage of the naive Bayes
classifier is that it requires a small amount of training data
to estimate the parameters (means and variances of the
variables) necessary for classification. Because
independent variables are assumed, only the variances of
the variables for each class need to be determined and not
the entire covariance matrix. In spite of their naive design
and apparently over-simplified assumptions, naive Bayes
classifiers have worked quite well in many complex real-world
situations [16].
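The points above can be sketched in a short example. The following is a minimal, hypothetical Gaussian naïve Bayes classifier (the training tuples and class labels are invented for illustration): it fits in a single scan of the training data, stores only a per-class mean and variance for each attribute rather than a full covariance matrix, and handles a missing attribute value by simply omitting that probability factor.

```python
import math

# Toy training data: two numeric attributes, two classes.
# Hypothetical values chosen purely for illustration.
train = {
    "yes": [(1.0, 2.0), (1.2, 1.8), (0.8, 2.2)],
    "no":  [(3.0, 0.5), (3.2, 0.7), (2.8, 0.3)],
}

def fit(data):
    """One scan: per class, store the prior and each attribute's (mean, variance)."""
    total = sum(len(rows) for rows in data.values())
    model = {}
    for cls, rows in data.items():
        n = len(rows)
        stats = []
        for j in range(len(rows[0])):
            col = [r[j] for r in rows]
            mean = sum(col) / n
            var = sum((x - mean) ** 2 for x in col) / (n - 1)
            stats.append((mean, var))
        model[cls] = (n / total, stats)
    return model

def gaussian(x, mean, var):
    """Univariate normal density used for each attribute independently."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def predict(model, tup):
    """Pick the class maximizing log P(class) + sum of log P(attr | class).
    A missing attribute (None) is skipped: its factor is simply omitted."""
    best_cls, best_score = None, -math.inf
    for cls, (prior, stats) in model.items():
        score = math.log(prior)
        for x, (mean, var) in zip(tup, stats):
            if x is None:  # missing value: omit that probability
                continue
            score += math.log(gaussian(x, mean, var))
        if score > best_score:
            best_cls, best_score = cls, score
    return best_cls

model = fit(train)
print(predict(model, (1.1, None)))  # second attribute missing
```

Because independence is assumed, the per-class model is just one mean and one variance per attribute, so even the tiny three-tuple-per-class training set above suffices to estimate all parameters.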