Input: a set T of labelled instances
Output: a decision tree with Naïve Bayes categorizers at the
leaves
Algorithm:
1. For each attribute Xi, evaluate the utility, u(Xi), of a split on
attribute Xi. For continuous attributes, a threshold is also
found at this stage.
2. Let j = arg maxi u(Xi), i.e., the attribute with the highest utility.
3. If uj is not significantly better than the utility of the current
node, create a Naïve Bayes classifier for the current node
and return.
4. Partition T according to the test on Xj. If Xj is continuous, a
threshold split is used; if Xj is discrete, a multi-way split is
made for all possible values.
5. For each child, call the algorithm recursively on the portion
of T that matches the test leading to the child.
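The steps above can be sketched in Python for the discrete-attribute case. This is a minimal illustration, not the original implementation: it estimates each utility as the instance-weighted Naïve Bayes accuracy of the would-be children (measured by resubstitution on the training data, a simplification of the cross-validated estimate), and it replaces the significance test in step 3 with a fixed minimum-improvement threshold `min_improvement`, which is an assumed parameter. Threshold splits for continuous attributes are omitted.

```python
import math
from collections import Counter, defaultdict

def nb_fit(T):
    # Train a Laplace-smoothed Naive Bayes model on labelled instances.
    # T is a list of (features, label), with discrete feature tuples.
    class_counts = Counter(y for _, y in T)
    feat_counts = defaultdict(Counter)   # (attr index, class) -> value counts
    values = defaultdict(set)            # attr index -> set of seen values
    for x, y in T:
        for i, v in enumerate(x):
            feat_counts[(i, y)][v] += 1
            values[i].add(v)
    return class_counts, feat_counts, values, len(T)

def nb_predict(model, x):
    class_counts, feat_counts, values, n = model
    best, best_score = None, float("-inf")
    for c, cc in class_counts.items():
        score = math.log(cc / n)
        for i, v in enumerate(x):
            num = feat_counts[(i, c)][v] + 1           # Laplace smoothing
            den = cc + len(values[i])
            score += math.log(num / den)
        if score > best_score:
            best, best_score = c, score
    return best

def nb_accuracy(T):
    # Resubstitution accuracy of Naive Bayes on T (a simplification of
    # the cross-validated utility estimate).
    model = nb_fit(T)
    return sum(nb_predict(model, x) == y for x, y in T) / len(T)

def split_utility(T, j):
    # Step 1: instance-weighted NB accuracy over the partition on Xj.
    parts = defaultdict(list)
    for x, y in T:
        parts[x[j]].append((x, y))
    return sum(len(p) / len(T) * nb_accuracy(p) for p in parts.values())

def nbtree(T, min_improvement=0.02):
    n_attrs = len(T[0][0])
    utilities = {j: split_utility(T, j) for j in range(n_attrs)}
    j = max(utilities, key=utilities.get)              # step 2
    if utilities[j] <= nb_accuracy(T) + min_improvement:
        return ("leaf", nb_fit(T))                     # step 3
    children = defaultdict(list)                       # step 4: multi-way split
    for x, y in T:
        children[x[j]].append((x, y))
    return ("node", j, {v: nbtree(sub) for v, sub in children.items()})  # step 5

def classify(tree, x):
    while tree[0] == "node":
        _, j, kids = tree
        if x[j] not in kids:
            return None    # unseen attribute value; a real system would fall back
        tree = kids[x[j]]
    return nb_predict(tree[1], x)
```

On an XOR-like dataset, plain Naïve Bayes is no better than chance, so the sketch splits once and fits a trivially accurate classifier in each child, which is exactly the behaviour the algorithm is designed to exhibit.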