Abstract
This paper experimentally analyzes discretization algorithms for handling continuous
attributes in evolutionary learning. We consider a learning system that
induces a set of rules in a fragment of first-order logic (Evolutionary Inductive
Logic Programming), and introduce a method where a given discretization
algorithm is used to generate initial inequalities, which describe subranges of attributes’
values. Mutation operators that exploit information on the class labels of
the examples (supervised discretization) are used during the learning process to
refine these inequalities. The evolutionary learning system is used as a platform for
experimentally testing four algorithms: two variants of the proposed method,
a popular supervised discretization algorithm applied prior to induction, and a
discretization method which does not use information on the class labels of the
examples (unsupervised discretization). Results of experiments conducted on
artificial and real-life datasets suggest that the proposed method provides an
effective and robust technique for handling continuous attributes by means of
inequalities.
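To make the contrast between the two discretization styles concrete, the following is a minimal Python sketch. It is illustrative only, not the paper's implementation: the function names, the boundary-midpoint rule for supervised cut points, and the threshold-shifting mutation operator are all assumptions chosen to mirror the ideas described above (unsupervised binning ignores class labels; supervised cut points lie between examples of different classes; a mutation refines an inequality's threshold using those cut points).

```python
import random

def equal_width_cuts(values, k):
    # Unsupervised discretization: k-1 equal-width cut points,
    # computed without looking at class labels.
    lo, hi = min(values), max(values)
    step = (hi - lo) / k
    return [lo + step * i for i in range(1, k)]

def boundary_cuts(values, labels):
    # Supervised discretization (illustrative): candidate cut points
    # are midpoints between consecutive values whose class labels differ.
    pairs = sorted(zip(values, labels))
    cuts = []
    for (v1, c1), (v2, c2) in zip(pairs, pairs[1:]):
        if c1 != c2 and v1 != v2:
            cuts.append((v1 + v2) / 2)
    return cuts

def mutate_threshold(threshold, cuts, rng=random):
    # Hypothetical mutation operator: move an inequality's threshold
    # (e.g. the t in "attr <= t") to a neighbouring supervised cut
    # point, refining the subrange it describes.
    if not cuts:
        return threshold
    nearest = min(range(len(cuts)), key=lambda i: abs(cuts[i] - threshold))
    j = max(0, min(len(cuts) - 1, nearest + rng.choice([-1, 1])))
    return cuts[j]
```

For example, with values `[1, 2, 3, 4, 5, 6]` and labels `a a b b a a`, the supervised cut points are 2.5 and 4.5 (the class boundaries), whereas three equal-width bins place cuts near 2.67 and 4.33 regardless of the labels; the mutation then snaps an evolved threshold onto one of the class-aware cut points.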