The importance of dimension reduction has been increasing with the growth in the size of available data in many fields. Appropriate dimension reduction of raw data helps to reduce computational time and to expose the intrinsic structure
of complex data. Sliced inverse regression is a well-known dimension reduction method
for regression; it assumes an elliptical distribution for the explanatory variable and
ingeniously reduces the dimension reduction problem to a simple eigenvalue problem.
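For concreteness, the eigenvalue problem behind sliced inverse regression can be sketched as follows; the symbols $X$, $Y$, $\Sigma$, $\beta_k$, and $d$ are introduced here for illustration and do not appear in the original text. For an explanatory variable $X \in \mathbb{R}^p$ with covariance $\Sigma = \mathrm{Cov}(X)$ and a response $Y$, sliced inverse regression solves the generalized eigenvalue problem
\[
\mathrm{Cov}\!\left(\mathbb{E}[X \mid Y]\right)\,\beta_k \;=\; \lambda_k\,\Sigma\,\beta_k,
\qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge 0,
\]
where $\mathbb{E}[X \mid Y]$ is estimated by averaging $X$ within slices of the range of $Y$, and the leading eigenvectors $\beta_1, \dots, \beta_d$ span the estimated dimension reduction subspace.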
Sliced inverse regression thus rests on strong assumptions about the data distribution and
the form of the regression function, and a number of methods have been proposed to relax or remove
these assumptions and extend the applicability of the inverse regression approach. However,
each of these methods is known to have theoretical or empirical drawbacks. To alleviate
the drawbacks of the existing methods, a dimension reduction method for regression based on
the notion of conditional entropy minimization is proposed. Using entropy as a measure
of the dispersion of data, a low-dimensional subspace is estimated without assuming any
specific distribution or any particular form of regression function.
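In this spirit, one way to write a conditional entropy objective is sketched below, assuming a differential entropy $H$ and a projection matrix $W$ with orthonormal columns; these symbols are illustrative and not taken from the original text:
\[
\hat{W} \;=\; \operatorname*{arg\,min}_{W \in \mathbb{R}^{p \times d},\; W^{\top} W = I_d} \; H\!\left(Y \mid W^{\top} X\right).
\]
Since $H(Y)$ does not depend on $W$, minimizing this conditional entropy is equivalent to maximizing the mutual information $I(Y; W^{\top} X) = H(Y) - H(Y \mid W^{\top} X)$.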
Through experiments on artificial and real-world datasets, the proposed method is shown to perform comparably to or better than conventional methods.