Abstract:
The Maximum Likelihood Estimator (MLE) is widely used in estimating information measures: it "plugs in" the empirical distribution of the data to estimate a given functional of the unknown distribution. In this work we propose a general framework and procedure for analyzing the nonasymptotic performance of the MLE in estimating functionals of discrete distributions, under the worst-case mean squared error criterion. We show that existing theory is insufficient for analyzing the bias of the MLE, and propose applying the theory of approximation using positive linear operators to study this bias. The variance is controlled using well-known tools from the literature on concentration inequalities. Our techniques completely characterize, up to a multiplicative constant, the maximum L2 risk incurred by the MLE in estimating the Shannon entropy H(P) = Σ_{i=1}^S −p_i ln p_i and the power sums F_α(P) = Σ_{i=1}^S p_i^α. As a corollary, for Shannon entropy estimation we show that it is necessary and sufficient to have n ≫ S observations for the MLE to be consistent, where S denotes the support size. In addition, we obtain that it is necessary and sufficient to have n ≫ S^{1/α} samples for the MLE to consistently estimate F_α(P), 0 < α < 1.
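As an illustration of the plug-in principle, the sketch below (an assumption of this edit, not code from the paper; function names are hypothetical) computes the empirical distribution from n samples and evaluates the two functionals H(P) and F_α(P) at it:

```python
import numpy as np

def mle_entropy(samples):
    """Plug-in (MLE) estimate of Shannon entropy H(P) = -sum_i p_i ln p_i:
    evaluate H at the empirical distribution of the samples."""
    _, counts = np.unique(samples, return_counts=True)
    p_hat = counts / counts.sum()  # empirical distribution (zero-count symbols drop out)
    return float(-np.sum(p_hat * np.log(p_hat)))

def mle_power_sum(samples, alpha):
    """Plug-in (MLE) estimate of F_alpha(P) = sum_i p_i^alpha."""
    _, counts = np.unique(samples, return_counts=True)
    p_hat = counts / counts.sum()
    return float(np.sum(p_hat ** alpha))
```

For a sample in which four symbols each appear once, the empirical distribution is uniform on four points, so `mle_entropy` returns ln 4 and `mle_power_sum` with α = 1/2 returns 4·(1/4)^{1/2} = 2. The nonasymptotic analysis in this work quantifies how far such plug-in estimates can deviate from the true functional values in the worst case over P.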