The Hidden Markov Model (HMM) is a powerful statistical method for characterizing observed data samples of a discrete-time series, and it has been used successfully in automatic speech recognition. An HMM is basically a Markov chain in which each output observation is a random variable generated according to an output probabilistic function associated with each state. An HMM is defined by λ = (A, B, π), where A is a transition probability matrix, B is an output probability matrix, and π is an initial state distribution. There are two categories of HMM based on the underlying statistical model: 1) the Discrete HMM (DHMM), which evaluates probabilities based on discrete data counting, and 2) the Continuous Density HMM
(CDHMM), which evaluates probabilities based on continuous Probability Density Functions (PDFs), usually referred to as likelihoods. The DHMM uses a vector-quantization-based method to compute the state probability: frame i is first converted into the corresponding symbol k = O(i), and the probability of symbol k in state j is then retrieved as entry B(k, j) of the matrix B. The CDHMM instead evaluates a continuous probability density function to compute the state probability. The method for identifying the optimal parameters, i.e., those that maximize the probability (likelihood) of the sample data, is based on re-estimation of the Maximum Likelihood Estimate (MLE). Although the computation of probabilities with discrete models is faster than with continuous models, the CDHMM is considered here in order to avoid the quantization error that the DHMM introduces during the vector quantization process.
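The contrast above between the two state-probability computations can be sketched as follows. This is a minimal illustration with made-up parameters: the 3-state transition matrix `A`, the initial distribution `pi`, the 4-symbol output matrix `B`, and the per-state Gaussian means and variances are all hypothetical values chosen only to show the mechanics, not parameters from the source. The DHMM case is a plain table lookup B(k, j); the CDHMM case evaluates a single-Gaussian PDF per state (real systems typically use Gaussian mixtures).

```python
import numpy as np

# Hypothetical 3-state HMM lambda = (A, B, pi) with a 4-symbol codebook.
A = np.array([[0.7, 0.2, 0.1],   # transition probability matrix
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
pi = np.array([0.5, 0.3, 0.2])   # initial state distribution

# DHMM output probability matrix, indexed as B[symbol k, state j].
B = np.array([[0.5, 0.1, 0.2],
              [0.2, 0.3, 0.2],
              [0.2, 0.4, 0.3],
              [0.1, 0.2, 0.3]])

def dhmm_state_prob(k, j):
    """DHMM: frame already quantized to symbol k; probability in state j
    is a direct lookup of entry B(k, j)."""
    return B[k, j]

# CDHMM: each state carries a continuous PDF (here a 1-D Gaussian).
means = np.array([0.0, 2.0, 5.0])   # hypothetical per-state means
vars_ = np.array([1.0, 0.5, 2.0])   # hypothetical per-state variances

def cdhmm_state_likelihood(x, j):
    """CDHMM: likelihood of the raw (unquantized) observation x under
    state j's Gaussian density -- no vector quantization step."""
    return np.exp(-(x - means[j]) ** 2 / (2.0 * vars_[j])) \
        / np.sqrt(2.0 * np.pi * vars_[j])

print(dhmm_state_prob(2, 1))           # lookup B(2, 1)
print(cdhmm_state_likelihood(0.0, 0))  # Gaussian density at its mean
```

Note that the DHMM value is an exact table entry fixed once the codebook is built, whereas the CDHMM likelihood varies continuously with the observation, which is what removes the quantization error at the cost of extra computation.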