A unigram model used in information retrieval can be treated as the combination of several one-state finite automata.[1] It splits the probabilities of different terms in a context, e.g. from $P(t_1 t_2 t_3) = P(t_1)\,P(t_2 \mid t_1)\,P(t_3 \mid t_1 t_2)$ to $P_\text{uni}(t_1 t_2 t_3) = P(t_1)\,P(t_2)\,P(t_3)$.
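As a minimal sketch (assuming a toy vocabulary and hand-picked probabilities, not taken from any real corpus), the unigram score of a term sequence is simply the product of the independent per-term probabilities:

```python
# Toy unigram model: each term's probability is independent of context.
# The vocabulary and the probabilities below are illustrative assumptions.
unigram_probs = {
    "a": 0.2,
    "cat": 0.3,
    "sat": 0.25,
    "mat": 0.25,
}

# The one-state-hitting probabilities over the whole vocabulary sum to 1.
assert abs(sum(unigram_probs.values()) - 1.0) < 1e-9

def unigram_probability(terms, probs):
    """P_uni(t1 t2 ... tn) = P(t1) * P(t2) * ... * P(tn)."""
    p = 1.0
    for term in terms:
        p *= probs.get(term, 0.0)  # terms outside the vocabulary get probability 0 here
    return p

print(unigram_probability(["a", "cat", "sat"], unigram_probs))  # 0.2 * 0.3 * 0.25 = 0.015
```

In practice such models are estimated from term counts in a document and smoothed so that unseen terms do not receive zero probability; the sketch above omits both steps for brevity.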
In this model, the probability of each word depends only on that word's own probability in the document, so the model consists only of one-state finite automata as units. Each automaton has a single state, reached with a single assigned probability. Taken over the whole model, these one-state-hitting probabilities must sum to 1. Below is an illustration of a unigram model of a document.