We are specifically dealing with discrete-time Markov chains with a finite number of states. Discrete-time means that the process advances in distinct steps: in football, plays; in the earlier example, flips of a coin. In football, the Markov property means that once a team is in a given situation, what happened previously has no effect on what happens next. For example, if we have 1st-and-10 from our own 20, it does not matter whether the previous play was a kickoff that resulted in a touchback or a 10-yard gain for a first down on 3rd-and-10 from the 10-yard line. Either way, we are now in a new state, and only that state directly affects the next play.
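The memoryless property can be sketched in code. Here is a minimal toy simulation with made-up states and probabilities (not real football data): the next state is sampled from a distribution that depends only on the current state, never on how the team got there.

```python
import random

# Hypothetical drive states with illustrative probabilities.  Each key is
# the current state; each value is the distribution over next states.
# The distribution depends ONLY on the current state (Markov property).
transitions = {
    "1st-and-10 own 20": {"1st-and-10 own 35": 0.3,
                          "2nd-and-7 own 23": 0.5,
                          "turnover": 0.2},
    "1st-and-10 own 35": {"2nd-and-7 own 38": 0.6, "turnover": 0.4},
    "2nd-and-7 own 23": {"1st-and-10 own 35": 0.5, "turnover": 0.5},
    "2nd-and-7 own 38": {"turnover": 1.0},
}

def next_state(state):
    """Sample the next state using only the current state."""
    dist = transitions[state]
    r = random.random()
    cumulative = 0.0
    for s, p in dist.items():
        cumulative += p
        if r < cumulative:
            return s
    return s  # guard against floating-point rounding at the boundary
```

Note that `next_state` takes no history argument at all: that is the Markov assumption made literal.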
A football drive can be modeled as an absorbing Markov chain. An absorbing Markov chain contains a set of special states known as absorbing states, which are impossible to leave once entered. The defining property of an absorbing chain is that as time goes to infinity (in our case, as the number of plays in a drive grows) the probability of ending up in one of the absorbing states goes to 1. Since a drive can only end in a fixed number of ways, and a drive must end, these drive endings (touchdown, field goal, turnover, etc.) are the absorbing states. Once a team scores a touchdown, it cannot leave that state: the drive ends and the Markov chain is absorbed.
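For an absorbing chain, the probability of ending in each absorbing state can be computed in closed form with the standard fundamental-matrix approach: writing the transition matrix in block form with Q (transient-to-transient) and R (transient-to-absorbing), the absorption probabilities are B = (I - Q)^(-1) R. The numbers below are purely illustrative, not estimated from real drives.

```python
import numpy as np

# Hypothetical 4-state drive chain:
#   transient: 0 = "1st-and-10", 1 = "3rd-and-short"
#   absorbing: 2 = "touchdown",  3 = "turnover"
Q = np.array([[0.4, 0.3],     # transient -> transient probabilities
              [0.2, 0.1]])
R = np.array([[0.1, 0.2],     # transient -> absorbing probabilities
              [0.5, 0.2]])

# Fundamental matrix N = (I - Q)^(-1); N[i, j] is the expected number
# of visits to transient state j when starting from transient state i.
N = np.linalg.inv(np.eye(2) - Q)

# B[i, k] is the probability of eventually being absorbed in absorbing
# state k starting from transient state i; each row of B sums to 1,
# reflecting that absorption is certain.
B = N @ R
```

The fact that each row of B sums to 1 is exactly the statement above: in the long run, the drive ends in one of the absorbing states with probability 1.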
In order to define a Markov chain, we must know its transition probabilities. A transition probability is the probability of moving from one state to another in a single step: P(x, y) is the probability of going from state x to state y in one step. In the picture above, the arrows represent these transition probabilities. In a Markov chain with a finite number of states, like ours, the probabilities can be collected into a transition matrix, seen below:
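Separate from the specific matrix shown in this article, here is a minimal numeric sketch (with made-up values) of how such a matrix behaves: row x holds the distribution over next states from state x, so every row sums to 1, and multiplying a state distribution by the matrix advances the chain one step.

```python
import numpy as np

# A hypothetical 3-state transition matrix P; P[x, y] is the probability
# of moving from state x to state y in one step.  Each row sums to 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.0, 0.0, 1.0]])   # state 2 is absorbing: P[2, 2] = 1

pi0 = np.array([1.0, 0.0, 0.0])  # start in state 0 with certainty
pi1 = pi0 @ P                    # distribution after one step
# Multi-step transition probabilities come from matrix powers, e.g. P @ P
# gives the two-step probabilities.
```

This row-vector-times-matrix convention is what makes the transition matrix useful: repeated multiplication plays out the drive one step at a time.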