By multiplying out the numbers along the arrows of each possible path through the decision tree, we can calculate the probability that the game takes that path. So the probability that John has A and Tom has Q and bluffs is $\frac{1}{3} \times \frac{1}{2} \times b = \frac{b}{6}.$ Multiplying this probability by the payoff at the corresponding leaf gives us that path's contribution to the value of the game to John. In our example this is $\frac{b}{6} \times 1 = \frac{b}{6}.$ From these values a computer (or a human, for a small decision tree like this) can work out an optimal strategy. The dotted lines connect decision nodes where the decision is the same — these are called information sets. As you can see, the AKQ tree has nine nodes, ten leaves and two information sets.
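The path calculation above can be sketched in a few lines of Python. This is only an illustration, not code from the text: the helper name `path_value` and the concrete bluffing frequency $b = \frac{1}{4}$ are assumptions made for the example.

```python
from fractions import Fraction

def path_value(probabilities, leaf_payoff):
    """Multiply the probabilities along one path through the
    decision tree, then weight the leaf's payoff by that product."""
    p = Fraction(1)
    for q in probabilities:
        p *= q
    return p * leaf_payoff

# John holds A (prob 1/3), Tom holds Q (prob 1/2 of the remaining
# cards) and bluffs with probability b; the leaf pays John 1 unit.
b = Fraction(1, 4)  # illustrative bluffing frequency, not from the text
v = path_value([Fraction(1, 3), Fraction(1, 2), b], 1)
print(v)  # b/6 = 1/24 when b = 1/4
```

Summing `path_value` over every path through the tree then gives the overall expectation, which is exactly the calculation carried out next.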
In order to work out $E(J),$ the amount that John can expect to win on average (if the game were played many, many times) in addition to what he would win at showdown (his ex-showdown winnings), we have to add up the results we get from the individual paths through the decision tree. This gives