Decision trees represent the simplest form of decision analytic model [28]. A full description of
decision trees is given in the book by Drummond et al. [6]; only their key features are summarized
here. Decision trees are built from left to right, and start with a decision node
(a square) representing a choice between competing interventions. Following the decision node
appear chance nodes (circles), representing points beyond which upcoming events are uncertain.
This uncertainty is expressed by assigning probabilities (summing up to one at each chance node)
to the occurrence of possible events, which are mutually exclusive. As constructed, a decision tree is
analogous to possible clinical pathways that clinicians typically consider for their patients. However,
instead of just presenting the possible pathways, in decision analytic modelling the expected
costs and outcomes of each treatment alternative are calculated based on the costs and outcomes of
each clinical pathway weighted by the probability of going down each pathway. The major
advantage of using decision trees is that, because the pathways are clearly laid out, the model and its
calculations are transparent even to less experienced modellers. The major disadvantage is that
the tree can become very large (i.e. bushy) as more comparators are included in the analysis, as
the number of health states patients can experience increases, and as the time horizon of the model
lengthens (e.g. long-term chronic disease).
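The expected-value calculation described above (pathway costs and outcomes weighted by pathway probabilities) can be sketched in a few lines. This is an illustrative toy example, not a published model: the node representation, the branch probabilities, and the cost and outcome values are all hypothetical.

```python
# Illustrative sketch of rolling back a decision tree.
# A chance node is a list of (probability, subtree) pairs; a leaf is a
# dict holding the terminal pathway's cost and outcome (e.g. QALYs).
# All numbers below are made up for demonstration.

def expected_values(node):
    """Return (expected cost, expected outcome) for a (sub)tree."""
    if isinstance(node, dict):            # leaf: end of a clinical pathway
        return node["cost"], node["outcome"]
    total_cost = total_outcome = 0.0
    for prob, subtree in node:            # chance node: weight each branch
        cost, outcome = expected_values(subtree)
        total_cost += prob * cost
        total_outcome += prob * outcome
    return total_cost, total_outcome

# Hypothetical intervention: 70% chance of success, 30% chance of
# failure requiring costly follow-up care.
treatment_a = [
    (0.7, {"cost": 1000.0, "outcome": 0.9}),
    (0.3, {"cost": 5000.0, "outcome": 0.4}),
]
cost_a, outcome_a = expected_values(treatment_a)
# Expected cost: 0.7 * 1000 + 0.3 * 5000 = 2200.0
```

Repeating the calculation for each treatment alternative at the decision node yields the expected costs and outcomes that are compared across interventions.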