Markov models are similar to decision trees in that patients are modelled as being in defined health
states over time, but are different in that the clinical pathways for patients are only partially displayed
in the model diagram, and key information about how patients progress in the model is dictated
through the use of equations, formulas and tables that are largely hidden in the background. This
makes Markov models more manageable for complex and chronic diseases, but also makes them less
transparent, particularly to less experienced modellers. Markov models tend to be used more for chronic diseases
where the goal is to model disease progression or risk of events over longer periods of time [28]. The
model is constructed based on a finite number of mutually exclusive health states (Markov states), over a
series of equal time periods referred to as Markov cycles. At the end of each cycle, patients may
transition from one state to another or remain in the same health state, based on transition probabilities
that are estimated from trial data or from longer-term observational study data.
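As an illustration, consider a hypothetical three-state model (Well, Progressed, Dead) with equal cycles; the state names and probability values below are invented for this sketch and are not taken from any study. Each row of the transition matrix holds the probabilities of ending the cycle in each state, given the state at the start of the cycle, so every row sums to one:

```python
import numpy as np

# Hypothetical three-state cohort model: Well, Progressed, Dead.
# Rows = state at the start of the cycle, columns = state at the end;
# each row must sum to 1 (patients either move or stay where they are).
states = ["Well", "Progressed", "Dead"]
P = np.array([
    [0.85, 0.10, 0.05],   # from Well
    [0.00, 0.80, 0.20],   # from Progressed
    [0.00, 0.00, 1.00],   # Dead is an absorbing state
])
assert np.allclose(P.sum(axis=1), 1.0)

# The whole cohort starts the model in the Well state.
occupancy = np.array([1.0, 0.0, 0.0])

# One Markov cycle: the state-occupancy vector is multiplied by the matrix.
occupancy = occupancy @ P
print(dict(zip(states, occupancy.round(3).tolist())))
# {'Well': 0.85, 'Progressed': 0.1, 'Dead': 0.05}
```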
For each cycle, each health state is associated with a cost and an effect/outcome measure. To obtain overall expected costs and
outcomes for each treatment alternative, these costs and effects are multiplied by the estimated time patients will spend in each state [25].
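Continuing the same invented example, a minimal cohort sketch attaches a hypothetical cost and utility weight to each state, repeats the matrix multiplication over a fixed number of cycles, and sums per-cycle costs and QALYs weighted by the proportion of the cohort in each state (assuming one-year cycles so that a utility weight accrued over one cycle equals one QALY); discounting and half-cycle correction, which a real model would include, are left out for brevity:

```python
import numpy as np

# Same hypothetical three-state model as above (Well, Progressed, Dead);
# all probabilities, costs and utilities are invented for illustration only.
P = np.array([
    [0.85, 0.10, 0.05],
    [0.00, 0.80, 0.20],
    [0.00, 0.00, 1.00],
])
cost_per_cycle    = np.array([1_000.0, 8_000.0, 0.0])  # cost of one cycle spent in each state
utility_per_cycle = np.array([0.90,    0.60,    0.0])  # quality-of-life weight for each state

occupancy = np.array([1.0, 0.0, 0.0])   # cohort starts in Well
n_cycles = 20
total_cost = 0.0
total_qalys = 0.0

for _ in range(n_cycles):
    occupancy = occupancy @ P                        # transitions applied at the end of the cycle
    total_cost  += occupancy @ cost_per_cycle        # cost weighted by time (proportion) in each state
    total_qalys += occupancy @ utility_per_cycle     # QALYs accumulated in this cycle

print(f"Expected cost per patient over {n_cycles} cycles: {total_cost:,.0f}")
print(f"Expected QALYs per patient over {n_cycles} cycles: {total_qalys:.2f}")
```

Running the same loop with a second transition matrix and cost vector representing an alternative treatment, and comparing the two totals, yields the incremental costs and outcomes used in the economic comparison.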
Even though it is generally accepted that Markov models are well
suited for modelling diseases with ongoing risks [28], they have limited ability to structure very
complex conditions [25]. This is due to the so-called Markov assumption [25,28], which means that