This article shows how to solve the optimal stopping problem for any Markov process in finite discrete time. Specifically, the article focuses on the valuation of American options using simulations of the underlying stochastic process. It also shows that estimating the early-exercise decision rule is equivalent to estimating a sequence of conditional expectations. For Markov processes, these conditional expectations can be estimated with nonparametric regression techniques. This article shows how to approximate the conditional expectations, and the resulting early-exercise decision rule, with spline and local regression.
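The approach described above can be sketched as a backward-induction procedure on simulated paths: at each exercise date, the continuation value is a conditional expectation estimated by nonparametric regression, and the option is exercised when the immediate payoff exceeds it. The sketch below is illustrative, not the article's implementation; all parameters (an at-the-money American put under geometric Brownian motion) and the choice of a Gaussian-kernel local regression as the nonparametric estimator are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the article):
# American put under geometric Brownian motion.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 2_000
dt = T / n_steps
disc = np.exp(-r * dt)  # one-period discount factor

# Simulate paths of the Markov state variable.
z = rng.standard_normal((n_paths, n_steps))
log_incr = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.cumsum(log_incr, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

def payoff(s):
    return np.maximum(K - s, 0.0)

def local_regression(x, y, x_eval, bandwidth):
    # Nadaraya-Watson local-constant regression with a Gaussian kernel:
    # a simple nonparametric estimate of E[y | x].
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / np.maximum(w.sum(axis=1), 1e-12)

# Backward induction: at each date, compare the immediate payoff with
# the regression estimate of the continuation value (a conditional
# expectation) to form the early-exercise decision rule.
cash = payoff(S[:, -1])
for t in range(n_steps - 1, 0, -1):
    cash *= disc  # cash flows discounted back to date t
    itm = payoff(S[:, t]) > 0  # regress on in-the-money paths only
    if itm.sum() > 1:
        cont = local_regression(S[itm, t], cash[itm], S[itm, t], bandwidth=3.0)
        exercise = payoff(S[itm, t]) > cont  # early-exercise decision rule
        idx = np.flatnonzero(itm)[exercise]
        cash[idx] = payoff(S[idx, t])

price = disc * cash.mean()
print(f"Estimated American put value: {price:.3f}")
```

The article's spline-based estimator could be dropped in by replacing `local_regression` with a smoothing-spline fit; the surrounding backward-induction logic is unchanged.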