Abstract
In this paper we present and evaluate painterly rendering techniques that operate within the visual feedback loop of eDavid, our painting robot, which aims to simulate the human painting process. We compare two such methods, as well as their semantics-driven combination, on different objects: one selects from a predefined set of stroke candidates, while the other generates strokes directly via line integral convolution. We discuss the aesthetics of both methods and show results.