We ran a second, post-hoc simulation that included all gestures:
single-finger touches, one-handed pinches, bi-modal
two-finger touches, and palm touches. The goal was not to
distinguish between different gestures, as demonstrated in
[26], but rather to distinguish between users performing a
variety of gestures. Our classifier was trained on 20 samples
per participant (5 samples per gesture), representing 2
seconds of training data. Our test set consisted of all 62
touch events from the testing data collection. Again using
all 55 simulated participant pairings, average accuracy was
97.8% (SD=6.9%). This closely matches the finger-touch-only
result obtained with 2 seconds of training data (both
classifiers were trained on 20 samples per participant).
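The pairwise evaluation described above can be sketched as follows. This is a minimal illustration only: the nearest-centroid classifier, the Gaussian feature model, and the feature dimensionality are assumptions standing in for the paper's actual capacitive features and classifier. The pairing structure (11 participants, C(11, 2) = 55 pairings, 20 training samples and 62 test events per participant) follows the text.

```python
from itertools import combinations
import math
import random
import statistics

random.seed(0)
n_participants, n_train, n_test, n_features = 11, 20, 62, 4

def sample(user, n):
    # Synthetic stand-in for per-user touch features: each user's
    # samples cluster around a user-specific mean (an assumption).
    return [[random.gauss(user, 1.0) for _ in range(n_features)]
            for _ in range(n)]

train = {p: sample(p, n_train) for p in range(n_participants)}
test = {p: sample(p, n_test) for p in range(n_participants)}

def centroid(rows):
    # Per-feature mean of a user's training samples.
    return [sum(col) / len(rows) for col in zip(*rows)]

accuracies = []
# Simulate every two-participant pairing: train on each user's
# 20 samples, then classify all 62 test events from each user.
for a, b in combinations(range(n_participants), 2):
    cents = {u: centroid(train[u]) for u in (a, b)}
    hits = total = 0
    for true_user in (a, b):
        for x in test[true_user]:
            pred = min((a, b), key=lambda u: math.dist(x, cents[u]))
            hits += pred == true_user
            total += 1
    accuracies.append(hits / total)

mean_acc = statistics.mean(accuracies)
print(f"{len(accuracies)} pairings, mean accuracy {mean_acc:.3f}")
```

Averaging per-pairing accuracy, rather than pooling all test events, matches how the reported mean and standard deviation over the 55 pairings would be computed.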