This paper reports work on automatic analysis of laughter
and human body movements in a video corpus of human-human
dialogues. We use the Nordic First Encounters video
corpus where participants meet each other for the first time.
This corpus has manual annotations of participants’ head,
hand and body movements as well as laughter occurrences.
We employ machine learning methods to analyse the corpus
using two types of features: visual features derived from
bounding boxes around participants' heads and bodies, from
which body movements in the video are detected automatically,
and audio speech features based on the participants' spoken contributions.
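To make the visual features concrete, a minimal sketch of one such movement descriptor is given below. It assumes one (x, y, w, h) bounding box per frame and derives frame-to-frame centroid displacement and relative area change; the function name and the exact feature set are illustrative, not necessarily those used in this work.

```python
import numpy as np

def movement_features(boxes):
    """Per-frame movement features from bounding boxes.

    boxes: array of shape (n_frames, 4) holding (x, y, w, h) per frame.
    Returns an (n_frames - 1, 2) array: centroid displacement and
    relative area change between consecutive frames.
    """
    boxes = np.asarray(boxes, dtype=float)
    cx = boxes[:, 0] + boxes[:, 2] / 2.0  # box centroid, x
    cy = boxes[:, 1] + boxes[:, 3] / 2.0  # box centroid, y
    area = boxes[:, 2] * boxes[:, 3]
    disp = np.hypot(np.diff(cx), np.diff(cy))             # frame-to-frame motion
    darea = np.diff(area) / np.maximum(area[:-1], 1e-9)   # relative scale change
    return np.stack([disp, darea], axis=1)
```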
We then correlate the speech and video features
and apply neural network techniques to predict whether a person
is laughing given a sequence of video features.
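The text does not fix a particular architecture; as one plausible instantiation, a small recurrent classifier over fixed-length windows of video features could look like the following sketch, where the window length, feature dimension, and hyperparameters are assumptions:

```python
import numpy as np
from tensorflow import keras

# Hypothetical shapes: windows of 25 frames, 8 video features per frame.
WINDOW, N_FEATURES = 25, 8

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    keras.layers.LSTM(32),                        # summarise the feature sequence
    keras.layers.Dense(1, activation="sigmoid"),  # laughing vs. not laughing
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# x: (n_windows, WINDOW, N_FEATURES) video features,
# y: (n_windows,) binary laughter labels from the manual annotations.
# model.fit(x, y, epochs=10, validation_split=0.2)
```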
The hypothesis is that laughter occurrences and body movements
are synchronized, or at least that there is a significant relation
between laughter activity and the occurrence of body movements.
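One simple way to probe such synchrony, offered here only as an illustration rather than the actual procedure of the paper, is to correlate a binary laughter track with a per-frame movement magnitude over a range of temporal offsets:

```python
import numpy as np

def lagged_correlation(laughter, movement, max_lag=25):
    """Pearson correlation between a binary laughter track and a
    movement-magnitude track at frame offsets in [-max_lag, max_lag].
    A peak near lag 0 would indicate synchrony."""
    laughter = np.asarray(laughter, dtype=float)
    movement = np.asarray(movement, dtype=float)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = laughter[lag:], movement[:len(movement) - lag]
        else:
            a, b = laughter[:lag], movement[-lag:]
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out
```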
Our results confirm the hypothesis of synchrony between body
movements and laughter, but we also emphasise
the complexity of the problem and the need for further investigation
of the feature sets and the algorithms used.