Our goal is to generate physically plausible responses
using a reaction-based motion library. While our approach may
seem simple, we argue that it exploits the concept of the
described burst following an impact without requiring a complicated
implementation. At the same time, we argue that the components
we do include are essential to our method’s success. It may be
tempting to perform basic interpolation following the
first, naive ragdoll-like simulation trajectory described. However, we
stress that active control is critical for generating realistic, lifelike
motion. Foremost, our active simulation’s controller follows a
changing trajectory (based on the transition-to motion) and makes
the character move in a coherent, “conscious” manner. Furthermore, by
moving the simulation toward the desired posture found in the
matched data, both an active reaction and a passive response to the
collision can be computed simultaneously in a physically based manner.
This effect is not achieved if the passive simulation
is simply blended with the final motion, because the interpolation
can happen too slowly (letting the ragdoll’s lack of direction reveal
itself) or too quickly (so that the dynamic effect of the contact,
and possibly even the contact itself, is not fully realized).
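To make the timing issue concrete, such a blend can be expressed as a weight schedule over a chosen duration. The sketch below is illustrative only: the helper names and the smoothstep ease are our assumptions, not the specification of our system.

```python
def blend_weight(t, duration):
    """Smoothstep weight: 0 at the start of the blend, 1 at the end.
    Too short a duration snaps to the target motion before the contact's
    dynamic effect is realized; too long a duration lets the ragdoll's
    lack of direction show through."""
    s = min(max(t / duration, 0.0), 1.0)
    return s * s * (3.0 - 2.0 * s)

def blend_pose(sim_pose, target_pose, t, duration):
    """Interpolate per-joint values from the simulated pose toward the target."""
    w = blend_weight(t, duration)
    return [(1.0 - w) * s + w * g for s, g in zip(sim_pose, target_pose)]
```

The duration parameter is exactly the knob whose extremes produce the two failure modes described above.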
More subtly, but no less importantly, the active
input yields visible secondary effects during simulation, such as physically
based slipping, which is far more desirable than the kinematic foot skate
seen in raw motion capture. Physically based slipping is sometimes appropriate,
for example when a reaction calls for a fast corrective change in stance.
Likewise, desired changes in momentum can be excited
through active control, such as throwing out an arm or a leg for
balance. These physically based characteristics can
change the resulting motion in a manner consistent with the upcoming
motion, but they are only achievable if the simulated character
actively moves in anticipation of the “chosen” response while also
passively responding to the collisions of the current interaction.
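The interplay of active tracking and passive response can be sketched with a minimal proportional-derivative (PD) controller on a single joint. The gains, the one-degree-of-freedom dynamics, and the explicit Euler integration below are simplifying assumptions for illustration, not the actual controller of our system.

```python
def pd_torque(theta, theta_dot, theta_target, kp=300.0, kd=30.0):
    """Active torque pulling the joint toward a (possibly time-varying) target pose."""
    return kp * (theta_target - theta) - kd * theta_dot

def simulate(target, n_steps=200, dt=0.01, inertia=1.0, impact=lambda t: 0.0):
    """Toy 1-DOF joint dynamics: the active PD torque and the external impact
    torque act in the same simulation, so the active reaction and the passive
    response to the collision are computed together."""
    theta, theta_dot = 0.0, 0.0
    trace = []
    for i in range(n_steps):
        t = i * dt
        tau = pd_torque(theta, theta_dot, target(t)) + impact(t)
        theta_dot += (tau / inertia) * dt  # explicit Euler integration step
        theta += theta_dot * dt
        trace.append(theta)
    return trace
```

Driving `target` with the transition-to motion rather than a fixed pose is what gives the simulated character its "conscious" direction while the impact forces are delivered.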
We present a system that selects a motion capture sequence to follow
an impact and synthesizes a transition to this found clip. In
the process, the character responds through an active controller
that operates while the impact forces are being delivered. To
produce our results, we create a physically valid response and blend
it into the desired transition-to motion. An important contribution