ASIMO relies on sound data to stay aware of its surroundings outside its field of vision.
The direction of a sound is calculated from the differences in volume and arrival time of the signals at two separate
microphones. ASIMO can distinguish human voices and footsteps in the sound data, and it turns to look when a person calls
its name or when something falls on the floor.
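The two-microphone localization described above can be sketched as a time-difference-of-arrival (TDOA) estimate. The following is a minimal illustration, not Honda's actual algorithm: the microphone spacing, sign convention, and function names are assumptions, and it uses only the arrival-time cue, ignoring the volume difference.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
MIC_SPACING = 0.2       # m; assumed distance between the two microphones

def estimate_direction(left, right, sample_rate):
    """Estimate a sound source's bearing from two microphone signals.

    Returns an angle in degrees: 0 means straight ahead (equidistant
    from both microphones); positive means toward the right microphone
    under the sign convention assumed here.
    """
    # Cross-correlate to find the lag (in samples) at which the two
    # signals align best. A positive lag means the left signal is a
    # delayed copy of the right one, i.e. the sound hit the right
    # microphone first.
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    tdoa = lag / sample_rate  # time difference of arrival, in seconds

    # Far-field geometry: tdoa = (spacing / c) * sin(angle).
    # Clip for numerical safety before taking the arcsine.
    s = np.clip(SPEED_OF_SOUND * tdoa / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

A real robot would add volume-difference cues, multiple microphone pairs, and classification of the sound (voice, footstep, impact) before deciding whether to turn its head; this sketch covers only the direction estimate.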