1. INTRODUCTION
A robot interacting with humans in the real world must be able to engage in socially
appropriate interaction. In many scenarios, it is not sufficient to achieve only task-based
goals: the robot must also be able to satisfy and manage the social obligations that
arise during human–robot interaction. Building a robot to meet these goals presents a
particular challenge for input processing and interaction management: the robot must
be able to recognize, understand, and respond appropriately to social signals from
multiple humans on multimodal channels including body posture, gesture, gaze, facial
expressions, and speech. Since these signals tend to be noisy, an additional challenge
is for the robot's behavior to be robust to uncertainty.