Implementation
Following the data collection and analysis from human dyads,
we next designed and implemented a system that allows a robot
to take on the instructor role in the same puzzle completion
scenario. We implemented the system on the Meka robot
platform (Figure 1). We use the Robot Operating System
(ROS) to handle execution and communication among the
system components described below (Figure 3).
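As a rough illustration of how such components can exchange state over ROS, the minimal ROS 1 node below publishes the current task phase on a topic for other components to consume; the node name, topic name, and message type are our assumptions for illustration, not the system's actual interfaces.

    #!/usr/bin/env python
    # Minimal ROS 1 sketch of inter-component communication: a node
    # that broadcasts the current task phase. Node and topic names
    # are hypothetical, not those of the actual system.
    import rospy
    from std_msgs.msg import String

    def main():
        rospy.init_node("task_controller")
        phase_pub = rospy.Publisher("/task/phase", String, queue_size=1)
        rate = rospy.Rate(10)  # broadcast the phase at 10 Hz
        while not rospy.is_shutdown():
            phase_pub.publish(String(data="in-task"))
            rate.sleep()

    if __name__ == "__main__":
        main()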
Tracking the participant and task state—A depth camera
mounted in Meka’s chest is used for face tracking. A small
amount of noise is added to the tracking output so that the
robot’s gaze does not remain motionless when directed toward
the user’s face. For puzzle tracking, a separate webcam is
placed on the table and focused on the puzzle; color blob
tracking determines the current location of each of the
differently colored puzzle pieces. Google speech recognition
captures the user’s requests for help and requests to terminate
the task.
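A minimal sketch of the gaze-noise idea, assuming Gaussian jitter applied to a 3D face position; the noise magnitude is a placeholder, not the value used in the actual system.

    import random

    # Hypothetical jitter magnitude (meters); the system's actual
    # noise parameters are not reported here.
    GAZE_NOISE_STD = 0.01

    def jittered_gaze_target(face_position):
        """Perturb a tracked face position (x, y, z) with small Gaussian
        noise so the robot's gaze never appears perfectly frozen."""
        return tuple(c + random.gauss(0.0, GAZE_NOISE_STD)
                     for c in face_position)

    # Example: a face detected one meter in front of the depth camera.
    print(jittered_gaze_target((0.0, 0.0, 1.0)))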
Task controller—This component manages the flow of the
overall scenario. It keeps track of the current task phase (in-task
or between-task) and continuously solves the puzzle from
the current state so the robot can provide hints and help when
requested. The robot can provide general strategies or suggest
moves to make in completing the puzzle. If the user makes
five bad moves in a row or does not make a move for ten
seconds, the robot automatically provides help. The robot
also randomly provides positive feedback when it detects that
the user has made a good move. Providing these hints and
feedback has been shown in previous work to positively affect
people’s motivation to engage in a task [48].
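The help-triggering rule can be captured in a few lines. This sketch uses the thresholds stated above (five consecutive bad moves, ten seconds without a move), while the class and method names are ours, not the system's.

    import time

    MAX_BAD_STREAK = 5     # consecutive bad moves before automatic help
    IDLE_TIMEOUT_S = 10.0  # seconds without any move before automatic help

    class HelpTrigger:
        """Decides when the robot should volunteer help (hypothetical API)."""

        def __init__(self):
            self.bad_streak = 0
            self.last_move_time = time.monotonic()

        def record_move(self, is_good_move):
            """Call whenever the puzzle tracker detects a completed move."""
            self.last_move_time = time.monotonic()
            self.bad_streak = 0 if is_good_move else self.bad_streak + 1

        def should_help(self):
            idle = time.monotonic() - self.last_move_time
            return self.bad_streak >= MAX_BAD_STREAK or idle >= IDLE_TIMEOUT_S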
Generating robot behavior—Depending on the current phase
of the scenario (introduction, in-task, between-task, closing),
the dialogue controller generates the robot’s speech appropriately.
If a request for help is detected, the dialogue controller
generates the response speech and the gesture controller generates
a pointing gesture to the appropriate puzzle piece. The
gaze controller generates gaze shifts according to the personality
being expressed and the current phase of the interaction.
The values in the table are used to create distributions that the
robot draws from when planning and generating gaze shifts
toward the puzzle or toward the user. When gazing toward
the puzzle, the robot looks toward blocks that are in motion,
creating a stronger sense of responsiveness and lifelikeness.
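As a sketch of how such table-derived distributions might drive gaze planning, the snippet below samples a gaze target and fixation length per personality. The parameter values and distribution choices are illustrative assumptions, not the measured values from the human-dyad data.

    import random

    # Illustrative per-personality parameters standing in for the table
    # values: probability of gazing at the user, and the mean/SD of
    # fixation length in seconds.
    GAZE_PARAMS = {
        "extraverted": {"p_user": 0.6, "fix_mean": 1.5, "fix_sd": 0.4},
        "introverted": {"p_user": 0.3, "fix_mean": 3.0, "fix_sd": 0.8},
    }

    def sample_gaze_shift(personality, moving_blocks):
        """Draw the next gaze target and fixation duration."""
        params = GAZE_PARAMS[personality]
        if random.random() < params["p_user"]:
            target = "user_face"
        elif moving_blocks:
            # Prefer puzzle pieces currently in motion for responsiveness.
            target = random.choice(moving_blocks)
        else:
            target = "puzzle_center"
        duration = max(0.2, random.gauss(params["fix_mean"], params["fix_sd"]))
        return target, duration

    print(sample_gaze_shift("extraverted", ["red_block", "blue_block"]))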