Abstract: This paper describes an augmented robot hand that represents what the robot will handle, and presents target object announcement that combines robot gaze with the augmented robot hand. Many studies on preliminary announcement aim to show "what the robot's next action is". However, it is difficult to decide how to segment a robot's actions, especially for complex robots such as humanoids, because they have many possible behaviors. We therefore aim to show "what the robot will handle" instead of "what the robot's next action is". The experiments show that our system has the potential to convey the target object information.