The major goal of our research is the design of a robot system that not only has an anthropomorphic body structure but is also able to communicate with the operator over natural communication channels. Recognizing the human’s hand position to identify a target object for grasping is only one of these channels; within the current project we are designing several other means of man-machine interaction.

We have, for instance, implemented a module for the recognition of the human’s gaze direction. This module locates the position of the human’s eyes so that a rough estimate of the focus of attention is possible. In turn, the operator can see the gaze direction of the robot head, which provides a natural way of identifying the current region of interest on the table. In addition, we have built a speech recognition system that can identify spoken keyword commands [7]. By means of a speech synthesizer, the robot can express its behavioral state. Spoken commands make it possible to guide the robot’s manipulator to a certain target or to terminate an incorrect behavior, such as the selection of a wrong object. Another communication channel is based on touch: a touch-sensitive artificial skin allows the operator to correct the posture of the robot arm and also helps to avoid unintended contact with obstacles.

The goal of our research is the integration of all these communication channels into a complete man-machine interaction scheme. Within this scheme, the redundancy of the different channels is transformed into accuracy. For example, the operator can specify an object by simultaneously pointing at it, looking at it, and calling out its name. In effect, the combination of these different sources of information enables the robot to overcome errors in single sensor channels.
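The idea of turning channel redundancy into accuracy can be sketched as a confidence-weighted combination of per-channel evidence. The following is a minimal illustrative sketch, not the system's actual fusion method: the function name, the candidate objects, and all scores and weights are hypothetical, assuming each channel (pointing, gaze, speech) assigns every candidate object a normalized score.

```python
# Hypothetical sketch of multimodal target selection: each communication
# channel scores every candidate object, and a confidence-weighted sum
# picks the target. All names and numbers below are illustrative.

def fuse_channels(channel_scores, channel_confidences):
    """Combine per-channel object scores into one ranking.

    channel_scores: dict channel -> {object: score in [0, 1]}
    channel_confidences: dict channel -> reliability weight in [0, 1]
    Returns the object with the highest combined score.
    """
    combined = {}
    for channel, scores in channel_scores.items():
        weight = channel_confidences.get(channel, 0.0)
        for obj, score in scores.items():
            combined[obj] = combined.get(obj, 0.0) + weight * score
    return max(combined, key=combined.get)

# Example: the pointing gesture is ambiguous between two objects, but
# gaze and the spoken keyword both favour the cup, so the fused decision
# is robust to the single-channel ambiguity.
scores = {
    "pointing": {"cup": 0.5, "ball": 0.5},
    "gaze":     {"cup": 0.8, "ball": 0.2},
    "speech":   {"cup": 1.0, "ball": 0.0},
}
confidences = {"pointing": 0.9, "gaze": 0.6, "speech": 0.8}
print(fuse_channels(scores, confidences))  # "cup"
```

In this toy example the pointing channel alone cannot decide between the two objects, but the weighted sum over all three channels yields an unambiguous choice, which mirrors how redundancy across channels can compensate for errors or ambiguity in a single sensor channel.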