There is a semantic gap between simple but high-level action instructions like “Pick up the cup with the right hand” and low-level robot descriptions that model, for example, the structure and kinematics of a robot’s manipulator.
Currently, programmers bridge this gap manually by mapping abstract instructions onto parameterized algorithms and onto the rigid body parts of a robot within their control programs.
By linking descriptions of robot components, i.e., sensors, actuators, and control programs, to actions via capabilities in an ontology, we equip robots with knowledge about themselves that allows them to infer which components are required to perform a given action.
A robot that is instructed by an end user, a programmer, or even another robot to perform a certain action can thereby assess by itself whether, and how, it is able to perform the requested action.
This self-knowledge could considerably change the way robots are controlled, programmed, and interacted with, as well as how multiple robots communicate with each other.
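As a minimal sketch of this inference, assuming a drastically simplified self-model with hypothetical action, capability, and component names (the ontology-based representation is naturally richer), the following Python snippet links components to actions via the capabilities they provide and derives which components are required for a requested action:

```python
from dataclasses import dataclass, field

# Hypothetical mapping from actions to the capabilities they require.
ACTION_CAPABILITIES = {
    "PickUpObject": {"Grasping", "ObjectDetection", "ArmMotion"},
}

@dataclass
class Component:
    name: str                 # e.g. "right_gripper" (hypothetical)
    kind: str                 # "sensor", "actuator", or "control_program"
    capabilities: set = field(default_factory=set)

@dataclass
class Robot:
    components: list

    def components_for(self, action: str):
        """Infer which components are required to perform an action,
        or return None if some required capability is not provided."""
        required = ACTION_CAPABILITIES.get(action, set())
        selected = []
        for capability in required:
            providers = [c for c in self.components
                         if capability in c.capabilities]
            if not providers:
                return None   # the robot cannot perform the action
            selected.append(providers[0])
        return selected

robot = Robot(components=[
    Component("right_gripper", "actuator", {"Grasping"}),
    Component("rgbd_camera", "sensor", {"ObjectDetection"}),
    Component("arm_controller", "control_program", {"ArmMotion"}),
])

# Self-assessment: which components does the robot need for the action?
print(robot.components_for("PickUpObject"))
```

In the ontology, the same links are expressed declaratively rather than in code, so that the self-assessment can be answered by reasoning over the robot's own description instead of by hand-written control logic.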