The current system has some limitations. First, the rules
that map the actions in the catalogue to the animation language
must be coded by hand. To alleviate this task, we will
investigate the possibility of a semi-automatic translation based
on the structure of the action itself. The mapping of parameters from
plan actions to animations, addressed by [33], also deserves further
investigation. Second, the semantic labels attached to actions are
not exploited in the current system; in the future, we plan
to use state-of-the-art semantic technologies to support semantic access
to the Action Catalogue. Third, the system currently does
not support multi-agent coordinated actions: all animated
behaviors must be monitored by the animation director and regenerated
in case of conflicts in general, and collisions in particular.
Finally, the system does not support interactivity: the
plan devised by the decision-making component is executed directly
in the 3D virtual world, with no reconsideration
in case of failures. Since the paradigm of HTN planning can easily
be adapted to deal with re-planning [34,35], we intend to extend
the system to interactive animated agents.
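The execute/monitor/re-plan loop we envision can be sketched as follows. This is a minimal, hypothetical illustration: the names (ToyWorld, ToyPlanner, run_with_replanning) and the toy planner are our own stand-ins, not components of the system or of any HTN planner from [34,35].

```python
from dataclasses import dataclass, field

@dataclass
class ToyWorld:
    """Toy stand-in for the 3D virtual world; some actions fail on first try."""
    state_flags: set = field(default_factory=set)
    fail_once: set = field(default_factory=set)

    def execute(self, action):
        if action in self.fail_once:
            self.fail_once.discard(action)   # the failure is transient
            return False                     # the monitor reports a failure
        self.state_flags.add(action)
        return True

class ToyPlanner:
    """Toy stand-in for an HTN planner: plans the actions not yet achieved."""
    def plan(self, done, goal_actions):
        return [a for a in goal_actions if a not in done]

def run_with_replanning(planner, world, goals):
    """Execute the plan step by step; on failure, re-plan from the current state."""
    plan = planner.plan(world.state_flags, goals)
    trace = []
    while plan:
        action = plan.pop(0)
        trace.append(action)
        if not world.execute(action):
            # Re-plan from the (possibly changed) world state,
            # keeping the remaining goal tasks.
            plan = planner.plan(world.state_flags, goals)
    return trace
```

For example, if "open_door" fails once, the loop re-plans and retries it instead of aborting, which is the interactive behavior the current system lacks.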