Schmitz et al. [10] developed a navigation system that seamlessly integrates static maps with dynamic
location-based textual information from a variety of sources. Each information source requires a different kind
of acquisition technique. All acquired information is combined by a context management platform, and then
presented to the user as a tactile or acoustic map depending on the sources available at the current position.
Positioning is achieved by combining an inertial tracking system, RFID technology and GPS; the user is guided to a desired destination by speech output and a haptic cane.
Almost all of these systems depend on several dedicated sensors to accomplish navigation. In this paper we present a navigation system that integrates a building's GIS data with visual landmarks only: any standard camera can be used, together with a small, portable computer such as a netbook. GIS/vision-based localisation is complemented by navigation: at any time, the system traces and validates a route from the current position to a given destination.
Although designed to be integrated into the Blavigator prototype [5], the system can also be incorporated into a robotic platform that must navigate a complex building. Summarising, we present a new navigation algorithm that works in real time, integrating existing GIS data with existing visual landmarks such as objects and signs. We emphasise that, in contrast to other approaches, we neither distribute nor rely on special tags with location codes.
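To illustrate the idea of continuously tracing and validating a route on a GIS graph, consider the following minimal Python sketch. It is purely illustrative: the toy graph, the landmark table and the helper names locate_from_landmark and trace_route are hypothetical placeholders, not the actual Blavigator implementation. The sketch models the building GIS as a weighted graph of waypoints, localises the user from the last recognised visual landmark, and re-traces the shortest route to the destination with Dijkstra's algorithm whenever the position estimate changes.

```python
import heapq

# Hypothetical building GIS: waypoints (nodes) connected by corridors
# (weighted edges). Weights are walking distances in metres.
GIS_GRAPH = {
    "entrance": {"hall": 5.0},
    "hall":     {"entrance": 5.0, "elevator": 8.0, "room_101": 12.0},
    "elevator": {"hall": 8.0, "room_101": 7.0},
    "room_101": {"hall": 12.0, "elevator": 7.0},
}

# Hypothetical landmark table: each visual landmark (sign, object)
# recognised by the camera is tied to the GIS waypoint where it stands.
LANDMARK_TO_NODE = {
    "exit_sign": "entrance",
    "elevator_panel": "elevator",
    "door_plate_101": "room_101",
}

def locate_from_landmark(landmark_id):
    """Localisation: map the last recognised landmark to a GIS waypoint."""
    return LANDMARK_TO_NODE.get(landmark_id)

def trace_route(graph, start, goal):
    """Dijkstra's shortest path over the GIS graph; returns the waypoint list."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return None  # destination unreachable from the current position

# Each newly recognised landmark updates the position estimate, and the
# route is re-traced from it; this is what validates the route, since a
# user who strays from the planned path gets a fresh route from where
# they actually are.
for detected in ["exit_sign", "elevator_panel"]:
    position = locate_from_landmark(detected)
    route = trace_route(GIS_GRAPH, position, "room_101")
    print(f"at {position}, route: {' -> '.join(route)}")
```

In the real system the position estimate would of course come from the vision-based localisation module rather than a lookup table, and the waypoint graph would be extracted from the building's GIS database.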