1. Introduction
Robots and autonomous vehicles use maps to determine their location or pose within an environment, and to plan routes and trajectories [26,40]. Thus, automatic map building is an important topic in mobile robotics [25]. To build a map automatically, the mobile vehicle must be accurately located inside the map that is being built. Simultaneous Localization and Mapping (SLAM) techniques [13,9] have been proposed to address this issue, which can be thought of as a chicken-and-egg problem: an unbiased map is necessary for localization, while an accurate pose estimate is needed to build that map. On the one hand, SLAM can be considered a global rectification [34] of the mobile robot's poses along its route once the loop is closed, i.e., when the robot re-observes a previously seen object. On the other hand, estimating the movement performed by a mobile robot from the observations gathered between two consecutive poses can be considered a local rectification; this topic is usually referred to as egomotion [29]. The methods addressing this problem are called pose registration methods, and they can be used within SLAM. Our main goal is to perform six degrees-of-freedom (6DoF) SLAM in a semi-structured environment, i.e., a man-made indoor environment [14].
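To illustrate why local rectification alone is insufficient, the sketch below chains consecutive egomotion (pose registration) estimates into absolute poses by dead reckoning. For brevity it uses planar SE(2) poses rather than the 6DoF case the paper targets; the function name and the example trajectory are illustrative assumptions, not part of the authors' method.

```python
import math

def compose(pose, delta):
    """Compose an absolute pose (x, y, theta) with a relative
    motion (dx, dy, dtheta) expressed in the robot's own frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Dead reckoning: chain egomotion estimates from the start pose.
pose = (0.0, 0.0, 0.0)
# Drive 1 m forward, then turn 90 degrees, four times (a closed square).
odometry = [(1.0, 0.0, math.pi / 2)] * 4
for delta in odometry:
    pose = compose(pose, delta)
# With perfect estimates the robot returns to the origin; with noisy
# estimates a residual error accumulates, which is exactly what the
# global rectification at loop closure must correct.
```

Each relative estimate only constrains consecutive poses, so estimation errors compound along the route; re-observing a known object at loop closure adds the global constraint that lets SLAM redistribute that accumulated error.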
The disadvantages of stereo cameras in low-textured environments motivate the use of two new 3D sensors. First, we use an SR4000 infrared time-of-flight camera [2], which measures the distance to objects from the time the emitted infrared light takes to reach the object and return to the sensor. The SR4000 provides a set of 3D points, a 2D intensity image, and a map