For robots to work in unstructured environments, they need to be able to perceive the world. Over the past 20 years, we've come a long way, from simple range sensors based on sonar or IR providing a few bytes of information about the world, to ubiquitous cameras and laser scanners. In the past few years, sensors like the Velodyne spinning LIDAR used in the DARPA Urban Challenge and the tilting laser scanner used on the PR2 have given us high-quality 3D representations of the world: point clouds. Unfortunately, these systems are expensive, costing thousands or tens of thousands of dollars, and are therefore out of the reach of many robotics projects.

Very recently, however, 3D sensors have become available that change the game. For example, the Kinect sensor for the Microsoft XBox 360 game system, based on underlying technology from PrimeSense, can be purchased for under $150 and provides real-time point clouds as well as 2D images. As a result, we can expect that most robots in the future will be able to "see" the world in 3D. All that's needed is a mechanism for handling point clouds efficiently, and that's where the open source Point Cloud Library, PCL, comes in. Figure 1 presents the logo of the project. PCL is a comprehensive, free, BSD-licensed library for n-D point clouds and 3D geometry processing. PCL is fully integrated with ROS, the Robot Operating System (see http://ros.org), and has already been used in a variety of projects in the robotics community.