The Kinect is arguably the most popular 3-D camera technology currently on the market.
Its application domain is vast, and it has been deployed in scenarios where accurate geometric
measurements are needed. Despite the maturity of the underlying PrimeSense technology, only a limited
amount of work has been devoted to calibrating the Kinect, especially its depth data. The Kinect is,
however, inevitably prone to distortions, as numerous users have independently confirmed. An effective
way to improve the quality of the Kinect system is to model the sensor's systematic errors through
bundle adjustment. In this paper, a method for modeling the
intrinsic and extrinsic parameters of the infrared and colour cameras, and more importantly the distortions
in the depth image, is presented. Through an integrated marker- and feature-based self-calibration, two
Kinects were calibrated. A novel approach for modeling the depth systematic errors as a function of lens
distortion and relative orientation parameters is shown to be effective. The results show improvements in
geometric accuracy of up to 53% compared with uncalibrated point clouds captured using the popular
RGBDemo software. Systematic depth discontinuities were also reduced, and in the check-plane analysis
the noise of the Kinect point cloud was reduced by 17%.
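The idea of correcting depth as a function of lens distortion can be illustrated with a minimal sketch. This is not the paper's actual model: the Brown radial distortion form, the coefficient values, and the intrinsic parameters below are assumptions chosen for illustration; in practice all of these would come from the bundle-adjustment calibration.

```python
import numpy as np

# Hypothetical Brown radial distortion coefficients (k1, k2) for the IR camera;
# real values would be estimated by the self-calibration, not hard-coded.
K1, K2 = -0.12, 0.05

def undistort_normalized(xn, yn, k1=K1, k2=K2):
    """Remove Brown radial distortion from normalized image coordinates.

    One-step approximation: the radius is taken from the distorted
    coordinates, which is adequate for small distortions.
    """
    r2 = xn ** 2 + yn ** 2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2
    return xn / scale, yn / scale

def depth_pixel_to_point(u, v, z, fx, fy, cx, cy):
    """Back-project a depth pixel (u, v) with range z to a 3-D point in the
    camera frame, undistorting the normalized coordinates first."""
    xn, yn = (u - cx) / fx, (v - cy) / fy
    xu, yu = undistort_normalized(xn, yn)
    return np.array([xu * z, yu * z, z])

# Example: a pixel near the image corner at 2 m depth, with assumed intrinsics.
p = depth_pixel_to_point(600, 440, 2.0, fx=585.0, fy=585.0, cx=320.0, cy=240.0)
```

Because distortion grows with radial distance, the correction mainly shifts points near the image periphery, which is where uncalibrated Kinect point clouds are typically most deformed.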