Time-of-flight (TOF) cameras provide a depth value at each pixel, from which the 3D structure of the scene can be estimated. This new type of active sensor makes it possible to go beyond traditional 2D image processing, directly to depth-based and 3D scene processing. Many computer vision and graphics applications can benefit from TOF data, including 3D reconstruction, activity and gesture recognition, motion capture, and face detection. It is already possible to use multiple TOF cameras to increase scene coverage, and to combine the depth data with images from several colour cameras. Mixed TOF and colour systems can be used for computational photography, including full 3D scene modelling, as well as for illumination and depth-of-field manipulations. This work is a technical introduction to TOF sensors, from architectural and design issues to selected image processing and computer vision methods.
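As an illustration of the underlying principle, a continuous-wave TOF sensor estimates depth from the phase shift between the emitted and received modulated light. The sketch below shows the standard phase-to-depth conversion and the resulting ambiguity (wrap-around) range; the function names and the 30 MHz modulation frequency are illustrative assumptions, not taken from this text.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def phase_to_depth(phase_rad: float, mod_freq_hz: float) -> float:
    """Convert the measured phase shift (radians) of a continuous-wave
    TOF sensor into depth in metres: d = c * phi / (4 * pi * f_mod).
    The factor 4*pi accounts for the round trip (2d) and the 2*pi phase cycle."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

def ambiguity_range(mod_freq_hz: float) -> float:
    """Maximum unambiguous depth: the phase wraps at 2*pi,
    so d_max = c / (2 * f_mod)."""
    return C / (2.0 * mod_freq_hz)

# Illustrative example: 30 MHz modulation (an assumed, typical value)
f_mod = 30e6
print(f"Unambiguous range: {ambiguity_range(f_mod):.2f} m")   # ~5 m
print(f"Depth at phase pi: {phase_to_depth(math.pi, f_mod):.2f} m")  # ~2.5 m
```

Depths beyond the ambiguity range alias back into it, which is why practical systems either limit the working volume or combine several modulation frequencies.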
- Describes the physical principles, the overall hardware architecture, and the electronic design of time-of-flight (TOF) sensors, the pre-processing and enhancement of TOF depth data, and the metric calibration of a TOF camera
- Examines the fusion of range and parallax data, proposing a method for calibrating a mixed TOF and binocular RGB system
- Explains how multiple TOF and colour cameras can be combined to perform full 3D scene reconstruction