**3. RGB-D sensors**

When solving image processing tasks in various fields of research [19–21], it is often helpful to obtain depth information in addition to color information for a better description of the scene. The goal is to capture the geometry of a real-world object and convert it to a digital format with the highest possible accuracy. Depth sensors (RGB-D sensors) are widely used to obtain this information. They project the scene onto a 2D plane called a depth map, which can be converted back to 3D space by reversing the projection. A depth map is usually represented by a monochromatic image in which the intensity of a pixel encodes the distance of the corresponding scene point from the imaging sensor. By combining depth maps with color RGB images, we can create a textured 3D model of the scene.

One of the novel application areas is the reconstruction of 3D surfaces in medical research (scanning of human heads, faces, or other body parts), taking advantage of the noninvasive nature of digital imaging. A geometrically accurate model of the head is applicable in medicine for predicting various diseases, e.g., respiratory syndromes, where the 3D representation of the patient's head and neck offers detailed visualization of craniofacial parameters with a given accuracy.

Many other 3D imaging principles and methods exist, e.g., photogrammetry or laser scanning. These methods provide high-quality 3D information; on the other hand, their application is limited by the size of the scanned object, the size of the scanner, or both. Such devices are often expensive and their scanning time is long. Moreover, in most cases laser scanners are not eye-safe. In the following sections, the basic principles of RGB-D sensors are described.
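The back-projection of a depth map into 3D space mentioned above can be sketched with the standard pinhole camera model: a pixel (u, v) with depth Z maps to X = (u − cx)·Z/fx and Y = (v − cy)·Z/fy. The intrinsic parameters (fx, fy, cx, cy) and the tiny synthetic depth map below are illustrative assumptions, not values from any particular sensor:

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (values in meters) into 3D camera
    coordinates using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    Returns an (N, 3) array of valid points (zero depth = no measurement)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Hypothetical 2x2 depth map; 0.0 marks a missing measurement.
depth = np.array([[1.0, 2.0],
                  [0.0, 4.0]])
pts = depth_map_to_point_cloud(depth, fx=500.0, fy=500.0, cx=1.0, cy=1.0)
print(pts.shape)  # three valid points, each (X, Y, Z)
```

Reversing this mapping (projecting 3D points back onto the image plane) is how the textured model is rendered; a real pipeline would additionally align the depth and RGB cameras using their extrinsic calibration.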
