The data fusion process is similar to texture mapping, a method for adding images as textures to the surfaces of 3D models. The main difference in the proposed data fusion process is that the temperature value of each IR image pixel, rather than its RGB value, is directly extracted and assigned to the corresponding point as a non-graphic attribute [15, 16]. Thus, each point is treated as an object containing several types of data, such as x-y-z coordinates, intensity, temperature, and RGB values.
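To make this concrete, the sketch below shows one possible representation of such a fused point and a lookup that reads a temperature value (rather than an RGB colour) from an IR image via a simple pinhole projection. The names (FusedPoint, sample_temperature) and the projection model are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming a calibrated IR camera and a per-pixel temperature image.
from dataclasses import dataclass

import numpy as np


@dataclass
class FusedPoint:
    """One point carrying both graphic and non-graphic attributes."""
    x: float
    y: float
    z: float
    intensity: float        # laser-return intensity from the scanner
    rgb: tuple              # (r, g, b) sampled from the visible image
    temperature: float      # temperature sampled from the IR image (non-graphic value)


def sample_temperature(point_xyz, temp_image, K, R, t):
    """Project a 3D point into the IR image and read its temperature value.

    temp_image : 2D array of per-pixel temperatures (not RGB values)
    K          : 3x3 intrinsic matrix of the IR camera (assumed known)
    R, t       : rotation and translation from world to camera coordinates
    """
    p_cam = R @ np.asarray(point_xyz) + t          # world -> camera frame
    if p_cam[2] <= 0:                              # point lies behind the camera
        return None
    uv = K @ (p_cam / p_cam[2])                    # perspective projection to pixels
    u, v = int(round(uv[0])), int(round(uv[1]))
    h, w = temp_image.shape
    if 0 <= v < h and 0 <= u < w:
        return float(temp_image[v, u])             # temperature, not colour
    return None                                    # point falls outside the IR frame
```

The key design choice reflected here is that the temperature is stored as its own attribute alongside the coordinates, intensity, and RGB colour, rather than being baked into a texture, so it remains available as a queryable non-graphic value.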