This paper proposes a method for 3D image reconstruction of indoor environments, intended for use with an autonomous observation system. Using the Microsoft Kinect camera and sensor, the depth of the objects or obstacles in the camera's field of view is obtained as raw data. These data are converted into meters to produce individual depth points, which are then combined to form a depth map. This map indicates objects at specific ranges through changes of tone in a black-and-white image.
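As an illustration of this step, the sketch below converts a raw Kinect depth frame into meters and renders it as a grayscale depth map. It assumes the driver already reports depth in millimeters; the `raw_depth` array and the clipping range are illustrative assumptions, not values from the paper.

```python
import numpy as np

def depth_to_map(raw_depth_mm, max_range_m=4.0):
    """Convert a raw Kinect depth frame (millimeters) into a metric depth
    image and an 8-bit grayscale depth map for visualization."""
    depth_m = raw_depth_mm.astype(np.float32) / 1000.0   # millimeters -> meters
    depth_m[raw_depth_mm == 0] = np.nan                  # zero marks invalid pixels

    # Map the usable range to grayscale tones: near = bright, far = dark.
    clipped = np.clip(np.nan_to_num(depth_m, nan=max_range_m), 0.0, max_range_m)
    depth_map = (255 * (1.0 - clipped / max_range_m)).astype(np.uint8)
    return depth_m, depth_map

# Example with a dummy 480x640 frame standing in for real Kinect output.
raw_depth = np.random.randint(500, 4000, size=(480, 640), dtype=np.uint16)
depth_m, depth_map = depth_to_map(raw_depth)
```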
In the next step, the Kinect also captures the color coordinates of the objects it sees, and this color information is combined with the previously obtained depth data to generate a colored 3D point cloud of the environment that the camera observes. A 3D point cloud is a set of three-dimensional coordinates generated densely over the target scene.
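A minimal sketch of how such a colored point cloud can be produced from a depth and color frame is given below. It assumes the two images are already aligned, and the pinhole intrinsics (fx, fy, cx, cy) are nominal Kinect values used only for illustration; the paper does not specify them.

```python
import numpy as np

def depth_to_colored_cloud(depth_m, color_img, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a metric depth image into a colored 3D point cloud using the
    pinhole camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    z = depth_m
    valid = np.isfinite(z) & (z > 0)               # keep only measured pixels
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy

    points = np.stack([x[valid], y[valid], z[valid]], axis=1)   # N x 3, in meters
    colors = color_img[valid].astype(np.float32) / 255.0        # N x 3, in [0, 1]
    return points, colors
```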
Afterward, a suitable algorithm must be implemented to merge all of the 3D point cloud maps as the camera moves forward or changes its angle of sight, as sketched at the end of this section. Finally, this vision system can be implemented on a rescue robot to generate an environmental map and to survey unknown or collapsed sites.
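As a sketch of the merging step, assuming each frame's camera pose has already been estimated (for example by a registration algorithm such as ICP, which the paper leaves unspecified), every per-frame cloud can be transformed into a common world frame and concatenated. The pose format and function name here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def merge_clouds(frames):
    """Merge per-frame point clouds into a single environmental map.

    `frames` is a list of (points, colors, pose) tuples, where `points` is an
    N x 3 array in the camera frame, `colors` is N x 3, and `pose` is the 4 x 4
    camera-to-world rigid transform estimated for that frame (assumed known)."""
    merged_points, merged_colors = [], []
    for points, colors, pose in frames:
        R, t = pose[:3, :3], pose[:3, 3]
        merged_points.append(points @ R.T + t)   # rotate and translate into the world frame
        merged_colors.append(colors)
    return np.vstack(merged_points), np.vstack(merged_colors)
```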