Over the last two decades, the computer vision community has witnessed extensive research on analyzing and understanding scenes containing multiple moving objects (usually referred to as dynamic scenes). Motion segmentation forms a significant part of this analysis: it deals with grouping the visual features extracted from such scenes so that each group corresponds to a different motion.
Depending on the type of visual features extracted, research in motion segmentation can be broadly divided into two categories. The first, termed direct motion segmentation, uses the image intensities of a scene to perform the segmentation. Algorithms in this genre separate the images into patches that share the same 2-D motion, i.e., the apparent motion in the images. The second category of algorithms proceeds by segmenting a sparse set of features corresponding to actual physical points on the objects. Such features can be used to characterize either the 2-D motion of the scene or the 3-D motion of the objects imaged in the scene.
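To make the feature-based setting concrete, the following is a minimal, self-contained sketch (not any specific published algorithm) of segmenting sparse point correspondences between two frames: each feature's displacement vector is treated as its motion descriptor, and a simple two-cluster k-means groups features undergoing the same 2-D translation. All function and variable names here are illustrative.

```python
import numpy as np

def segment_by_displacement(p1, p2, iters=20, seed=0):
    """Toy 2-D motion segmentation: cluster sparse feature
    correspondences (p1[i] <-> p2[i]) into two groups by their
    displacement vectors, using a minimal k-means with k=2.
    This is an illustrative sketch, not a published method."""
    d = p2 - p1                          # per-feature displacement vectors
    rng = np.random.default_rng(seed)
    centers = d[rng.choice(len(d), 2, replace=False)]
    for _ in range(iters):
        # assign each feature to the nearest motion model (cluster center)
        labels = np.argmin(
            ((d[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # re-estimate each motion model as the mean displacement of its group
        for k in range(2):
            if np.any(labels == k):
                centers[k] = d[labels == k].mean(axis=0)
    return labels

# synthetic scene: two rigid point sets moving with different translations
rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, (40, 2))
p1 = pts
p2 = pts.copy()
p2[:20] += [5.0, 0.0]    # object A: translates right
p2[20:] += [0.0, -4.0]   # object B: translates down
labels = segment_by_displacement(p1, p2)
```

A translational model is the simplest possible choice; richer 2-D (affine, homography) or 3-D (rigid-body) motion models lead to the harder segmentation problems discussed in the text, where model fitting and grouping must be solved jointly.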
In our research, we have developed a general algebraic approach that can handle both types of features from two views in a unified manner.
Our latest research aims to solve the motion segmentation problem in the case where point correspondences across multiple views are available.