detection framework is applied to consecutive frames [7].
The residual pixels correspond either to parallax or to independently moving regions. To identify independent motion in the initial detection results, we estimate the geometric constraint errors over four consecutive frames. During the parallax filtering process, the constraint errors are accumulated in a buffer and represented by probabilistic likelihood models. Multiple cues, namely the appearance and motion of detected blobs and the likelihood maps from parallax filtering, are integrated into a JPDAF-based multi-frame tracking model. The approach thus progresses from two-frame processing (phase 1) to four-frame processing (phase 2) and finally to multi-frame processing (phases 3 and 4).
The affine motion detection framework first extracts a number of feature points in each frame using the Harris corner detector. The feature points in consecutive frames I_t and I_{t+1} are then matched by evaluating the cross-correlation of local windows around the feature points. A 2D affine motion model A_t^{t+1} is robustly estimated by fitting the model to at least three pairs of matched points within a RANSAC-based scheme [2]. This affine model is used not only for motion compensation and detection, but also to estimate the homography matrix for the later "Plane+Parallax" representation in phase 2. The affine motion model A_t^{t+1} globally compensates for the motion of pixels from I_t to I_{t+1}. Pixels that do not satisfy this motion model are classified as residual pixels Φ_t.
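The RANSAC-based affine estimation above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the Harris detection and cross-correlation matching steps are replaced by synthetic matched points, and the gross outliers stand in for the residual pixels Φ_t. All names and thresholds here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_affine(src, dst):
    """Least-squares 2D affine model A (2x3) mapping src -> dst; needs >= 3 pairs."""
    X = np.hstack([src, np.ones((len(src), 1))])      # n x 3 design matrix
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)       # solves X @ A.T = dst
    return A.T

def ransac_affine(src, dst, iters=200, thresh=1.0):
    """Robust affine fit: minimal samples of 3 matches, keep the largest consensus set."""
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)  # minimal sample: 3 point pairs
        A = fit_affine(src[idx], dst[idx])
        pred = src @ A[:, :2].T + A[:, 2]
        err = np.linalg.norm(pred - dst, axis=1)      # transfer error per match
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers of the best hypothesis.
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers

# Synthetic matches: a known affine motion plus 20% gross outliers,
# which play the role of parallax / independently moving pixels.
A_true = np.array([[1.01, 0.02, 3.0],
                   [-0.02, 0.99, -2.0]])
src = rng.uniform(0, 100, (100, 2))
dst = src @ A_true[:, :2].T + A_true[:, 2]
dst[:20] += rng.uniform(20, 40, (20, 2))             # outliers violating the model

A_est, inliers = ransac_affine(src, dst)
```

Matches rejected by the consensus test (the `~inliers` set) are exactly the candidates for residual pixels in the detection stage.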
Before computing the geometric errors, the epipolar geometry is also estimated from the matched feature points in every pair of consecutive frames. The fundamental matrix F_t^{t+1} is estimated by a RANSAC-based 8-point algorithm [4]. The corresponding epipoles e_t and e_{t+1} are obtained as the null vectors of the fundamental matrix and of its transpose. As shown in Figure 2(b), the geometric constraint errors are computed on the residual pixels in four consecutive frames. A set of
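The 8-point estimation of the fundamental matrix and the recovery of the epipoles as null vectors can be sketched in NumPy. This is a hedged illustration on synthetic, noiseless correspondences from a known two-view rig; the RANSAC wrapper and real feature matches used in the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def eight_point(x1, x2):
    """Normalized 8-point estimate of F satisfying x2^T F x1 = 0.
    x1, x2: Nx3 homogeneous image points (z = 1), N >= 8."""
    def normalize(x):
        c = x[:, :2].mean(axis=0)
        s = np.sqrt(2) / np.linalg.norm(x[:, :2] - c, axis=1).mean()
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1.0]])
        return x @ T.T, T
    x1n, T1 = normalize(x1)
    x2n, T2 = normalize(x2)
    # Each correspondence contributes one row kron(x2_i, x1_i).
    A = np.einsum('ni,nj->nij', x2n, x1n).reshape(-1, 9)
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)    # least-squares null vector
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt      # enforce rank 2
    F = T2.T @ F @ T1                            # undo the normalization
    return F / np.linalg.norm(F)

# Synthetic two-view rig: camera 1 is [I|0], camera 2 is [R|t].
a = 0.1
R = np.array([[1, 0, 0],
              [0, np.cos(a), -np.sin(a)],
              [0, np.sin(a), np.cos(a)]])
t = np.array([1.0, 0.2, 0.1])
X = rng.uniform(-1, 1, (20, 3)) + np.array([0, 0, 5.0])  # 3D points in front of both cameras
x1 = X / X[:, 2:]                    # projections in view 1
X2 = X @ R.T + t
x2 = X2 / X2[:, 2:]                  # projections in view 2

F = eight_point(x1, x2)
e1 = np.linalg.svd(F)[2][-1]         # epipole in view 1: right null vector, F e1 = 0
e2 = np.linalg.svd(F.T)[2][-1]       # epipole in view 2: null vector of F^T
```

The recovered epipole e1 coincides (up to scale) with the image of the second camera center, which is the geometric meaning of the null vector used in the text.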