This covers only phase one of generic video information extraction. All the generated values are then analyzed at the data and mathematical levels to produce the desired results. These equations and quantifiers not only fill the gaps but also train the algorithm to learn deeper aspects of the incoming information and to fill the gaps between the pieces of received information. These gaps can result from noise, information manipulation, forgery, or other causes.
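As a rough illustration of this gap-filling idea, the sketch below assumes that phase one yields one scalar feature per frame (for example a mean-intensity value) and that frames lost to noise or manipulation appear as NaN entries; the choice of feature and the linear-interpolation rule are assumptions made for illustration, not the exact equations used here.

import numpy as np

def fill_feature_gaps(values):
    """Fill NaN gaps in a 1-D per-frame feature sequence by linear
    interpolation between the nearest valid frames (illustrative only)."""
    values = np.asarray(values, dtype=float)
    valid = ~np.isnan(values)
    if not valid.any():
        raise ValueError("no valid frames to interpolate from")
    idx = np.arange(len(values))
    filled = values.copy()
    # np.interp fills each missing index from the surrounding valid samples
    filled[~valid] = np.interp(idx[~valid], idx[valid], values[valid])
    return filled

# Example: frames 2 and 3 were corrupted by noise or manipulation
frame_feature = [0.90, 0.85, np.nan, np.nan, 0.60, 0.55]
print(fill_feature_gaps(frame_feature))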
For the event identification and trend identification processes, the results and observations can be subjected to both probabilistic and statistical calculations. To recover missing trends or to generate scenes from these probabilities and functions, the algorithm can help predict the nature of the video without analyzing the complete set of video frames.
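One way to read this step is as a sampling-based estimate: score only a random subset of frames and infer the event probability statistically, rather than scanning the whole video. The sketch below assumes a hypothetical per-frame event score in [0, 1] and a simple normal-approximation confidence interval; neither the score nor the interval is prescribed by this work.

import random
import statistics

def estimate_event_probability(frame_scores, sample_size=30, seed=0):
    """Estimate how likely a video is to contain an event from a random
    sample of per-frame scores, instead of scanning every frame.
    `frame_scores` holds one score in [0, 1] per frame (assumed input)."""
    rng = random.Random(seed)
    sample = rng.sample(frame_scores, min(sample_size, len(frame_scores)))
    mean = statistics.fmean(sample)
    stdev = statistics.stdev(sample) if len(sample) > 1 else 0.0
    # Rough 95% interval around the sample mean (normal approximation)
    half_width = 1.96 * stdev / (len(sample) ** 0.5)
    return mean, (mean - half_width, mean + half_width)

# Example: scores produced by some per-frame event detector (assumed)
scores = [0.1] * 900 + [0.9] * 100
prob, ci = estimate_event_probability(scores)
print(f"estimated event probability ~ {prob:.2f}, 95% CI ~ {ci}")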
For each column height and length, a single upper-bound relevance and a single lower-bound relevance can be generated to fill the gaps in the information.
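The sketch below gives one possible interpretation of these column-wise bounds: for every column of a frame (or feature matrix), a lower and an upper bound are taken from the valid entries and missing entries are filled inside that range. Reading the relevance bounds as the column minimum and maximum, and filling with their midpoint, are assumptions made purely for illustration.

import numpy as np

def column_bound_fill(matrix):
    """For each column, derive lower- and upper-bound 'relevance' values
    from the valid entries and use their midpoint to fill missing (NaN)
    entries, keeping every value inside the derived bounds.
    The min/max reading of the bounds is an illustrative assumption."""
    m = np.asarray(matrix, dtype=float)
    filled = m.copy()
    for col in range(m.shape[1]):
        column = m[:, col]
        valid = column[~np.isnan(column)]
        if valid.size == 0:
            continue  # nothing to bound this column with
        lower, upper = valid.min(), valid.max()
        midpoint = (lower + upper) / 2.0
        column = np.where(np.isnan(column), midpoint, column)
        # Clamp every entry to the derived [lower, upper] range
        filled[:, col] = np.clip(column, lower, upper)
    return filled

frame = np.array([[0.2, 0.8, np.nan],
                  [np.nan, 0.6, 0.4],
                  [0.3, np.nan, 0.5]])
print(column_bound_fill(frame))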