The data filtering phase collects data from the sensor arrays and extracts the key information from it. For real-time signal streams, the processing flow continuously calculates signal features. Raw data received from the sensors is initially treated as discrete signals. For example, the sensor array applies peak-extraction and filtering functions to the
raw data and then sends the filtered data to the feature extractor, which extracts mathematical features, such as the mean, first-order differences, and peak-to-peak intervals, as basic records. Finally, it passes the extracted signal features to a context generator, which produces the preliminary context for the upper layers and collects only the necessary contexts as probabilistic conditions. In this procedure, the context interpreter plays a key role in the context-aware service, which includes
• getting the preliminary contexts from the lower layer, which reports the signal’s current status; and
• fetching the necessary domain knowledge from the context repository module in the datacenter.

Finally, the data filtering phase produces a sequence of probabilistic conditions for further operations on the decision tree.
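The following is a minimal sketch of this filtering and feature-extraction flow, assuming the raw stream arrives as a NumPy array of evenly sampled readings. The function names (extract_features, to_probabilistic_conditions) and the thresholds are illustrative placeholders for the domain knowledge fetched from the context repository, not the system's actual API.

```python
# Sketch of the data filtering phase: filter the raw discrete signal, extract
# basic features, and keep only the contexts needed as probabilistic conditions.
# Names and thresholds are hypothetical.
import numpy as np

PEAK_THRESHOLD = 0.8  # hypothetical normalized amplitude for peak extraction

def extract_features(raw, sample_interval=1.0):
    """Filter a raw discrete signal and return the basic feature record."""
    # A simple moving-average filter stands in for the sensor array's filtering step.
    kernel = np.ones(5) / 5.0
    filtered = np.convolve(raw, kernel, mode="same")

    # Peak extraction: local maxima above the threshold.
    peaks = [i for i in range(1, len(filtered) - 1)
             if filtered[i] > filtered[i - 1]
             and filtered[i] > filtered[i + 1]
             and filtered[i] > PEAK_THRESHOLD]

    # Basic mathematical features used as the preliminary record.
    return {
        "mean": float(np.mean(filtered)),
        "first_order_diff": np.diff(filtered),
        "peak_to_peak_intervals": np.diff(peaks) * sample_interval,
    }

def to_probabilistic_conditions(features, mean_limit=0.5):
    """Context generator: keep only the necessary contexts as probabilistic conditions."""
    # The condition names and limits are placeholders for repository-supplied domain knowledge.
    intervals = features["peak_to_peak_intervals"]
    return {
        "mean_exceeds_limit": features["mean"] > mean_limit,
        "irregular_peaks": len(intervals) > 1 and float(np.std(intervals)) > 1.0,
    }
```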
The attribute decision phase uses the probabilistic conditions to generate a decision tree, which it then uses to identify events. For example, to determine a levee collapse, this phase would (as sketched in the code following this list)

• build the cumulative distribution function (CDF) of the levee collapse event using a given structural strength detected by sensors, or
• build the CDF of the levee collapse event using a given water level detected by sensors.
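One way to picture the second case is an empirical CDF built from a historical record of water levels observed at past collapse events; the decision tree can then branch on the probability implied by the current reading. The sample data, thresholds, and function names below (empirical_cdf, collapse_water_levels) are hypothetical illustrations, not values from the deployed system.

```python
# Sketch of building the CDF of the levee collapse event from a given water level.
# The sample data and names are illustrative only.
import numpy as np

def empirical_cdf(samples):
    """Return a function F(x) = P(sample <= x) from observed samples."""
    ordered = np.sort(np.asarray(samples, dtype=float))

    def cdf(x):
        # Fraction of observed collapse-triggering levels at or below x.
        return float(np.searchsorted(ordered, x, side="right")) / len(ordered)

    return cdf

# Hypothetical water levels (meters) recorded when past collapses occurred.
collapse_water_levels = [3.1, 3.4, 3.6, 3.8, 4.0, 4.2, 4.5]
collapse_cdf = empirical_cdf(collapse_water_levels)

# A decision-tree node can compare this probability against a branching threshold.
current_level = 3.9
print(f"P(collapse-triggering level <= {current_level} m) = {collapse_cdf(current_level):.2f}")
```

The same construction applies to the structural-strength case by substituting strength readings for water levels.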