The results and analysis presented in Section 4.2 show that the SCENT model is capable of discovering interesting hierarchical structure in unlabelled data, as well as providing a straightforward code vector reduction. The model has succeeded in producing a hierarchical tree structure that reflects the clear visual differences between the images, and is capable of learning stable hierarchical clusters that include both super- and sub-categories from semantic data.
The development of classification hierarchy models such as HART and SCENT allows for rapid high-level analysis of data sets. Critical features of the data, as well as features of little statistical significance, are easily identified at several levels within any natural hierarchy of the data set. The top-down approach of SCENT supports the development of flexible prototype clusters based on hierarchical code vectors of an unlabelled data set.
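The top-down construction of hierarchical prototype clusters can be illustrated with a minimal sketch: each node holds a prototype (code vector), and the node's data are recursively split into two child clusters. This is an illustrative assumption using a simple recursive 2-means split, not the actual SCENT algorithm; the helper names `two_means` and `build_tree` are hypothetical.

```python
# Illustrative sketch of top-down prototype refinement (NOT the actual
# SCENT algorithm): each node stores a code vector (centroid), and its
# points are recursively split into two child clusters.
import random

def mean(points):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def two_means(points, iters=20, seed=0):
    """Partition points into two clusters with a basic 2-means pass."""
    rng = random.Random(seed)
    centroids = rng.sample(points, 2)
    groups = (points, [])
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            nearer = 0 if dist2(p, centroids[0]) <= dist2(p, centroids[1]) else 1
            groups[nearer].append(p)
        if not groups[0] or not groups[1]:
            break  # degenerate split; stop refining
        centroids = [mean(groups[0]), mean(groups[1])]
    return centroids, groups

def build_tree(points, depth):
    """Build a prototype tree: {'code': centroid, 'children': [subtrees]}."""
    node = {"code": mean(points), "children": []}
    if depth > 0 and len(points) >= 2:
        _, (left, right) = two_means(points)
        if left and right:
            node["children"] = [build_tree(left, depth - 1),
                                build_tree(right, depth - 1)]
    return node

data = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
tree = build_tree(data, depth=1)
# Root code vector summarises all data; child code vectors
# summarise the two well-separated groups.
```

Each level of the resulting tree gives a coarser or finer set of code vectors, which is one way to picture the "super- and sub-category" structure discussed above.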
The full theoretical relationship between training data in general and the structures produced by SCENT remains under investigation.