The field maps were digitized and registered to an image mosaic
created from the Nikon D3X images. Because of geometric mismatches
between the Nikon D3X images and the AisaEAGLE data, the polygons
were subsequently edited against the AisaEAGLE mosaic so that they
matched the targets more accurately for the classification. Each tree
was drawn as an individual polygon, and the largest fields were split
into smaller segments, each covering mainly a single crop. Polygons
where target species were not visually detected were omitted. A
subset of the polygons was created that contained only the most dominant species, which were included in the classification and
are hereafter referred to as target species (maize, mango, sugarcane,
banana, yam, acacia and grevillea). Additionally, shadows from
trees were collected as polygons and included in the classification.
This subset was further divided into training polygons (30% of the
polygons) and validation polygons (70% of the polygons) (Table 1).
The remaining 192 polygons containing 28 non-target species
were used to assess the performance of the classification algorithm
in areas known to contain none of the target species.
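
As an aside, the per-class 30/70 split of the reference polygons can
be sketched in a few lines of Python; the geopandas-based workflow,
the species column name and the random seed below are assumptions
made for illustration, not tools reported here.

    import geopandas as gpd
    import numpy as np

    def split_polygons(polygons: gpd.GeoDataFrame,
                       class_column: str = "species",
                       train_fraction: float = 0.3, seed: int = 0):
        """Randomly split the polygons of each class into training
        and validation sets."""
        rng = np.random.default_rng(seed)
        train_indices = []
        for _, group in polygons.groupby(class_column):
            # Keep at least one training polygon per class.
            n_train = max(1, round(train_fraction * len(group)))
            train_indices.extend(rng.choice(group.index.to_numpy(),
                                            size=n_train, replace=False))
        is_train = polygons.index.isin(train_indices)
        return polygons[is_train], polygons[~is_train]

Splitting at the polygon level, as in the text, is what prevents
training and validation pixels from ever sharing a field or tree crown.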
Training and validation pixels for each class were collected
within the polygons as single pixels (Table 1; Fig. 1a). Pixels were
collected from points where a clear signal from the target species
could be obtained; for example, pixels were not collected from bare
soil within maize fields or from the heavily shadowed edges of the
trees. Because the polygons were divided into training and validation
groups before the pixels were collected, training and validation
pixels were never located in the same fields or tree crowns.
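
In the study the good-signal points were picked visually; purely as
an illustration, an automated stand-in could draw random single-pixel
samples inside a polygon and reject heavily shadowed pixels with a
brightness threshold. The sketch below follows that idea; the
rasterio/shapely calls, the threshold value and all names are
assumptions rather than the procedure actually used.

    import numpy as np
    import rasterio
    from shapely.geometry import Point

    def sample_pixels(src, polygon, n_pixels, rng,
                      min_brightness=0.02, max_tries=10000):
        """Draw single-pixel spectra inside a polygon, skipping dark
        (shadowed) pixels."""
        minx, miny, maxx, maxy = polygon.bounds
        spectra = []
        for _ in range(max_tries):
            if len(spectra) == n_pixels:
                break
            x, y = rng.uniform(minx, maxx), rng.uniform(miny, maxy)
            if not polygon.contains(Point(x, y)):
                continue
            row, col = src.index(x, y)  # map coordinates -> raster indices
            spectrum = src.read(window=((row, row + 1),
                                        (col, col + 1))).ravel()
            # Crude stand-in for the visual screening described above.
            if spectrum.mean() > min_brightness:
                spectra.append(spectrum)
        return np.array(spectra)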
Because of the small pixel size, it was not possible to collect
ground reference data at the same scale. Further validation was
therefore carried out using the validation polygons, which match the
scale of the ground reference data collected in the field (Table 1);
this is hereafter referred to as polygon-wise validation.
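
The polygon-wise validation amounts to a majority vote: for each
validation polygon, the most frequent class among the classified
pixels it contains is compared with the species recorded for that
polygon in the field. A minimal sketch, assuming the classified map
is a single-band raster of class codes and each polygon carries a
matching class_id attribute (both names are illustrative):

    import numpy as np
    import rasterio
    from rasterio.mask import mask

    def polygonwise_accuracy(classified_raster, validation_polygons,
                             label_column="class_id", nodata=0):
        """Share of validation polygons whose majority pixel class
        matches the field label."""
        correct = counted = 0
        with rasterio.open(classified_raster) as src:
            for _, poly in validation_polygons.iterrows():
                # Clip the classified map to the polygon footprint.
                clipped, _ = mask(src, [poly.geometry], crop=True,
                                  nodata=nodata)
                pixels = clipped[clipped != nodata]
                if pixels.size == 0:
                    continue
                values, counts = np.unique(pixels, return_counts=True)
                correct += int(values[np.argmax(counts)] == poly[label_column])
                counted += 1
        return correct / counted if counted else float("nan")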