Lueschow et al.
(1994), Ito et al. (1995).
The question is whether there are computational approaches which accommodate
the constraints listed above. Namely, can efficient selection be achieved over a
range of scales, with a minimal number of false negatives and a small number of
false positives, using a small number of functional layers? Can the computation
be parallelized, making minimal assumptions about the information carried by an
individual neuron? Such computational approaches, if identified, may serve as
models for information processing beyond V1 and provide a source of hypotheses
which can be experimentally tested.
The aim of this paper is to present such a model in the form of a neural network
architecture which is able to select candidate locations for any object representation
evoked in a memory model; the network does not need to be modified to
detect different objects. The network has a sequence of layers similar to those
found in visual cortex. The operations of all units are simple integer counts: a
unit is either on or off, and its status depends on the number of on units feeding
into it. Selection at a fixed resolution is invariant over a range of scales and
across all locations, as well as to small linear and non-linear deformations.
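To make the unit-level operation concrete, the sketch below implements one layer of such on/off units: each unit counts the number of active inputs in a local receptive field and turns on when that integer count reaches a threshold. This is an illustrative sketch only; the function name, window sizes, and threshold values are assumptions chosen for exposition and are not the parameters of the network described in this paper.

import numpy as np

def threshold_count_layer(input_map, field_size=3, threshold=4):
    # Hypothetical parameters for illustration; not taken from the paper.
    # Each output unit is on (1) if the integer count of on inputs in its
    # field_size x field_size receptive field reaches the threshold.
    h, w = input_map.shape
    pad = field_size // 2
    padded = np.pad(input_map.astype(int), pad)
    output = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            count = padded[i:i + field_size, j:j + field_size].sum()
            output[i, j] = 1 if count >= threshold else 0
    return output

# Usage: cascade two such layers over a binary feature map.
features = (np.random.rand(32, 32) > 0.7).astype(int)
layer1 = threshold_count_layer(features, field_size=3, threshold=4)
layer2 = threshold_count_layer(layer1, field_size=5, threshold=8)

Because each unit only compares a count of on inputs to a threshold, a cascade of such layers can be computed fully in parallel and requires no assumptions about graded information carried by individual neurons.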