In the network as described in this paper, a new set of pseudopatterns was chosen each
time a new pattern had to be learned. In other words, each time a new pattern had to be
learned, the pseudo-inputs to the final-storage memory were randomly chosen. Another
technique would be to always use the same set of pseudo-inputs. Or it might even be
possible for the network to gradually learn a set of pseudo-inputs that would generate
pseudopatterns that reflect the contents of the final-storage area better than pseudopatterns
with random pseudo-inputs. For example, if we have a yes-no classifier network (i.e., all
learned items produce either a 1 or 0 on output), pseudo-inputs that produced outputs close
to 0 or 1 might be “better” than pseudo-inputs that were purely random. Another
important issue: must the early-processing area converge for the entire set {Pi, ψ1, ψ2, …,
ψn}, as is done here, or only for Pi, as is done in Robins (1995)? These questions are all the
focus of ongoing research.
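As a concrete illustration of the scheme described above, the following minimal Python sketch shows how a fresh set of random pseudo-inputs could be drawn each time a new pattern is to be learned, and how the final-storage network's responses to them would form the pseudopatterns. The two-layer sigmoid network, its sizes, and the names forward and make_pseudopatterns are illustrative assumptions, not the paper's actual implementation.

import numpy as np

def forward(W_hidden, W_out, x):
    # One pass through an assumed two-layer sigmoid network (the final-storage area).
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    return sigmoid(W_out @ sigmoid(W_hidden @ x))

def make_pseudopatterns(W_hidden, W_out, n_pseudo, n_inputs, rng):
    # Draw a fresh set of random binary pseudo-inputs and record the outputs the
    # final-storage network produces for them: the set {psi_1, ..., psi_n}.
    pseudo_inputs = rng.integers(0, 2, size=(n_pseudo, n_inputs)).astype(float)
    pseudo_outputs = np.array([forward(W_hidden, W_out, x) for x in pseudo_inputs])
    return list(zip(pseudo_inputs, pseudo_outputs))

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(16, 32))   # hypothetical final-storage weights
W_out = rng.normal(size=(1, 16))       # single output unit, as in the yes-no classifier example
pseudopatterns = make_pseudopatterns(W_hidden, W_out, n_pseudo=20, n_inputs=32, rng=rng)

# For the yes-no classifier idea above, pseudo-inputs whose outputs lie near 0 or 1
# could be kept in preference to purely random ones (threshold 0.1 is arbitrary):
confident = [(x, y) for x, y in pseudopatterns if min(y[0], 1 - y[0]) < 0.1]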
