From the perspective of self-organization [6, 7, 8, 9], it is interesting to study how Hebbian, self-limiting synaptic plasticity rules can emerge from a set of governing principles, expressed in terms of objective functions. Information-theoretical measures, such as the entropy of the output firing-rate distribution, have been used in the past to generate rules for either intrinsic or synaptic plasticity [10, 11, 12]. The objective function we work with here can be motivated from the Fisher information, which measures the sensitivity of a probability distribution to a parameter, defined in this case with respect to the Synaptic Flux operator [13], which measures the overall increase of the synaptic weights. Minimizing the Fisher information corresponds, in this context, to looking for a steady-state solution in which the output probability distribution is insensitive to local changes in the synaptic weights. This method then constitutes an implementation of the stationarity principle, which states that once the features of a stationary input distribution have been acquired, learning should stop, avoiding runaway growth of the synaptic weights.
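As a minimal numerical sketch of the quantity being minimized (not the paper's actual plasticity rule), one can estimate the Fisher information F(θ) = E[(∂θ log p(y|θ))²] of an output distribution with respect to a scalar parameter θ, which here stands in for the synaptic-flux parameter. The Gaussian family and the finite-difference estimator below are illustrative assumptions; for a Gaussian with mean θ and standard deviation σ, the exact value is F = 1/σ², which the Monte-Carlo estimate should reproduce.

```python
import numpy as np

def fisher_information(theta, sigma, n_samples=200_000, eps=1e-3, seed=0):
    """Monte-Carlo estimate of the Fisher information of p(y|theta),
    here an illustrative Gaussian N(theta, sigma^2), using a central
    finite difference of the log-likelihood in the parameter."""
    rng = np.random.default_rng(seed)
    # Samples drawn from the output distribution at the current parameter.
    y = rng.normal(theta, sigma, size=n_samples)

    def log_p(t):
        # Log-density of N(t, sigma^2) evaluated at the fixed samples y.
        return -0.5 * ((y - t) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

    # Score function d/dtheta log p(y|theta), via central differences.
    score = (log_p(theta + eps) - log_p(theta - eps)) / (2 * eps)
    # Fisher information is the expected squared score.
    return np.mean(score ** 2)

sigma = 2.0
F_hat = fisher_information(theta=0.5, sigma=sigma)
# For a Gaussian mean parameter the exact value is 1/sigma^2 = 0.25,
# independent of theta; a small F marks a parameter direction to which
# the output distribution is insensitive -- the stationarity condition
# sought in the text.
print(F_hat)
```

In this toy setting F is constant in θ, so minimization is trivial; in the setting of the text, F depends on the synaptic weights, and driving it down by gradient descent yields self-limiting weight updates.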