3. Model
Here we restrict attention to linear ESNs, in which both the transfer function of the reservoir
nodes and the output layer are linear functions (Figure 1). The readout layer is a linear combination
of the reservoir states. The readout weights are determined using supervised learning
techniques, where the network is driven by a teacher input and its output is compared with
a corresponding teacher output to estimate the error. Then, the weights can be calculated using
any closed-form regression technique [10] in offline training contexts. Mathematically, the
input-driven reservoir is defined as follows. Let N be the size of the reservoir. We represent
the time-dependent inputs as a column vector u(t), the reservoir state as a column vector x(t),
and the output as a column vector y(t). The input connectivity is represented by the matrix V
and the reservoir connectivity is represented by an N × N weight matrix W. For simplicity, we assume one input signal and one output, but the notation can be extended to multiple inputs
and outputs. The time evolution of the linear reservoir is given by:
x(t + 1) = Wx(t) + Vu(t). (1)
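The state update in Eq. (1) can be sketched in NumPy as follows; the reservoir size, time horizon, and the scale of the random weights are illustrative assumptions, not values from the text (the small weight scale simply keeps the spectral radius of W below 1 so the driven states stay bounded):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and weight scale (assumptions for this sketch).
N, T = 50, 200

W = rng.normal(scale=0.1, size=(N, N))  # N x N reservoir weight matrix W
V = rng.normal(size=(N, 1))             # input weight matrix V (one input signal)
u = rng.normal(size=(T, 1))             # scalar input signal u(t)

# Eq. (1): x(t+1) = W x(t) + V u(t), starting from a zero state.
x = np.zeros((T + 1, N))
for t in range(T):
    x[t + 1] = W @ x[t] + V @ u[t]
```

Because the update is linear, x(t) is simply a weighted sum of past inputs, x(t) = Σ_k W^k V u(t−1−k), which is what makes the closed-form readout training below possible.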
The output is generated by multiplying an output weight vector Wout of length N
with the reservoir state vector x(t):
y(t) = Woutx(t). (2)
The coefficient vector Wout is calculated to minimize the squared output error E = ⟨||ŷ(t) −
y(t)||²⟩ given the target output ŷ(t). Here, || · || is the L2 norm and ⟨·⟩ the time average. The
output weights are calculated using ordinary linear regression using a pseudo-inverse form:
Wout = (XᵀX)⁻¹XᵀY, (3)
where each row t in the matrix X corresponds to the state vector x(t), and Y is the target output
matrix, whose rows correspond to target output vectors ŷ(t).
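The whole training procedure of Eqs. (1)–(3) can be sketched end to end; the sizes, random weights, and the teacher signal (here simply reproducing the current input) are illustrative assumptions, and the normal-equations solution of Eq. (3) is computed via least squares for numerical stability:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes and weight scale (assumptions for this sketch).
N, T = 50, 500

W = rng.normal(scale=0.1, size=(N, N))  # reservoir weight matrix W
V = rng.normal(size=(N,))               # input weights V (one input)
u = rng.normal(size=(T,))               # scalar input signal u(t)

# Collect the driven states: row t of X is the state x(t), per Eq. (1).
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = W @ x + V * u[t]
    X[t] = x

# Hypothetical teacher signal for illustration: reproduce the input u(t).
Y = u.reshape(-1, 1)

# Eq. (3): Wout = (X^T X)^{-1} X^T Y, solved here with least squares
# instead of forming the inverse explicitly.
W_out, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Eq. (2): y(t) = Wout x(t), and the squared error E of Eq. (2)'s output.
y_hat = X @ W_out
mse = np.mean((y_hat - Y) ** 2)
```

Solving the least-squares problem directly avoids explicitly inverting XᵀX, which can be ill-conditioned when the reservoir states are strongly correlated; ridge (Tikhonov) variants of Eq. (3) are another common remedy in the ESN literature.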