When using the dataset, we usually divide it into minibatches (see Stochastic Gradient Descent).
We encourage you to store the dataset in shared variables and to access it by minibatch
index, given a fixed and known batch size. The reason for shared variables is related to
using the GPU: there is a large overhead when copying data into GPU memory. If you copied
data on request (each minibatch individually, when needed), as the code would do without
shared variables, this overhead would keep the GPU code from being much faster than the
CPU code (it might even be slower). With your data in Theano shared variables, however,
Theano can copy the entire dataset to the GPU in a single call when the shared variables
are constructed. Afterwards, the GPU can access any minibatch by taking a slice of these
shared variables, without copying any information from CPU memory, thereby bypassing the
overhead. Because the datapoints and their labels are usually of different types (labels
are typically integers while datapoints are real numbers), we suggest using separate
variables for data and labels. We also recommend using separate variables for the training
set, validation set and test set, to make the code more readable (resulting in 6 different
shared variables).
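
As a rough sketch of this setup (assuming the raw data have already been loaded into
(data, labels) pairs named train_set, valid_set and test_set; these names, the helper
shared_dataset and the batch size of 500 are illustrative choices, not a fixed API):

    import numpy
    import theano
    import theano.tensor as T

    def shared_dataset(data_xy):
        """Load a (data, labels) pair into shared variables.

        Both arrays are stored as floatX so that Theano can place
        them in GPU memory; the labels are then cast back to int32
        so they can be used as integer indices.
        """
        data_x, data_y = data_xy
        shared_x = theano.shared(numpy.asarray(data_x,
                                               dtype=theano.config.floatX))
        shared_y = theano.shared(numpy.asarray(data_y,
                                               dtype=theano.config.floatX))
        return shared_x, T.cast(shared_y, 'int32')

    # the 6 shared variables: data and labels for each of the three sets
    train_set_x, train_set_y = shared_dataset(train_set)
    valid_set_x, valid_set_y = shared_dataset(valid_set)
    test_set_x, test_set_y = shared_dataset(test_set)

    batch_size = 500  # fixed, known batch size

    # a minibatch is just a slice of the shared variables; no data
    # is copied from CPU memory when the GPU evaluates such a slice
    index = 2  # which minibatch to access
    data = train_set_x[index * batch_size: (index + 1) * batch_size]
    label = train_set_y[index * batch_size: (index + 1) * batch_size]

Note that slicing a shared variable yields a symbolic variable, not a NumPy array; in
practice one would typically pass the minibatch index as an input to a compiled
theano.function and supply such slices through its givens parameter.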