Deep learning architectures, specifically those built from artificial neural networks (ANNs), date back at least to the Neocognitron introduced by Kunihiko Fukushima in 1980.[19] ANNs themselves date back even further; the challenge was how to train networks with multiple layers. In 1989, Yann LeCun et al. applied the standard backpropagation algorithm, which had existed since 1974,[20] to a deep neural network in order to recognize handwritten ZIP codes on mail. Although the algorithm worked, training on this dataset took approximately three days, making it impractical for general use.[21] In 1995, Brendan Frey trained a network containing six hidden layers and several hundred hidden units using the wake-sleep algorithm, devised by Peter Dayan and Geoffrey Hinton;[22] even so, training took two days.