Successes and Failures
It's fine in theory to talk about neural nets that tell males from females, but if that were all they were useful for, they would be a sad project indeed. In fact, neural nets have been enjoying growing success in a number of fields, and, significantly, their successes tend to come in fields that posed large difficulties for symbolic AI. Neural networks are, by design, pattern processors: they can identify trends and important features even in relatively complex information. What's more, they can work with less-than-perfect information, such as blurry or static-filled pictures, which has been an insurmountable difficulty for symbolic AI systems. This capacity for discerning patterns allows neural nets to read handwriting, detect promising sites for new mining and oil extraction, predict stock-market movements, and even learn to drive.
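To make the pattern-processing claim concrete, here is a minimal sketch (our illustration; the chapter describes no specific system): a single-layer network is trained with the delta rule on noisy copies of two tiny pixel patterns, and afterwards still classifies a heavily corrupted input correctly. All sizes, noise levels, and names are arbitrary demonstration choices.

```python
# Illustrative only: a tiny one-layer network learns two 9-pixel prototype
# "images" from noisy examples, then recognizes a heavily corrupted input.
import numpy as np

rng = np.random.default_rng(0)

# Two 3x3 prototype patterns, flattened: a vertical bar and a horizontal bar.
vertical   = np.array([0,1,0, 0,1,0, 0,1,0], dtype=float)
horizontal = np.array([0,0,0, 1,1,1, 0,0,0], dtype=float)
prototypes = np.stack([vertical, horizontal])
targets    = np.array([0.0, 1.0])   # 0 = vertical, 1 = horizontal

w = rng.normal(scale=0.1, size=9)   # connection strengths
b = 0.0
lr = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Train on noisy copies of the prototypes ("static-filled pictures").
for _ in range(2000):
    i = rng.integers(2)
    x = prototypes[i] + rng.normal(scale=0.2, size=9)  # add static
    y = sigmoid(w @ x + b)
    err = targets[i] - y
    w += lr * err * y * (1 - y) * x   # delta rule update
    b += lr * err * y * (1 - y)

# Test: a badly corrupted vertical bar is still classified correctly.
noisy = vertical + rng.normal(scale=0.4, size=9)
print("output for noisy vertical bar:", sigmoid(w @ noisy + b))  # near 0
```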
Interestingly, neural nets seem to be good at the same things we are good at, and to struggle where we struggle. Symbolic AI is very good at producing machines that play grandmaster-level chess, deduce logic theorems, and compute complex mathematical functions. But it has enormous difficulty with things like processing a visual scene (discussed in a later chapter), dealing with noisy or imperfect data, and adapting to change. Neural nets are almost the exact reverse: their strength lies in the complex, fault-tolerant, parallel processing involved in vision, and their weaknesses lie in formal reasoning and rule-following. Although humans are capable of both forms of intellectual functioning, we are generally thought to possess exceptional pattern-recognition ability, while the limited capacity of human information-processing systems often makes us less than perfect at tasks requiring abstract reasoning and logic.
Critics charge that a neural net's inability to learn something like logic, which has distinct and unbreakable rules, proves that neural nets cannot explain how the mind works. Neural net advocates have countered that a large part of the problem is that abstract rule-following requires many more nodes than current artificial neural nets implement. Some attempts are now being made to produce larger networks, but the computational load increases dramatically as nodes are added, making very large networks difficult to build and train. Another set of critics charges that neural nets are too simplistic to be accurate models of human brain function. While artificial neural networks do have some neuron-like attributes (connection strengths, inhibition and excitation, etc.), they overlook many other factors that may be significant to the brain's functioning. The nervous system uses many different neurotransmitters, for instance, and artificial neural nets do not account for those differences. Different neurons have different conduction velocities, different energy supplies, even different spatial locations, any of which may be significant. Moreover, brains do not start as a jumbled, randomized set of connection strengths; there is a great deal of organization present even during fetal development. Any or all of these factors can be seen as essential to the functioning of the brain, and without them the artificial models may end up oversimplified.
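The scaling problem mentioned above can be made concrete with a back-of-the-envelope calculation (the figures are illustrative only): in a fully connected network, the number of connections grows with the product of neighboring layer sizes, so the training cost climbs far faster than the node count.

```python
# Rough sketch of why computational load "increases dramatically as nodes
# are added": weights grow with the product of layer sizes, so multiplying
# every layer by 8 multiplies the connections (and the work) by about 64.
def fully_connected_weights(layer_sizes):
    """Count the weights between successive fully connected layers."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

for scale in (1, 2, 4, 8):
    sizes = [100 * scale, 50 * scale, 10 * scale]
    print(sizes, "->", fully_connected_weights(sizes), "weights")
# [100, 50, 10]  ->   5500 weights
# [800, 400, 80] -> 352000 weights: 64x the connections for 8x the nodes
```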
One of the fundamental objections raised against back-propagation style networks like the ones discussed here is that humans seem to learn even in the absence of an explicit 'teacher' who corrects our outputs by supplying the desired response. For neural networks to succeed as a model of cognition, they must offer a more biologically (or psychologically) plausible simulation of learning. In fact, research is being conducted on a new type of neural net, known as an 'unsupervised neural net', which appears to learn successfully in the absence of an external teacher.
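The contrast between the two learning styles can be sketched in a few lines. The supervised update below requires a teacher-supplied target on every trial; the unsupervised update uses only the input and the unit's own output. We use Oja's Hebbian rule as the unsupervised example; this is our choice of illustration, as the text does not name a particular algorithm.

```python
# Supervised vs. unsupervised learning in miniature (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)

# Unlabeled data with one dominant direction of variation (assumed for demo).
direction = np.array([2.0, 1.0, 0.0, 0.0]) / np.sqrt(5.0)
data = np.outer(rng.normal(size=500), direction) \
       + rng.normal(scale=0.2, size=(500, 4))

w_sup = rng.normal(scale=0.1, size=4)  # supervised weights
w_uns = rng.normal(scale=0.1, size=4)  # unsupervised weights
lr = 0.02

for _ in range(20):                    # a few passes over the data
    for xi in data:
        # Supervised (back-propagation style): needs a teacher's target t.
        t = float(xi @ direction > 0)  # stand-in for an external teacher
        y = w_sup @ xi
        w_sup += lr * (t - y) * xi     # delta rule: push output toward target

        # Unsupervised (Oja's Hebbian rule): no target anywhere; the weights
        # drift toward the input's direction of greatest variance.
        y = w_uns @ xi
        w_uns += lr * y * (xi - y * w_uns)

print("unsupervised weights:", np.round(w_uns, 2))
print("dominant direction:  ", np.round(direction, 2))  # match up to sign
```

Note that the unsupervised unit discovers the structure of its inputs with no corrective feedback at all, which is the sense in which such networks learn without an external teacher.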