In this paper I have discussed a number of different approaches to modelling what goes on when
people acquire words in a second language. A very large part of this work is concerned with trying to
provide detailed descriptions of how individual words get integrated into an L2 lexicon. Ideally, this
work would provide a complete account of how words move from wholly unfamiliar sequences
of letters or sounds to become functional units in an effective lexicon. Although this seems like a very
laudable aim, I have argued here that there is a serious problem with the type of research that these
aims generate. Such aims seem to force us to focus more and more on ever finer details of lexical
knowledge, at the expense of a deeper understanding of the global features of lexical competence.
It seems to me that the best future for vocabulary research does not lie in pursuing detail at this
level. What we really need is not so much a more detailed understanding of words, but rather a much
deeper understanding of lexicons. The area we work in seems to be one where the whole is
considerably more interesting than the sum of its parts. The problem with some current models, I
believe, is that they are in danger of losing sight of the wood by concentrating too hard on the
individual trees.