Content validity for IELTS is regarded, by Bachman et al. (1995) at least, as being high. This opinion is mirrored by Weir (1990: 7-15), who describes IELTS as a type of communicative test because real-life tasks are presented to the candidates. This opinion, however, dates back to 1990, which possibly reduces its value, as the format of IELTS has been revised and updated since then. Nevertheless, Farhady (2005) found that, in the listening module at least, candidates taking IELTS preferred being tested on real-life contexts, which again suggests good content validity for the test as a whole.
Initial research into the test as we know it today, the IELTS Impact Study (IIS), also leans towards high content validity. The study was conducted by Hawkey et al. (2001) through questionnaires sent to institutions both teaching and testing IELTS, commissioned by Cambridge ESOL and reported in ‘Research Notes’ (2004). Teachers and candidates alike thought that
the content was relevant to target language activities, but some felt that the writing and some reading tasks were perhaps too general. Whilst generating and collating information on content validity is deemed useful, it is not necessarily a sufficient way of validating a test (O’Sullivan et al. 2002: 38), especially when the evidence presented is researched by those responsible for the construction and distribution of the test. Moreover, such self-generated evidence is clearly not an accurate or objective indicator of face validity.