Abstract
This study was originally constructed to examine the effect of content-area reading on ESL writing proficiency. The experiment was restructured and extended because composition topics proved to be confounding factors. Although the results raise several issues, the most significant are 1) the influence of topic on the acquisition of ESL composition skills and 2) the influence of topic on the cognitive task of demonstrating ESL writing proficiency. More specifically, an information-processing explanation is offered for some of the confounding factors that the topic variable introduced into the design.
Composition courses based on the connection between reading and writing were first developed for native English writers. There are numerous textbooks which prepare native English-speaking students to write across disciplines by presenting topics on which students will read articles and then write compositions. The number of ESL (English as a Second Language) composition textbooks of this nature is comparatively small (Shih, 1986, pp. 635-36). Recently, however, the “reading/writing connection” has also become a buzz phrase in ESL composition pedagogy.
In a 1988 study of professors’ reactions to nonnative-speaker academic compositions, Santos found that university professors grade content deficiencies more harshly than errors of linguistic form, with which they are much more lenient. Santos concludes that composition instruction should place greater emphasis on content.
Shih (1986) discusses five approaches to instructing students in content-based writing. The present study examines the approach used in what Shih has termed “content-based academic writing courses” (pp. 635-37), in which students read sets of passages that relate to the topic areas of their writing assignments. Prior to the reading and writing, study questions for reading and discussion are introduced to stimulate the students’ close examination of the topic. These prereading questions help the reader to build appropriate schemata (Taglieber, Johnson, and Yarbrough, 1988, p. 466). According to Anderson, Reynolds, Schallert, and Goetz, “Every act of comprehension involves one’s knowledge of the world as well” (Carrell & Eisterhold, 1983, p. 553). In other words, these questions help to prepare students to comprehend the new information from the reading passages, and to utilize what they already know, both when they read and when they write.
Zamel (1987) stresses that a process approach to writing gives learners an optimal opportunity to develop their ideas by allowing them to put concerns about linguistic form aside until the editing stage of composing, the last step before the final draft. Reading is one way of generating ideas in a process approach to writing. Shih says, “Empirical data are needed to support the belief held by many that content-based instruction can help ESL students to become more confident and competent when they tackle academic writing” (p. 642). Thus, a preliminary question is to what extent reading in the content area contributes to the quality of the composition. In the present study, reading is defined as outside-the-classroom input in the content area. Other forms of receiving outside-the-classroom information were not examined. Therefore, it is not clear whether other types of outside input, e.g., lectures on the topic area, would have the same effect that reading has. The question, then, is this: can students improve their ability to write effective compositions without outside-the-classroom input (in this case, without reading) to the same degree that they can improve with this input?
This study was constructed to examine the quality of writing both with content-area readings and without content-area readings. However, it became apparent that another question must also be asked in conjunction with the question of whether or not reading contributes to the quality of writing students produce.
Carson, Carrell, Silberstein, Kroll, and Kuehn (1990) reported on a study they conducted using Japanese and Chinese subjects. They wanted to know if the proficiency levels of reading and writing in the subjects’ L1 would predict the reading and writing proficiency levels in the subjects’ L2. The following was noted:
In addition to the weak relationship noted in the L1-L2 writing correlations for both groups, the multiple regression analyses indicate that although reading scores predict reading scores in either language for both groups, writing never appears as a variable that predicts writing. (p. 260)
Although this study does not relate to any connection between reading in the content area and the quality of writing, it does point to a potentially confounding factor that must be isolated as a variable in the experimental design. In the Carson et al. study, the writing topics differed in the L1 writing task and the L2 writing task. Could the difference in topics explain the lack of connection between writing scores in the two languages? Moreover, by extension, within the same language, if the topics differ, will the quality of one piece of writing predict the quality of another? If groups of students start out at approximately the same level in writing ability, might the topics that are used throughout the course determine how much they will improve?
Witte (1988) reported that when native speakers were asked to write compositions in response to various prompts (topics), it became obvious that not all prompts produced similar results across groups, even though the prompts had been devised to be topics with which all students would be familiar.
The topics to which students are asked to respond in composition would appear to make a difference in the quality of writing that students produce; however, research in the area of topic is sorely lacking. As Hoetker (1982) says, “there is little hard evidence anywhere that students will write any worse (or any better) on topics such as those I have just criticized than on the most thoughtfully considered and carefully edited topics” (p. 14).
The research that exists is not only far from conclusive, but often produces conflicting results. Hoetker cites White in a discussion of the extreme differences in quality that were found in the compositions produced by students taking the California State University and College Equivalency Examination between the years 1973 and 1974. White concluded that the 1974 topic, which produced lower scores, was more cognitively demanding, i.e., required abstract reasoning, whereas the 1973 topic relied more on personal experience, and that this difference accounted for the extreme difference in scores. Pytlik (1986), however, reports on a study conducted by Jones, whose findings showed that students performed better with textbook topics than with topics of their own (p. 7). Moreover, Greenberg, expecting that topics that asked students for their personal experience would produce better compositions, was surprised to find that students’ writing performance was not significantly affected by the type of essay question to which they responded (Pytlik, 1986, pp. 7-8).
O’Donnell (1984) cites Hoetker and his colleagues in a discussion of topics that are offensive to students. She says that there are three subjects that students found “difficult, uninteresting, or inappropriate, and that required special knowledge. . . (1) neglect of the urban environment, (2) favorite gadgets, and (3) dream homes” (p. 246). However, she says that there are also topics that produce favorable results; she cites Brossell and Ash who found that students wrote “more organized, more sharply focused, and more fluent” essays on the topic of violence in the schools (p. 246).
Some researchers have questioned the use of topic options for composition exams. Hoetker states that “the strongest argument for options is that we know so little about topics that it is presumptuous for us to say we can know which topic will elicit student’s [sic] best performance” (p. 18). However, he cites a study conducted by DuCette and Wolk, who found that when students were given more topic options, they performed less well than when they were given a single topic. Another question that arises in regard to providing students with options is whether or not students are able to judge which topic will show their best writing. Hoetker cites Meyer, who argues that students do not have the ability to select topics that elicit their best performance (p. 17). In considering whether to provide topic options in an essay exam, one must also weigh how much time students will give to choosing the topic, rather than to actually writing. Pytlik reports Jones’s conclusion that students might perform better when provided with a few, rather than with many, options.
These studies on topic and composition proficiency have been conducted entirely with native speakers of English, except for the Carson et al. study, which does not actually examine the role topic plays. There is little research that provides much insight into the ways topics influence native-speaker writing, and the picture is even bleaker when it comes to nonnative-speaker writing. As Hoetker (cited in O’Donnell, 1984) states:
[W]e know little about topic variables because research attention has been devoted almost entirely to issues of rater reliability, ignoring for the part [sic] the issue of validity as well as the other two sources of error in an essay examination–the topics and the writer. (p. 4)
The question to be examined by this research is whether reading in the content area contributes more to students’ quality of writing, or whether topic contributes more.
The results of this study suggest that the topic assigned, or chosen by the students, plays a significant role in the quality of writing students are able to produce.