Debates over many of life’s developmental forces have raged for centuries. Many learned and distinguished scientists, theorists, and practitioners have searched for clues to how we grow and why we do what we do. They have formulated theories ranging from the practical to the absurd, each with its legions of followers. One such prickly debate concerns the primary influence on intelligence, or, as it is more popularly portrayed, I.Q.
This long-standing argument centers on two primary conditions, and on which of the two plays the more significant role in our intelligence: nature, expressed in the hereditary factors we are born with, and nurture, expressed in the social and environmental factors we are exposed to. In either case, we must first look at how intelligence is measured and quantified before any discourse on the factors that influence it can take place.
Attempts have been made throughout history to measure intelligence. The ancient Chinese used a form of testing to select the most suitable candidates for civil service positions. The search for clues into the human intellect attracted scholars such as Plato, who believed that “knowledge is not given by the senses but acquired through them as reason organizes and makes sense out of that which is perceived (Zusne, p. 6).” Likewise, William Duff, an ordained Scottish minister, in the late 18th century “investigated the creative and cognitive capabilities of genius” and speculated that “imagination was an important part of intelligence (Smith, Elder & Co).” These were but a few of the many who influenced the development of intelligence testing.
During the mid-19th century, the formation of major schools of psychology in Europe began to catalyze the development of more rigorous techniques for measuring intelligence. A new breed of scientists had already begun to use new forms of information gathering; among them was Francis Galton, who pioneered the use of the survey and questionnaire, and who believed that statistical analysis could be applied to the investigation of behavioral and mental phenomena.
In response to a problem posed in 1904, Alfred Binet, a French psychologist trained in law, helped develop the first successful intelligence test. The French government wanted to identify students in need of alternative education. Binet, who served on the commission investigating this issue, and a colleague, Theodore Simon, began testing schoolchildren to identify those who would not benefit from the standard curriculum of the day. This test, and its subsequent revisions up to Binet’s death in 1911, introduced one major innovation that remains standard in intelligence testing to this day: the concept of mental age. The Binet-Simon Intelligence Scale was published in 1905 to wide acclaim, but the 1908 revision was the first in which the concept of mental age was proposed. Binet and Simon theorized that a child of eight should test to a mental age of eight: ‘A child testing to a mental age of six was considered retarded and a child testing the mental age of 10 was considered advanced (Fancher, 1985).’ In 1912, William Stern, a German psychologist, devised a formula that put the Binet-Simon concept into practical terms: he divided the subject’s tested mental age by his or her chronological age, and the Intelligence Quotient was born.
Revisions made by Lewis M. Terman, a professor at Stanford University, standardized and expanded the test for American subjects and added a new element to the Intelligence Quotient formula: Terman multiplied Stern’s quotient by 100 to remove the decimal and abbreviated the resulting measure to I.Q. His revisions were so widely accepted that the test soon became known as the Stanford-Binet Intelligence Scale.
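The Stern-Terman calculation described above is simple arithmetic, and can be sketched as follows (the function name and the sample figures are illustrative, not from the original tests):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's mental quotient, scaled by Terman's factor of 100."""
    return mental_age / chronological_age * 100

# A ten-year-old who tests to a mental age of twelve:
print(ratio_iq(12, 10))  # 120.0
```

Multiplying by 100 simply turns Stern’s decimal quotient (here, 1.2) into the whole-number scale that became the familiar I.Q.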
David Wechsler, a psychologist working at the US Army’s Camp Logan, used the Stanford-Binet test on military recruits to better assess their abilities for job placement within the military. He realized, however, that to fully explore the mental capabilities of his subjects, the tests administered would have to be broader and more valid. To arrive at a meaningful representation of adult intelligence, Wechsler introduced his own test, the WAIS (Wechsler Adult Intelligence Scale). The test, consisting of ten or eleven verbal and performance subtests, was fashioned to be a more accurate measure of ‘real-life’ situations. It led Wechsler to formulate the Deviation Quotient, an IQ derived by “considering the individual’s mental ability in comparison with the average individual of his or her own age (David Wechsler, 2001)”.
These tests, and the many others devised and in use, lead to a fundamental question: which has more influence on intelligence, environment or biology? This is not an easy question to answer. Various tests have shown a socio-economic link to IQ: people who are better off financially score 17 points higher on IQ tests than those who are financially disadvantaged.
