This basic question has been debated at settings ranging from scientific conferences to dinner tables for many decades. The media also has covered it in forms ranging from documentaries to the popular comedy movie “Trading Places” (1983). Yet, despite so much attention and so much research devoted to resolving this timeless debate, the arguments continue to this day.
A lack of a clear answer, however, by no means implies that we have not made major advances in our understanding. This short review takes a look at the progression of this seemingly eternal question by categorizing the development of the nature versus nurture question into three main stages. While such a partitioning is somewhat oversimplified with regard to what the various positions on this issue have been at different times, it does illustrate the way that the debate has gradually evolved.
Part 1: Nature versus nurture
The origins of the nature versus nurture debate date back far beyond the past 50 years. The ancient Greek physician Galen postulated that personality traits were driven by the relative concentrations of four bodily fluids or “humours.” In 1874, Sir Francis Galton published “English Men of Science: Their Nature and Nurture,” in which he advanced his ideas about the dominance of hereditary factors in intelligence and character at the beginning of the eugenics movement.1 These ideas were in stark opposition to the perspective of earlier scholars, such as the philosopher John Locke, who popularized the theory that children are born a “blank slate” and from there develop their traits and intellectual abilities through their environment and experiences.
Fifty years ago, some of the same arguments were being heard and supported by early research. The behaviorism movement, started by figures such as John Watson, PhD, was a prominent force at that time, with notable psychologists such as B.F. Skinner, PhD, demonstrating in many experiments with both animals and people the importance of rewards and punishments in shaping behavior. The other primary school of thought in the mid-1960s was psychoanalysis, which was based on the ideas of Sigmund Freud, MD. Psychoanalysis maintained that the way unconscious sexual and aggressive drives were channeled through various defense mechanisms was of primary importance to the understanding of both psychopathology and typical human behavior.
While these two perspectives were often very much in opposition to each other, they shared the view that the environment and a person’s individual experiences – i.e., nurture – were the prevailing forces in development. In the background, more biologically oriented research and clinical work was slowly beginning to work its way into the field, especially at certain institutions, such as Washington University in St. Louis. Several medications of various types were then available, including chlorpromazine, imipramine, and diazepam.
Overall, however, it is probably fair to say that, 50 years ago, it was the nurture perspective that held the most sway since psychodynamic treatment and behaviorist research dominated, while the emerging fields of genetics and neuroscience were only beginning to take hold.
Part 2: Nature and nurture
From the 1970s to the end of the 20th century, a noticeable shift occurred as knowledge of the brain and genetics – supported by remarkable advances in research techniques – began to swing the pendulum back toward an increased appreciation of nature as a critical influence on a person’s thoughts, feelings, and behavior.
Researchers Stella Chess, MD, and Alexander Thomas, MD, for example, conducted the New York Longitudinal Study, in which they closely observed a group of young children over many years. Their studies compelled them to argue for the significance of more innate temperament traits as critical aspects of a youth’s overall adjustment.2 The Human Genome Project was launched in 1990, and the entire decade was designated as the “Decade of the Brain.” During this time, neuroscience research exploded as techniques, such as MRI and PET, allowed scientists to view the living brain like never before.
The type of research investigation that perhaps was most directly relevant to the nature-nurture debate and that became quite popular during this time was the twin study. By comparing the relative similarities among monozygotic and dizygotic twins raised in the same household, it became possible to calculate directly the degree to which a variable of interest (intelligence, height, aggressive behavior) could be attributed to genetic versus environmental factors. When it came to behavioral variables, a repeated finding that emerged was that both genetic and environmental influences are important, often at close to a 50/50 split in terms of magnitude.3,4 These studies were complemented by molecular genetic studies, which were beginning to be able to identify specific genes that conveyed usually small amounts of risk for a wide range of psychiatric disorders.
Yet, while twin studies and many other lines of research made it increasingly difficult to argue for the overwhelming supremacy of either nature or nurture, the two domains generally were treated as being independent of each other. Specific traits or symptoms in an individual often were thought of as being the result of either psychological (nurture) or biological (nature) causes. Terms such as “endogenous depression,” for example, were used to distinguish those whose symptoms were thought generally to be out of reach for “psychological” treatments, such as psychotherapy. Looking back, it might be fair to say that one of the principal flaws in this perspective was the commonly held belief that, if something was brain based or biological, then it therefore implied a kind of automatic “wiring” of the brain that was generally driven by genes and beyond the influence of environmental factors.