THE COMPLEXITY OF RELIGION—its intricate role in human history, the strong mix of emotions it raises in participants’ hearts—all seem to some degree explicable in terms of the forces that shaped the emergence of religious behavior during the dawn of human evolution. It is these shaping forces that must now be explored. But before examining religious behavior itself, it is pertinent to consider an essential human faculty that is a pillar of religion, though also separable from it, and that is the moral instinct. At the social level, religion has long been seen as essential to morality and probably still is. For even though individuals can behave morally without religion, most atheists and agnostics take good care to observe the moral standards of their community, which even in highly secular countries are influenced by religion.
Religion and morality share a common feature that reflects their origins as evolved behaviors: both are rooted in the emotions. Religious knowledge is not like knowing the day of the week; it is something a person feels and is deeply committed to. Moral intuitions usually appear in the mind as strong convictions, not as neutral facts. Religious and moral beliefs can be discussed in a rational way, but both have emotion-laden components that are shaped in regions of the brain to which the conscious mind does not have access. Natural selection has tagged them with a compelling quality of which mere facts are free.
Morality is older than religion—its roots can be seen in monkeys and apes—and religious behavior was engrafted on top of it in the human lineage alone. Understanding how the moral instincts evolved makes it easier to see that religious behavior too has an evolutionary origin.
2
THE MORAL INSTINCT
Man was destined for society. His morality, therefore, was to be formed to this object. He was endowed with a sense of right & wrong merely relative to this. This sense is as much a part of his nature, as the sense of hearing, seeing, feeling. . . . State a moral case to a ploughman & a professor. The former will decide it as well, & often better than the latter, because he has not been led astray by artificial rules.
THOMAS JEFFERSON10
Everyone knows the difference between right and wrong. But where does that sure knowledge come from? From reason, as some philosophers have taught? Or from divine revelation, as theologians say?
In the last few years a startling new idea has been introduced to the age-old debate about the nature of morality. Biologists have come to realize that social animals, in interacting with other members of their community, have developed rules for restraining their self-interest. It is these rules of self-restraint, which are likely to have a genetic basis, that make up the social fabric of a baboon troop or band of chimpanzees.
No one is imputing morality to animals, but observers have found that monkeys and apes show many behaviors, such as empathy and a sense of reciprocity, that could be building blocks of the moral sense that is so evident in people. Humans would have inherited these building blocks from their apelike ancestors and developed them into moral instincts.
Biologists thus began to see that they might be able to construct a new explanation of morality: moral behavior does not originate from outside the human mind or even from conscious reasoning, the sources favored by theologians and philosophers, but rather has been wired into the genetic circuitry of the mind by evolution.
The clearest statement of the new program came from the distinguished biologist Edward O. Wilson. “The time has come,” he wrote in his book Sociobiology, “for ethics to be removed temporarily from the hands of the philosophers and biologicized.”11 A few years later he confidently predicted that “Science for its part will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiments.”12
Both philosophers and psychologists took some time to respond to Wilson’s challenge, but a highly interesting investigation is now being undertaken by both groups, working partly in collaboration.
The new view of morality, that it is at least partly shaped by evolution, has not been arrived at easily. Philosophers long focused on reason as the basis of morality. David Hume, the eighteenth-century Scottish philosopher, defied this tradition by arguing strongly that morals spring not from conscious reasoning but from the emotions. “Morals excite passions, and produce or prevent actions. Reason of itself is utterly impotent in this particular. The rules of morality, therefore, are not conclusions of our reason,” Hume wrote in A Treatise of Human Nature.
But Hume’s suggestion only made philosophers keener to found morality in reason. The German philosopher Immanuel Kant sought to base morality outside of nature, in a world of pure reason and of moral imperatives that met the test of being fit to be universal laws. This proposal, Wilson wrote acidly, made no sense at all: “Sometimes a concept is baffling not because it is profound but because it is wrong. This idea does not accord, we know now, with the evidence of how the brain works.”13
Psychologists too, however, were long committed to the philosophers’ program of deriving morality exclusively from reason. The Swiss psychologist Jean Piaget, following Kant’s ideas, argued that children learned ideas about morality as they passed through various stages of mental development. Lawrence Kohlberg, an American psychologist, built on Piaget’s ideas, arguing that children went through six stages of moral reasoning. But his analysis was based on interviewing children and having them describe their moral reasoning, so reason was all he could perceive.
Even primatologists, who would eventually contribute to the new view of morality, were long muzzled: animal behaviorists, under the baleful influence of the psychologist B. F. Skinner, accused of anthropomorphism anyone who attributed emotions like empathy to animals.
With everyone on the wrong track, and Hume’s insight neglected, the study of morality was at something of a stalemate. “It is an astonishing circumstance that the study of ethics has advanced so little since the nineteenth century,” Wilson wrote in 1998, dismissing a century’s work.14
A development that helped break the logjam was an article in 2001 by Jonathan Haidt, a psychologist at the University of Virginia. Haidt had taken an interest in the emotion of disgust and was intrigued by a phenomenon he called moral dumbfounding. He would read people stories about a family that cooked and ate its pet dog after it had been run over, or a woman who cleaned a toilet with the national flag. His subjects were duly disgusted and firmly insisted these actions were wrong. But several were unable to explain why they held this opinion, given that no one in the stories was harmed.
It seemed to Haidt that if people could not explain their moral judgments, then evidently they were not reasoning their way toward them.
The observation prompted him to develop a new perspective on how people make moral decisions. Drawing on his own research and that of others, he argued that people make two kinds of moral decision. One, which he called moral intuition, comes from the unconscious mind and is made instantly. The other, moral reasoning, is a slower, after-the-fact process made by the conscious mind. “Moral judgments appear in consciousness automatically and effortlessly as the result of moral intuitions.... Moral reasoning is an effortful process, engaged in after a moral judgment is made, in which a person searches for arguments that will support an already made judgment,” he wrote.15
The moral reasoning decision, which had received the almost exclusive attention of philosophers and psychologists for centuries, is just a façade, in Haidt’s view, and it is mostly intended to impress others that a person has made the right decision. People don’t in fact know how they make their morally intuitive decisions, because these are formed in the unconscious mind and are inaccessible to them. So when asked why they made a certain decision, they will review a menu of logically possible explanations, choose the one that seems closest to the facts, and argue like a lawyer that that was their reason. This, he points out, is why moral arguments are often so bitter and indecisive. Each party makes lawyerlike rebuttals of the opponent’s arguments in the hope of changing his mind. But since the opponent arrived at his position intuitively, not for his stated reasons, he is of course not persuaded. The hope of changing his mind by reasoning is as futile as trying to make a dog happy by wagging its tail for it.
Haidt then turned to exploring how the moral intuition process works. He argued, based on a range of psychological experiments, that the intuitive process is partly genetic, built in by evolution, and partly shaped by culture.
The genetic component of the process probably shapes specialized neural circuits or modules in the brain. Some of these may prompt universal moral behaviors such as empathy and reciprocity. Others probably predispose people to learn the particular moral values of their society at an appropriate age.
This learning process begins early in life. By the age of two, writes the psychologist Jerome Kagan, children have developed a mental list of prohibited actions. By three, they apply the concepts of good and bad to things and actions, including their own behavior. Between the ages of three and six, they show feelings of guilt at having violated a standard. They also learn to distinguish between absolute standards and mere conventions. “As children grow, they follow a universal sequence of stages in the development of morality,” Kagan writes.16
That children everywhere follow the same sequence of stages suggests that a genetic program is unfolding to guide the learning of morality, including the development of what Haidt calls moral intuition.
Such a program would resemble those known to shape other important brain functions. The brain does much of its maturing after birth, forming connections and refining its neural circuitry when the infant encounters relevant experience from the outside world. Vision is one faculty that matures at a critical age; language is another, and moral intuition is a third.
Damage to a special region of the prefrontal cortex, its ventromedial area located just behind the bridge of the nose, is associated with poor judgment and antisocial behavior. Neural circuitry in the brain’s prefrontal cortex is evidently associated with the cultural shaping of moral intuitions.
The existence of special neural circuitry in the brain dedicated to moral decisions is further evidence that morality is an evolved faculty with a genetic basis. In the well-known case of Phineas Gage, an iron rod was shot through his frontal lobe in a railroad construction accident in 1848. Gage, astonishingly, survived the accident, but his personality was changed. Previously hardworking and responsible, he was now “fitful, irreverent, indulging at times in the grossest profanity (which was not previously his custom), manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires,” according to a physician who examined him 20 years later.17
More specific damage to moral sensibilities is seen in patients with Huntington’s disease. Strangely, they become very utilitarian, making moral judgments by weighing only the consequences and ignoring strong social taboos. Consider a situation where a man’s wife has just died. Her body is there on the bed, and he decides to have intercourse with her one last time. Is that OK? Most people will say absolutely not. Huntington’s patients see no problem. Their sense of disgust, an emotion that intensifies certain moral judgments, seems strangely relaxed: if shown a piece of chocolate molded in the form of a dog turd, most people will lose any appetite for it, but many Huntington’s patients will happily wolf it down.18
There seem to be dedicated neural circuits for morality and for disgust, since specific damage to the brain can impair either behavior. But these behaviors, though at their core very similar in every society, are heavily shaped by culture, and societies vary widely in the actions they consider morally permissible. In Western societies, for instance, killing an infant is generally regarded as murder. But among the !Kung San, a hunting and gathering people of the Kalahari desert in southern Africa, it is the mother’s moral duty to kill after birth any infant that is deformed, and one of each pair of twins.19 A !Kung mother must carry her infant wherever she goes, and does so for some 5,000 miles before the child learns to walk. Since she must also carry food, water and possessions, she cannot carry twins. So the duty to kill a twin, and to avoid investment in a defective child with limited prospects of survival, can be seen not as any moral deficiency on the !Kung’s part but rather as a shaping of human moral intuitions to their particular circumstances.
Standards of sexual morality vary widely, particularly in regions like aboriginal Australia and neighboring Melanesia where conception is not regarded as dependent on the father’s sperm and men are therefore less jealous of sexual access to their partners. Thus at kayasa, the festival gatherings held by people of the Trobriand Islands off the eastern end of Papua New Guinea, the sportive element in games was taken somewhat further than is customary in Western countries. “At a tug-of-war kayasa in the south,” reports the anthropologist Bronislaw Malinowski, “men and women would always be on opposite sides. The winning side would ceremonially deride the vanquished with the typical ululating scream (katugogova), and then assail their prostrate opponents, and the sexual act would be carried out in public. On one occasion when I discussed this matter with a mixed crowd from the north and the south, both sides categorically confirmed the correctness of this statement.”20 As Rudyard Kipling had occasion to note, “The wildest dreams of Kew are the facts of Khatmandu, And the crimes of Clapham chaste in Martaban.”
But the commonalities in morality are generally more striking than the variations. The fundamental moral principle of “do as you would be done by” is found in all societies, as are prohibitions against murder, theft and incest. Many of these universal moral principles are likely to be shaped by innate neural circuits, while the variations spring from moral learning systems that are more guided by cultural traditions and a society’s particular ecological circumstances.
To return to moral intuition and moral reasoning, the two basic psychological processes that underlie morality: why has evolution so generously equipped us with two processes, when one might seem plenty? The most plausible answer is that the two processes emerged at different stages of human evolution.
Moral intuition is the more ancient system, presumably put in place before humans gained either the power of reasoning or the faculty of language. After the evolution of language, when people needed to explain and justify their actions to others, moral reasoning would have developed. But evolution would have had no compelling rationale for handing over control of individual behavior to this novel faculty, at the expense of the moral intuition that had safeguarded human societies for so long. So the arrangement that evolved was that both systems were retained. The moral intuitive system continues to work beneath the level of consciousness, delivering its snap judgments to the conscious mind. The moral reasoning system then takes over, working like a lawyer or public relations agent to rationalize the moral input it has been given and to justify an individual’s actions to himself and his society.
Moral Intuition and Trolley Problems
Though the moral intuitive system is inaccessible to the conscious mind, some intriguing traces of its presence can be seen in the subtle moral exercises known as trolley problems. First devised by the moral philosopher Philippa Foot, trolley problems have been developed by psychologists interested in probing the invisible moral rules of the intuitive system. The problems are entirely artificial, which avoids real-life complications and purifies the moral decision to be made.
In the typical trolley problem, a trolley or train is barreling down on five people who are rashly walking on the tracks, oblivious to the danger until the last moment and unable to escape because the track runs through an embankment with steep sides. An individual standing between the train and the five people has the power to save them, but only after making a fraught moral decision.
So consider first the case of Denise, who is standing by a switch that can divert the train onto a side-track. However, a hiker is walking on the side-track and he too cannot escape the train. Would Denise be right to pull the switch and divert the train, saving five people but killing one?
Ethicists may debate the correct answer, but psychologists are more interested in the practical matter of what answer most people in fact give. Marc Hauser, a psychologist at Harvard University, posed the question on his Web site. Tallying the results after several thousand people had taken the test, he found that 90 percent said it was OK for Denise to pull the switch.
Subjects were next asked to consider Fred’s dilemma. Fred is standing on a bridge above the railroad tracks. He can save the five people on the track ahead by throwing a heavy object down into the train’s path and slowing it. Just such an object is standing beside him. It’s a thick-set man. Can Fred push the man into the train’s path, killing him to save the five?
Only 10 percent of the respondents to Hauser’s survey thought it was OK for Fred to kill the man.
Then came the interesting question: Why is Denise’s action OK but Fred’s not, when the practical outcome in both cases is identical—one life lost to save five?
Some 70 percent of the subjects were unable to give any plausible reason for the distinction. “The fact that most people have no idea why they draw a distinction between these cases reinforces the point that people tend to make moral judgments without being aware of the underlying principles,” Hauser writes.21 He also notes the problem posed by this result for those who think that all morality is taught. For if people cannot articulate the reasons for their moral decisions, how can they teach them?