

  It seems to me that we need not have any illusions about a causal agent living within the human mind to condemn such a mind as unethical, negligent, or even evil, and therefore liable to occasion further harm. What we condemn in another person is the intention to do harm —and thus any condition or circumstance (e.g., accident, mental illness, youth) that makes it unlikely that a person could harbor such an intention would mitigate guilt, without any recourse to notions of free will. Likewise, degrees of guilt could be judged, as they are now, by reference to the facts of the case: the personality of the accused, his prior offenses, his patterns of association with others, his use of intoxicants, his confessed intentions with regard to the victim, etc. If a person's actions seem to have been entirely out of character, this will influence our sense of the risk he now poses to others. If the accused appears unrepentant and anxious to kill again, we need entertain no notions of free will to consider him a danger to society.

  Of course, we hold one another accountable for more than those actions that we consciously plan, because most voluntary behavior comes about without explicit planning. 108 But why is the conscious decision to do another person harm particularly blameworthy? Because consciousness is, among other things, the context in which our intentions become completely available to us. What we do subsequent to conscious planning tends to most fully reflect the global properties of our minds—our beliefs, desires, goals, prejudices, etc. If, after weeks of deliberation, library research, and debate with your friends, you still decide to kill the king—well, then killing the king really reflects the sort of person you are. Consequently, it makes sense for the rest of society to worry about you.

  While viewing human beings as forces of nature does not prevent us from thinking in terms of moral responsibility, it does call the logic of retribution into question. Clearly, we need to build prisons for people who are intent upon harming others. But if we could incarcerate earthquakes and hurricanes for their crimes, we would build prisons for them as well. 109 The men and women on death row have some combination of bad genes, bad parents, bad ideas, and bad luck—which of these quantities, exactly, were they responsible for? No human being stands as author to his own genes or his upbringing, and yet we have every reason to believe that these factors determine his character throughout life. Our system of justice should reflect our understanding that each of us could have been dealt a very different hand in life. In fact, it seems immoral not to recognize just how much luck is involved in morality itself.

  Consider what would happen if we discovered a cure for human evil. Imagine, for the sake of argument, that every relevant change in the human brain can be made cheaply, painlessly, and safely. The cure for psychopathy can be put directly into the food supply like vitamin D. Evil is now nothing more than a nutritional deficiency.

  If we imagine that a cure for evil exists, we can see that our retributive impulse is profoundly flawed. Consider, for instance, the prospect of withholding the cure for evil from a murderer as part of his punishment. Would this make any moral sense at all? What could it possibly mean to say that a person deserves to have this treatment withheld? What if the treatment had been available prior to the person's crime? Would he still be responsible for his actions? It seems far more likely that those who had been aware of his case would be indicted for negligence. Would it make any sense at all to deny surgery to the man in example 5 as a punishment if we knew the brain tumor was the proximate cause of his violence? Of course not. The urge for retribution, therefore, seems to depend upon our not seeing the underlying causes of human behavior.

  Despite our attachment to notions of free will, most of us know that disorders of the brain can trump the best intentions of the mind. This shift in understanding represents progress toward a deeper, more consistent, and more compassionate view of our common humanity—and we should note that this is progress away from religious metaphysics. It seems to me that few concepts have offered greater scope for human cruelty than the idea of an immortal soul that stands independent of all material influences, ranging from genes to economic systems.

  And yet one of the fears surrounding our progress in neuroscience is that this knowledge will dehumanize us. Could thinking about the mind as the product of the physical brain diminish our compassion for one another? While it is reasonable to ask this question, it seems to me that, on balance, soul/body dualism has been the enemy of compassion. For instance, the moral stigma that still surrounds disorders of mood and cognition seems largely the result of viewing the mind as distinct from the brain. When the pancreas fails to produce insulin, there is no shame in taking synthetic insulin to compensate for its lost function. Many people do not feel the same way about regulating mood with antidepressants (for reasons that appear quite distinct from any concern about potential side effects). If this bias has diminished in recent years, it has been because of an increased appreciation of the brain as a physical organ.

  However, the issue of retribution is a genuinely tricky one. In a fascinating article in The New Yorker, Jared Diamond recently wrote of the high price we often pay for leaving vengeance to the state. 110 He compares the experience of his friend Daniel, a New Guinea highlander, who avenged the death of a paternal uncle and felt exquisite relief, to the tragic experience of his late father-in-law, who had the opportunity to kill the man who murdered his family during the Holocaust but opted instead to turn him over to the police. After spending only a year in jail, the killer was released, and Diamond's father-in-law spent the last sixty years of his life "tormented by regret and guilt." While there is much to be said against the vendetta culture of the New Guinea Highlands, it is clear that the practice of taking vengeance answers to a common psychological need.

  We are deeply disposed to perceive people as the authors of their actions, to hold them responsible for the wrongs they do us, and to feel that these debts must be repaid. Often, the only compensation that seems appropriate requires that the perpetrator of a crime suffer or forfeit his life. It remains to be seen how the best system of justice would steward these impulses. Clearly, a full account of the causes of human behavior should undermine our natural response to injustice, at least to some degree. It seems doubtful, for instance, that Diamond's father-in-law would have suffered the same pangs of unrequited vengeance if his family had been trampled by an elephant or laid low by cholera. Similarly, we can expect that his regret would have been significantly eased if he had learned that his family's killer had lived a flawlessly moral life until a virus began ravaging his medial prefrontal cortex.

  It may be that a sham form of retribution could still be moral, if it led people to behave far better than they otherwise would. Whether it is useful to emphasize the punishment of certain criminals—rather than their containment or rehabilitation—is a question for social and psychological science. But it seems quite clear that a retributive impulse, based upon the idea that each person is the free author of his thoughts and actions, rests on a cognitive and emotional illusion—and perpetuates a moral one.

  It is generally argued that our sense of free will presents a compelling mystery: on the one hand, it is impossible to make sense of it in causal terms; on the other, there is a powerful subjective sense that we are the authors of our own actions. 111 However, I think that this mystery is itself a symptom of our confusion. It is not that free will is simply an illusion: our experience is not merely delivering a distorted view of reality; rather, we are mistaken about the nature of our experience. We do not feel as free as we think we feel. Our sense of our own freedom results from our not paying attention to what it is actually like to be what we are. The moment we do pay attention, we begin to see that free will is nowhere to be found, and our subjectivity is perfectly compatible with this truth. Thoughts and intentions simply arise in the mind. What else could they do? The truth about us is stranger than many suppose: The illusion of free will is itself an illusion.

  Chapter 3 BELIEF

  A candidate for the presidency of the United States once met a group of potential supporters at the home of a wealthy benefactor. After brief introductions, he spotted a bowl of potpourri on the table beside him. Mistaking it for a bowl of trail mix, he scooped up a fistful of this decorative debris—which consisted of tree bark, incense, flowers, pinecones, and other inedible bits of woodland—and delivered it greedily into his mouth.

  What our hero did next went unreported (suffice it to say that he did not become the next president of the United States). We can imagine the psychology of the scene, however: the candidate wide-eyed in ambush, caught between the look of horror on his host's face and the panic of his own tongue, having to quickly decide whether to swallow the vile material or disgorge it in full view of his audience. We can see the celebrities and movie producers feigning not to notice the great man's gaffe and taking a sudden interest in the walls, ceiling, and floorboards of the room. Some were surely less discreet. We can imagine their faces from the candidate's point of view: a pageant of ill-concealed emotion, ranging from amazement to schadenfreude.

  All such responses, their personal and social significance, and their moment-to-moment physiological effects, arise from mental capacities that are distinctly human: the recognition of another's intentions and state of mind, the representation of the self in both physical and social space, the impulse to save face (or to help others to save it), etc.

  While such mental states undoubtedly have analogs in the lives of other animals, we human beings experience them with a special poignancy. There may be many reasons for this, but one is clearly paramount: we alone, among all earth's creatures, possess the ability to think and communicate with complex language.

  The work of archeologists, paleoanthropologists, geneticists, and neuroscientists—not to mention the relative taciturnity of our primate cousins—suggests that human language is a very recent adaptation. 1 Our species diverged from its common ancestor with the chimpanzees only 6.3 million years ago. And it now seems that the split with chimps may have been less than decisive, as comparisons between the two genomes, focusing on the greater-than-expected similarity of our X chromosomes, reveal that our species diverged, interbred for a time, and then diverged for good. 2 Such rustic encounters notwithstanding, all human beings currently alive appear to have descended from a single population of hunter-gatherers that lived in Africa around 50,000 BCE. These were the first members of our species to exhibit the technical and social innovations made possible by language. 3

  Genetic evidence indicates that a band of perhaps 150 of these people left Africa and gradually populated the rest of the earth. Their migration would not have been without its hardships, however, as they were not alone: Homo neanderthalensis laid claim to Europe and the Middle East, and Homo erectus occupied Asia. Both were species of archaic humans that had developed along separate evolutionary paths after one or more prior migrations out of Africa. Both possessed large brains, fashioned stone tools similar to those of Homo sapiens, and were well armed. And yet over the next twenty thousand years, our ancestors gradually displaced, and may have physically eradicated, all rivals. 4 Given the larger brains and sturdier build of the Neanderthals, it seems reasonable to suppose that only our species had the advantage of fully symbolic, complex speech. 5

  While there is still controversy over the biological origins of human language, as well as over its likely precursors in the communicative behavior of other animals, 6 there is no question that syntactic language lies at the root of our ability to understand the universe, to communicate ideas, to cooperate with one another in complex societies, and to build (one hopes) a sustainable, global civilization. 7 But why has language made such a difference? How has the ability to speak (and to read and write of late) given modern humans a greater purchase on the world? What, after all, has been worth communicating these last 50,000 years? I hope it will not seem philistine of me to suggest that our ability to create fiction has not been the driving force here. The power of language surely results from the fact that it allows mere words to substitute for direct experience and mere thoughts to simulate possible states of the world. Utterances like, "I saw some very scary guys in front of that cave yesterday," would have come in quite handy 50,000 years ago. The brain's capacity to accept such propositions as true —as valid guides to behavior and emotion, as predictive of future outcomes, etc.—explains the transformative power of words. There is a common term we use for this type of acceptance; we call it "belief." 8

  What Is "Belief"?

  It is surprising that so little research has been done on belief, as few mental states exert so sweeping an influence over human life. While we often make a conventional distinction between "belief" and "knowledge," these categories are actually quite misleading. Knowing that George Washington was the first president of the United States and believing the statement "George Washington was the first president of the United States" amount to the same thing. When we distinguish between belief and knowledge in ordinary conversation, it is generally for the purpose of drawing attention to degrees of certainty: I'm apt to say "I know it" when I am quite certain that one of my beliefs about the world is true; when I'm less sure, I may say something like "I believe it is probably true." Most of our knowledge about the world falls between these extremes. The entire spectrum of such convictions—ranging from better-than-a-coin-toss to I-would-bet-my-life-on-it—expresses gradations of "belief."

  It is reasonable to wonder, however, whether "belief" is really a single phenomenon at the level of the brain. Our growing understanding of human memory should make us cautious: over the last fifty years, the concept of "memory" has decomposed into several forms of cognition that are now known to be neurologically and evolutionarily distinct. 9 This should make us wonder whether a notion like "belief" might not also shatter into separate processes when mapped onto the brain. In fact, belief overlaps with certain types of memory, as memory can be equivalent to a belief about the past (e.g., "I had breakfast most days last week"), 10 and certain beliefs are indistinguishable from what is often called "semantic memory" (e.g., "The earth is the third planet from the sun"). 

  There is no reason to think that any of our beliefs about the world are stored as propositions, or within discrete structures, inside the brain. 11 Merely understanding a simple proposition often requires the unconscious activation of considerable background knowledge 12 and an active process of hypothesis testing. 13 For instance, a sentence like "The team was terribly disappointed because the second stage failed to fire," while easy enough to read, cannot be understood without some general concept of a rocket launch and a team of engineers. So there is more to even basic communication than the mere decoding of words. We must expect that a similar penumbra of associations will surround specific beliefs as well.

  And yet our beliefs can be represented and expressed as discrete statements. Imagine hearing any one of the following assertions from a trusted friend:

  1. The CDC just announced that cell phones really do cause brain cancer.

  2. My brother won $100,000 in Las Vegas over the weekend.

  3. Your car is being towed.

  We trade in such representations of the world all the time. The acceptance of such statements as true (or likely to be true) is the mechanism by which we acquire most of our knowledge about the world. While it would not make any sense to search for structures in the brain that correspond to specific sentences, we may be able to understand the brain states that allow us to accept such sentences as true. 14 When someone says "Your car is being towed," it is your acceptance of this statement as true that sends you racing out the door. "Belief," therefore, can be thought of as a process taking place in the present; it is the act of grasping, not the thing grasped.

  The Oxford English Dictionary defines multiple senses of the term "belief":

  1. The mental action, condition, or habit, of trusting to or confiding in a person or thing; trust, dependence, reliance, confidence, faith.

  2. Mental acceptance of a proposition, statement, or fact as true, on the ground of authority or evidence; assent of the mind to a statement, or to the truth of a fact beyond observation, on the testimony of another, or to a fact or truth on the evidence of consciousness; the mental condition involved in this assent.

  3. The thing believed; the proposition or set of propositions held true.

  Definition 2 is exactly what we are after, and 1 may apply as well. These first two senses of the term are quite different from the data-centered meaning given in 3.

  Consider the following claim: Starbucks does not sell plutonium. I suspect that most of us would be willing to wager a fair amount of money that this statement is generally true—which is to say that we believe it. However, before reading this statement, you are very unlikely to have considered the prospect that the world's most popular coffee chain might also trade in one of the world's most dangerous substances. Therefore, it does not seem possible for there to have been a structure in your brain that already corresponded to this belief. And yet you clearly harbored some representation of the world that amounts to this belief. Many modes of information processing must lay the groundwork for us to judge the above statement as "true." Most of us know, in a variety of implicit and explicit ways, that Starbucks is not a likely proliferator of nuclear material. Several distinct capacities—episodic memory, semantic knowledge, assumptions about human behavior and economic incentives, inductive reasoning, etc.—conspire to make us accept the above proposition. To say that we already believed that one cannot buy plutonium at Starbucks is to merely put a name to the summation of these processes in the present moment: that is, "belief," in this case, is the disposition to accept a proposition as true (or likely to be).

  This process of acceptance often does more than express our prior commitments, however. It can revise our view of the world in an instant. Imagine reading the following headline in tomorrow's New York Times: "Most of the World's Coffee Is Now Contaminated by Plutonium." Believing this statement would immediately influence your thinking on many fronts, as well as your judgment about the truth of the former proposition. Most of our beliefs have come to us in just this form: as statements that we accept on the assumption that their source is reliable, or because the sheer number of sources rules out any significant likelihood of error.