The Moral Landscape: How Science Can Determine Human Values


  But perhaps there are two possible worlds that maximize the well-being of their inhabitants to precisely the same degree: in world X everyone is focused on the welfare of all others without bias, while in world Y everyone shows some degree of moral preference for their friends and family. Perhaps these worlds are equally good, in that their inhabitants enjoy precisely the same level of well-being. These could be thought of as two peaks on the moral landscape. Perhaps there are others. Does this pose a threat to moral realism or to consequentialism? No, because there would still be right and wrong ways to move from our current position on the moral landscape toward one peak or the other, and movement would still be a matter of increasing well-being in the end.

  To bring the discussion back to the especially low-hanging fruit of conservative Islam: there is absolutely no reason to think that demonizing homosexuals, stoning adulterers, veiling women, soliciting the murder of artists and intellectuals, and celebrating the exploits of suicide bombers will move humanity toward a peak on the moral landscape. This is, I think, as objective a claim as we ever make in science.

  Consider the Danish cartoon controversy: an eruption of religious insanity that still flows to this day. Kurt Westergaard, the cartoonist who drew what was arguably the most inflammatory of these utterly benign cartoons, has lived in hiding since pious Muslims first began calling for his murder in 2006. A few weeks ago—more than three years after the controversy first began—a Somali man broke into Westergaard's home with an axe. Only the construction of a specially designed "safe room" allowed Westergaard to escape being slaughtered for the glory of God (his five-year-old granddaughter also witnessed the attack). Westergaard now lives with continuous police protection—as do the other eighty-seven men in Denmark who have the misfortune of being named "Kurt Westergaard." 32

  The peculiar concerns of Islam have created communities in almost every society on earth that grow so unhinged in the face of criticism that they will reliably riot, burn embassies, and seek to kill peaceful people, over cartoons. This is something they will not do, incidentally, in protest over the continuous atrocities committed against them by their fellow Muslims. The reasons why such a terrifying inversion of priorities does not tend to maximize human happiness are susceptible to many levels of analysis—ranging from biochemistry to economics. But do we need further information in this case? It seems to me that we already know enough about the human condition to know that killing cartoonists for blasphemy does not lead anywhere worth going on the moral landscape.

  There are other results in psychology and behavioral economics that make it difficult to assess changes in human well-being. For instance, people tend to consider losses to be far more significant than forsaken gains, even when the net result is the same. When presented with a wager in which they stand a 50 percent chance of losing $100, most people will consider anything less than a potential gain of $200 to be unattractive. This bias relates to what has come to be known as "the endowment effect": people demand more money in exchange for an object that has been given to them than they would spend to acquire the object in the first place. In psychologist Daniel Kahneman's words, "a good is worth more when it is considered as something that could be lost or given up than when it is evaluated as a potential gain." 33 This aversion to loss causes human beings to generally err on the side of maintaining the status quo. It is also an important impediment to conflict resolution through negotiation: if each party values his opponent's concessions as gains and his own as losses, each is bound to perceive his sacrifice as being greater. 34
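The arithmetic behind this wager can be sketched with a toy, prospect-theory-style value function. The function names and the loss-aversion coefficient of 2 are illustrative assumptions chosen to match the $100/$200 example, not measured constants:

```python
def subjective_value(outcome, loss_aversion=2.0):
    """Toy prospect-theory-style value function: losses loom larger than gains.

    A loss_aversion coefficient of ~2 is an illustrative assumption here,
    matching the text's example of a $100 loss offsetting a $200 gain.
    """
    if outcome >= 0:
        return outcome
    return loss_aversion * outcome  # losses are weighted more heavily


def wager_appeal(p_win, gain, loss, loss_aversion=2.0):
    """Expected subjective value of a wager with win probability p_win."""
    return (p_win * subjective_value(gain, loss_aversion)
            + (1 - p_win) * subjective_value(-loss, loss_aversion))


# A 50% chance of losing $100: a potential $150 gain still feels unattractive...
print(wager_appeal(0.5, 150, 100))  # 0.5*150 - 0.5*200 = -25.0
# ...while a $200 gain merely breaks even in subjective terms.
print(wager_appeal(0.5, 200, 100))  # 0.5*200 - 0.5*200 = 0.0
```

In expected-dollar terms both wagers are favorable; the asymmetric weighting of losses is what makes anything under a $200 upside feel like a bad bet.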

  Loss aversion has been studied with functional magnetic resonance imaging (fMRI). If this bias were the result of negative feelings associated with potential loss, we would expect brain regions known to govern negative emotion to be involved. However, researchers have not found increased activity in any areas of the brain as losses increase. Instead, those regions that represent gains show decreasing activity as the size of the potential losses increases. In fact, these brain structures themselves exhibit a pattern of "neural loss aversion": their activity decreases at a steeper rate in the face of potential losses than they increase for potential gains. 35

  There are clearly cases in which such biases seem to produce moral illusions—where a person's view of right and wrong will depend on whether an outcome is described in terms of gains or losses. Some of these illusions might not be susceptible to full correction. As with many perceptual illusions, it may be impossible to "see" two circumstances as morally equivalent, even while "knowing" that they are. In such cases, it may be ethical to ignore how things seem. Or it may be that the path we take to arrive at identical outcomes really does matter to us—and, therefore, that losses and gains will remain incommensurable.

  Imagine, for instance, that you are empaneled as a member of a jury in a civil trial and asked to determine how much a hospital should pay in damages to the parents of children who received substandard care in their facility. There are two scenarios to consider:

  Couple A learned that their three-year-old daughter was inadvertently given a neurotoxin by the hospital staff. Before being admitted, their daughter was a musical prodigy with an IQ of 195. She has since lost all her intellectual gifts. She can no longer play music with any facility and her IQ is now a perfectly average 100.

  Couple B learned that the hospital neglected to give their three-year-old daughter, who has an IQ of 100, a perfectly safe and inexpensive genetic enhancement that would have given her remarkable musical talent and nearly doubled her IQ. Their daughter's intelligence remains average, and she lacks any noticeable musical gifts. The critical period for giving this enhancement has passed.

  Obviously the end result under either scenario is the same. But what if the mental suffering associated with loss is simply bound to be greater than that associated with forsaken gains? If so, it may be appropriate to take this difference into account, even when we cannot give a rational explanation of why it is worse to lose something than not to gain it. This is another source of difficulty in the moral domain: unlike dilemmas in behavioral economics, it is often difficult to establish the criteria by which two outcomes can be judged equivalent. 36 There is probably another principle at work in this example, however: people tend to view sins of commission more harshly than sins of omission. It is not clear how we should account for this bias either. But, once again, to say that there are right answers to questions of how to maximize human well-being is not to say that we will always be in a position to answer such questions. There will be peaks and valleys on the moral landscape, and movement between them is clearly possible, whether or not we always know which way is up.

  There are many other features of our subjectivity that have implications for morality. For instance, people tend to evaluate an experience based on its peak intensity (whether positive or negative) and the quality of its final moments. In psychology, this is known as the "peak/end rule." Testing this rule in a clinical environment, one group found that patients undergoing colonoscopies (in the days when this procedure was done without anesthetic) could have their perception of suffering markedly reduced, and their likelihood of returning for a follow-up exam increased, if their physician needlessly prolonged the procedure at its lowest level of discomfort by leaving the colonoscope inserted for a few extra minutes. 37 The same principle seems to hold for aversive sounds 38 and for exposure to cold. 39 Such findings suggest that, under certain conditions, it is compassionate to prolong a person's pain unnecessarily so as to reduce his memory of suffering later on. Indeed, it might be unethical to do otherwise. Needless to say, this is a profoundly counterintuitive result. But this is precisely what is so important about science: it allows us to investigate the world, and our place within it, in ways that get behind first appearances. Why shouldn't we do this with morality and human values generally?
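The colonoscopy finding can be illustrated with a toy calculation. One simple model of the peak/end rule treats remembered discomfort as the average of the episode's worst moment and its final moment, largely ignoring duration; the averaging formula and the pain scores below are assumptions for illustration, not the researchers' actual model or data:

```python
def remembered_discomfort(pain_per_minute):
    """Toy peak/end model: memory of an episode ~ average of its
    worst moment and its last moment, largely ignoring duration."""
    return (max(pain_per_minute) + pain_per_minute[-1]) / 2


# A short exam that ends at its most painful moment (0-10 pain scale)...
short_exam = [4, 6, 8]
# ...versus the same exam needlessly prolonged at low discomfort.
prolonged_exam = [4, 6, 8, 2, 1]

print(remembered_discomfort(short_exam))      # (8 + 8) / 2 = 8.0
print(remembered_discomfort(prolonged_exam))  # (8 + 1) / 2 = 4.5
```

The prolonged exam involves strictly more total pain, yet under this model it is remembered as much less unpleasant, which is exactly the counterintuitive result described above.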

  Fairness and Hierarchy

  It is widely believed that focusing on the consequences of a person's actions is merely one of several approaches to ethics—one that is beset by paradox and often impossible to implement. Imagined alternatives are either highly rational, as in the work of a modern philosopher like John Rawls, 40 or decidedly otherwise, as we see in the disparate and often contradictory precepts that issue from the world's major religions.

  My reasons for dismissing revealed religion as a source of moral guidance have been spelled out elsewhere, 41 so I will not ride this hobbyhorse here, apart from pointing out the obvious: (1) there are many revealed religions available to us, and they offer mutually incompatible doctrines; (2) the scriptures of many religions, including the most well-subscribed (i.e., Christianity and Islam), countenance patently unethical practices like slavery; (3) the faculty we use to validate religious precepts, judging the Golden Rule to be wise and the murder of apostates to be foolish, is something we bring to scripture; it does not, therefore, come from scripture; (4) the reasons for believing that any of the world's religions were "revealed" to our ancestors (rather than merely invented by men and women who did not have the benefit of a twenty-first-century education) are either risible or nonexistent—and the idea that each of these mutually contradictory doctrines is inerrant remains a logical impossibility. Here we can take refuge in Bertrand Russell's famous remark that even if we could be certain that one of the world's religions was perfectly true, given the sheer number of conflicting faiths on offer, every believer should expect damnation purely as a matter of probability.
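Russell's point is ultimately arithmetic, and can be made explicit in a few lines. The figure of 4,200 religions is a rough, commonly cited count used here purely as an assumption; the logic holds for any large N:

```python
# Toy arithmetic behind Russell's remark: if N mutually incompatible
# religions each claim exclusive truth, at most one can be right, so a
# believer's chance of having picked the true one is at best 1/N.
n_religions = 4200  # rough, commonly cited count; an assumption for illustration

p_chose_correctly = 1 / n_religions
p_damnation = 1 - p_chose_correctly  # probability of having picked wrong

print(f"P(chose the one true faith) <= {p_chose_correctly:.6f}")
print(f"P(damnation, by this logic) >= {p_damnation:.4%}")
```

Whatever the true count of mutually exclusive faiths, the conclusion is insensitive to it: for any N much greater than one, each believer's expected odds are overwhelmingly against having chosen correctly.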

  Among the rational challenges to consequentialism, the "contractualism" of John Rawls has been the most influential in recent decades. In his book A Theory of Justice Rawls offered an approach to building a fair society that he considered an alternative to the aim of maximizing human welfare. 42 His primary method, for which this work is duly famous, was to ask how reasonable people would structure a society, guided by their self-interest, if they couldn't know what sort of person they would be in it. Rawls called this novel starting point "the original position," from which each person must judge the fairness of every law and social arrangement from behind a "veil of ignorance." In other words, we can design any society we like as long as we do not presume to know, in advance, whether we will be black or white, male or female, young or old, healthy or sick, of high or low intelligence, beautiful or ugly, etc.

  As a method for judging questions of fairness, this thought experiment is undeniably brilliant. But is it really an alternative to thinking about the actual consequences of our behavior? How would we feel if, after structuring our ideal society from behind a veil of ignorance, we were told by an omniscient being that we had made a few choices that, though eminently fair, would lead to the unnecessary misery of millions, while parameters that were ever-so-slightly less fair would entail no such suffering? Could we be indifferent to this information? The moment we conceive of justice as being fully separable from human well-being, we are faced with the prospect of there being morally "right" actions and social systems that are, on balance, detrimental to the welfare of everyone affected by them. To simply bite the bullet on this point, as Rawls seemed to do, saying "there is no reason to think that just institutions will maximize the good" 43 seems a mere embrace of moral and philosophical defeat.

  Some people worry that a commitment to maximizing a society's welfare could lead us to sacrifice the rights and liberties of the few wherever these losses would be offset by the greater gains of the many. Why not have a society in which a few slaves are continually worked to death for the pleasure of the rest? The worry is that a focus on collective welfare does not seem to respect people as ends in themselves. And whose welfare should we care about? The pleasure that a racist takes in abusing some minority group, for instance, seems on all fours with the pleasure a saint takes in risking his life to help a stranger. If there are more racists than saints, it seems the racists will win, and we will be obliged to build a society that maximizes the pleasure of unjust men.

  But such concerns clearly rest on an incomplete picture of human well-being. To the degree that treating people as ends in themselves is a good way to safeguard human well-being, it is precisely what we should do. Fairness is not merely an abstract principle—it is a felt experience. We all know this from the inside, of course, but neuroimaging has also shown that fairness drives reward-related activity in the brain, while accepting unfair proposals requires the regulation of negative emotion. 44 Taking others' interests into account, making impartial decisions (and knowing that others will make them), rendering help to the needy—these are experiences that contribute to our psychological and social well-being. It seems perfectly reasonable, within a consequentialist framework, for each of us to submit to a system of justice in which our immediate, selfish interests will often be superseded by considerations of fairness. It is only reasonable, however, on the assumption that everyone will tend to be better off under such a system. As, it seems, they will. 45

  While each individual's search for happiness may not be compatible in every instance with our efforts to build a just society, we should not lose sight of the fact that societies do not suffer; people do. The only thing wrong with injustice is that it is, on some level, actually or potentially bad for people. 46 Injustice makes its victims demonstrably less happy, and it could be easily argued that it tends to make its perpetrators less happy than they would be if they cared about the well-being of others. Injustice also destroys trust, making it difficult for strangers to cooperate. Of course, here we are talking about the nature of conscious experience, and so we are, of necessity, talking about processes at work in the brains of human beings. The neuroscience of morality and social emotions is only just beginning, but there seems no question that it will one day deliver morally relevant insights regarding the material causes of our happiness and suffering. While there may be some surprises in store for us down this path, there is every reason to expect that kindness, compassion, fairness, and other classically "good" traits will be vindicated neuroscientifically—which is to say that we will only discover further reasons to believe that they are good for us, in that they generally enhance our lives.

  We have already begun to see that morality, like rationality, implies the existence of certain norms—that is, it does not merely describe how we tend to think and behave; it tells us how we should think and behave. One norm that morality and rationality share is the interchangeability of perspective. 47 The solution to a problem should not depend on whether you are the husband or the wife, the employer or employee, the creditor or debtor, etc. This is why one cannot argue for the rightness of one's views on the basis of mere preference. In the moral sphere, this requirement lies at the core of what we mean by "fairness." It also reveals why it is generally not a good thing to have a different ethical code for friends and strangers.

  We have all met people who behave quite differently in business than in their personal lives. While they would never lie to their friends, they might lie without a qualm to their clients or customers. Why is this a moral failing? At the very least, it is vulnerable to what could be called the principle of the unpleasant surprise. Consider what happens to such a person when he discovers that one of his customers is actually a friend: "Oh, why didn't you say you were Jennifer's sister! Uh... okay, don't buy that model; this one is a much better deal." Such moments expose a rift in a person's ethics that is always unflattering. People with two ethical codes are perpetually susceptible to embarrassments of this kind. They are also less trustworthy—and trust is a measure of how much a person can be relied upon to safeguard other people's well-being. Even if you happen to be a close friend of such a person—that is, on the right side of his ethics—you can't trust him to interact with others you may care about ("I didn't know she was your daughter. Sorry about that").

  Or consider the position of a Nazi living under the Third Reich, having fully committed himself to exterminating the world's Jews, only to learn, as many did, that he was Jewish himself. Unless some compelling argument for the moral necessity of his suicide were
forthcoming, we can imagine that it would be difficult for our protagonist to square his Nazi ethics with his actual identity. Clearly, his sense of right and wrong was predicated on a false belief about his own genealogy. A genuine ethics should not be vulnerable to such unpleasant surprises. This seems another way of arriving at Rawls's "original position." That which is right cannot be dependent upon one's being a member of a certain tribe—if for no other reason than one can be mistaken about the fact of one's membership.