Read The Moral Landscape: How Science Can Determine Human Values Page 24


  Science and rationality generally are based on intuitions and concepts that cannot be reduced or justified. Just try defining "causation" in noncircular terms. Or try justifying transitivity in logic: if A = B and B = C, then A = C. A skeptic could say, "This is nothing more than an assumption that we've built into the definition of 'equality.' Others will be free to define 'equality' differently." Yes, they will. And we will be free to call them "imbeciles." Seen in this light, moral relativism—the view that the difference between right and wrong has only local validity within a specific culture—should be no more tempting than physical, biological, mathematical, or logical relativism. There are better and worse ways to define our terms; there are more and less coherent ways to think about reality; and there are—is there any doubt about this?—many ways to seek fulfillment in this life and to not find it.

  22. We can, therefore, let this metaphysical notion of "ought" fall away, and we will be left with a scientific picture of cause and effect. To the degree that it is in our power to produce the worst possible misery for everyone in this universe, we can say that if we don't want everyone to experience the worst possible misery, we shouldn't do X. Can we readily conceive of someone who might hold altogether different values and want all conscious beings, himself included, reduced to the state of worst possible misery? I don't think so. And I don't think we can intelligibly ask questions like, "What if the worst possible misery for everyone is actually good?" Such questions seem analytically confused. We can also pose questions like "What if the most perfect circle is really a square?" or "What if all true statements are actually false?" But if someone persists in speaking this way, I see no obligation to take his views seriously.

  23. And even if minds were independent of the physical universe, we could still speak about facts relative to their well-being. But we would be speaking about some other basis for these facts (souls, disembodied consciousness, ectoplasm, etc.).

  24. On a related point, the philosopher Russell Blackford wrote in response to my TED talk, "I've never yet seen an argument that shows that psychopaths are necessarily mistaken about some fact about the world. Moreover, I don't see how the argument could run." While I discuss psychopathy in greater detail in the next chapter, here is such an argument in brief: We already know that psychopaths have brain damage that prevents them from having certain deeply satisfying experiences (like empathy) that seem good for people both personally and collectively (in that they tend to increase well-being on both counts). Psychopaths, therefore, don't know what they are missing (but we do). The position of a psychopath also cannot be generalized; it is not, therefore, an alternative view of how human beings should live (this is one point Kant got right: even a psychopath couldn't want to live in a world filled with psychopaths). We should also realize that the psychopath we are envisioning is a straw man: watch interviews with real psychopaths, and you will find that they do not tend to claim to be in possession of an alternative morality or to be living deeply fulfilling lives. These people are generally ruled by compulsions that they don't understand and cannot resist. It is absolutely clear that, whatever they might believe about what they are doing, psychopaths are seeking some form of well-being (excitement, ecstasy, feelings of power, etc.), but because of their neurological and social deficits, they are doing a very bad job of it. We can say that a psychopath like Ted Bundy takes satisfaction in the wrong things, because living a life purposed toward raping and killing women does not allow for deeper and more generalizable forms of human flourishing. Compare Bundy's deficits to those of a delusional physicist who finds meaningful patterns and mathematical significance in the wrong places. 
The mathematician John Nash, while suffering the symptoms of his schizophrenia, seems a good example: his "Eureka!" detectors were poorly calibrated; he saw meaningful patterns where his peers would not—and these patterns were a very poor guide to the proper goals of science (i.e., understanding the physical world). Is there any doubt that Ted Bundy's "Yes! I love this!" detectors were poorly coupled to the possibilities of finding deep fulfillment in this life, or that his obsession with raping and killing young women was a poor guide to the proper goals of morality (i.e., living a fulfilling life with others)?

  While people like Bundy may want some very weird things out of life, no one wants utter, interminable misery. People with apparently different moral codes are still seeking forms of well-being that we recognize—like freedom from pain, doubt, fear, etc.—and their moral codes, however vigorously they might want to defend them, are undermining their well-being in obvious ways. And if someone claims to want to be truly miserable, we are free to treat them like someone who claims to believe that 2 + 2 = 5 or that all events are self-caused. On the subject of morality, as on every other subject, some people are not worth listening to.

  25. From the White House press release: www.bioethics.gov/about/creation.html.

  26. Oxytocin is a neuroactive hormone that appears to govern social recognition in animals and the experience of trust (and its reciprocation) in humans (Zak, Kurzban, & Matzner, 2005; Zak, Stanton, & Ahmadi, 2007).

  27. Appiah, 2008, p. 41.

  28. The Stanford Encyclopedia of Philosophy has this to say on the subject of moral relativism:

  In 1947, on the occasion of the United Nations debate about universal human rights, the American Anthropological Association issued a statement declaring that moral values are relative to cultures and that there is no way of showing that the values of one culture are better than those of another. Anthropologists have never been unanimous in asserting this, and in recent years human rights advocacy on the part of some anthropologists has mitigated the relativist orientation of the discipline. Nonetheless, prominent contemporary anthropologists such as Clifford Geertz and Richard A. Shweder continue to defend relativist positions. http://plato.stanford.edu/entries/moral-relativism/.

  1947? Please note that this was the best the social scientists in the United States could do with the crematoria of Auschwitz still smoking. My spoken and written collisions with Richard Shweder, Scott Atran, Mel Konner, and other anthropologists have convinced me that awareness of moral diversity does not entail, and is a poor surrogate for, clear thinking about human well-being.

  29. Pinker, 2002, p. 273.

  30. Harding, 2001.

  31. For a more complete demolition of feminist and multicultural critiques of Western science, see P. R. Gross, 1991; P. R. Gross & Levitt, 1994.

  32. Weinberg, 2001, p. 105.

  33. Dennett, 1995.

  34. Ibid., p. 487.

  35. See, for instance, M. D. Hauser, 2006. Experiments show that even eight-month-old infants want to see aggressors punished (Bloom, 2010).

  36. www.gallup.com/poll/118378/Majority-Americans-Continue-Oppose-Gay-Marriage.aspx.

  37. There is now a separate field called "neuroethics," formed by a confluence of neuroscience and philosophy, which loosely focuses on matters of this sort. Neuroethics is more than bioethics with respect to the brain (that is, it is more than an ethical framework for the conduct of neuroscience): it encompasses our efforts to understand ethics itself as a biological phenomenon. There is a quickly growing literature on neuroethics (recent, book-length introductions can be found in Gazzaniga, 2005, and Levy, 2007), and there are other neuroethical issues that are relevant to this discussion: concerns about mental privacy, lie detection, and the other implications of an advancing science of neuroimaging; personal responsibility in light of deterministic and random processes in the brain (neither of which lends any credence to common notions of "free will"); the ethics of emotional and cognitive enhancement; the implications of understanding "spiritual" experience in physical terms; etc.

  Chapter 2: Good and Evil

  1. Consider, for instance, how much time and money we spend to secure our homes, places of business, and cars against unwanted entry (and to have doors professionally unlocked when keys are lost). Consider the cost of internet and credit card security, and the time dissipated in the use and retrieval of passwords. When phone service is interrupted for five minutes in a modern society, the cost is measured in billions of dollars. I think it safe to say that the costs of preventing theft are far higher. Add to the expense of locking doors the pains we take to prepare formal contracts—locks of another sort—and the costs soar beyond all reckoning. Imagine a world that had no need for such prophylactics against theft (admittedly, it is difficult). It would be a world of far greater disposable wealth (measured in both time and money).

  2. There are other ways of thinking about human cooperation, including politics and law, but I take the normative claims of ethics to be foundational.

  3. Hamilton, 1964a, 1964b.

  4. McElreath & Boyd, 2007, p. 82.

  5. Trivers, 1971.

  6. G. F. Miller, 2007.

  7. For a recent review that also looks at the phenomenon of indirect reciprocity (i.e., A gives to B; and then B gives to C, or C gives to A, or both), see Nowak, 2005. For doubts about the sufficiency of kin selection and reciprocal altruism to account for cooperation—especially among eusocial insects—see D. S. Wilson & Wilson, 2007; E. O. Wilson, 2005.

  8. Tomasello, 2007.

  9. Smith, [1759] 1853, p. 3.

  10. Ibid., pp. 192-193.

  11. Benedict, 1934, p. 172.

  12. Consequentialism has undergone many refinements since the original utilitarianism of Jeremy Bentham and John Stuart Mill. My discussion will ignore most of these developments, as they are generally of interest only to academic philosophers. The Stanford Encyclopedia of Philosophy provides a good summary article (Sinnott-Armstrong, 2006).

  13. J. D. Greene, 2007; J. D. Greene, Nystrom, Engell, Darley, & Cohen, 2004; J. D. Greene, Sommerville, Nystrom, Darley, & Cohen, 2001.

  14. J. D. Greene, 2002, pp. 59-60.

  15. Ibid., pp. 204-205.

  16. Ibid., p. 264.

  17. Let us briefly cover a few more philosophical bases: What would have to be true for a practice like the forced veiling of women to be objectively wrong? Would this practice have to cause unnecessary suffering in all possible worlds? No. It need only cause unnecessary suffering in this world. Must it be analytically true that compulsory veiling is immoral—that is, must the wrongness of the act be built into the meaning of the word "veil"? No. Must it be true a priori—that is, must this practice be wrong independent of human experience? No. The wrongness of the act very much depends on human experience. It is wrong to force women and girls to wear burqas because it is unpleasant and impractical to live fully veiled, because this practice perpetuates a view of women as being the property of men, and because it keeps the men who enforce it brutally obtuse to the possibility of real equality and communication between the sexes. Hobbling half of the population also directly subtracts from the economic, social, and intellectual wealth of a society. Given the challenges that face every society, this is a bad practice in almost every case. Must compulsory veiling be ethically unacceptable without exception in our world? No. We can easily imagine situations in which forcing one's daughter to wear a burqa could be perfectly moral—perhaps to escape the attention of thuggish men while traveling in rural Afghanistan. Does this slide from brute, analytic, a priori, and necessary truth to synthetic, a posteriori, contingent, exception-ridden truth pose a problem for moral realism? Recall the analogy I drew between morality and chess. Is it always wrong to surrender your Queen in a game of chess? No. But generally speaking, it is a terrible idea. Even granting the existence of an uncountable number of exceptions to this rule, there are still objectively good and objectively bad moves in every game of chess.
Are we in a position to say that the treatment of women in traditional Muslim societies is generally bad? Absolutely we are. Should there be any doubt, I recommend that readers consult Ayaan Hirsi Ali's several fine books on the subject (A. Hirsi Ali, 2006, 2007, 2010).

  18. J. D. Greene, 2002, pp. 287-288.

  19. The philosopher Richard Joyce (2006) has argued that the evolutionary origins of moral beliefs undermine them in ways that the evolutionary origins of mathematical and scientific beliefs do not. I do not find his reasoning convincing, however. For instance, Joyce asserts that our mathematical and scientific intuitions could have been selected for only by virtue of their accuracy, whereas our moral intuitions were selected for based on an entirely different standard. In the case of arithmetic (which he takes as his model), this may seem plausible. But science has progressed by violating many (if not most) of our innate, proto-scientific intuitions about the nature of reality. By Joyce's reasoning, we should view these violations as a likely step away from the Truth.

  20. Greene's argument actually seems somewhat peculiar: consequentialism cannot be true, because there is simply too much diversity of opinion about morality; and yet he seems to believe that most people will converge on consequentialist principles if given enough time to reflect.

  21. Faison, 1996.

  22. Dennett, 1995, p. 498.

  23. Churchland, 2008a.

  24. Slovic, 2007.

  25. This seems related to a more general finding in the reasoning literature, in which people are often found to put more weight on a salient anecdote than on large-sample statistics (Fong, Krantz, & Nisbett, 1986; Stanovich & West, 2000). It also appears to be an especially perverse version of what Kahneman and Frederick call "extension neglect" (Kahneman & Frederick, 2005), in which our valuations reliably fail to increase with the size of a problem. For instance, the value most people will place on saving 2,000 lives will be less than twice as large as the value they will place on saving 1,000 lives. Slovic's result, however, suggests that it could be less valuable (even if the larger group contained the smaller). If ever there were a nonnormative result in moral psychology, this is it.

  26. There may be some exceptions to this principle: for instance, if you thought that either child would suffer intolerably if the other died, you might believe that both dying would be preferable to one dying. Whether or not such cases actually exist, they are clearly exceptions to the general rule that negative consequences should be additive.

  27. Does this sound crazy? Jane McGonigal designs games with such real-world outcomes in mind: www.iftf.org/user/46.

  28. Parfit, 1984.

  29. While Parfit's argument is rightfully celebrated, and Reasons and Persons is a philosophical masterpiece, a very similar observation first appears in Rawls, [1971] 1999, pp. 140-141.

  30. For instance:

  How Only France Survives. In one possible future, the worst-off people in the world soon start to have lives that are well worth living. The quality of life in different nations then continues to rise. Though each nation has its fair share of the world's resources, such things as climate and cultural traditions give to some nations a higher quality of life. The best-off people, for many centuries, are the French.

  In another possible future, a new infectious disease makes nearly everyone sterile. French scientists produce just enough of an antidote for all of France's population. All other nations cease to exist. This has some bad effects on the quality of life for the surviving French. Thus there is no new foreign art, literature, or technology that the French can import. These and other bad effects outweigh any good effects. Throughout this second possible future the French therefore have a quality of life that is slightly lower than it would be in the first possible future (Parfit, ibid., p. 421).

  31. P. Singer, 2009, p. 139.

  32. Graham Holm, 2010.

  33. Kahneman, 2003.

  34. LeBoeuf & Shafir, 2005.

  35. Tom, Fox, Trepel, & Poldrack, 2007. But as the authors note, this protocol examined the brain's appraisal of potential loss (i.e., decision utility) rather than experienced losses, where other studies suggest that negative affect and associated amygdala activity can be expected.

  36. Pizarro and Uhlmann make a similar observation (D. A. Pizarro & Uhlmann, 2008).

  37. Redelmeier, Katz, & Kahneman, 2003.

  38. Schreiber & Kahneman, 2000.

  39. Kahneman, 2003.

  40. Rawls, [1971] 1999; Rawls & Kelly, 2001.

  41. S. Harris, 2004, 2006a, 2006d.

  42. He later refined his view, arguing that justice as fairness must be understood as "a political conception of justice rather than as part of a comprehensive moral doctrine" (Rawls & Kelly, 2001, p. xvi).

  43. Rawls, [1971] 1999, p. 27.

  44. Tabibnia, Satpute, & Lieberman, 2008.

  45. It is not unreasonable, therefore, to expect people who are seeking to maximize their well-being to also value fairness. Valuing fairness, they will tend to view its breach as less than ethical—that is, as not being conducive to their collective well-being. But what if they don't? What if the laws of nature allow for different and seemingly antithetical peaks on the moral landscape? What if there is a possible world in which the Golden Rule has become an unshakable instinct, while there is another world of equivalent happiness where the inhabitants reflexively violate it? Perhaps this is a world of perfectly matched sadists and masochists. Let's assume that in this world every person can be paired, one-for-one, with the saints in the first world, and while they are different in every other way, these pairs are identical in every way relevant to their well-being. Stipulating all these things, the consequentialist would be forced to say that these worlds are morally equivalent. Is this a problem? I don't think so. The problem lies in how many details we have been forced to ignore in the process of getting to this point. What possible reason do we have to worry that the principles of human well-being are this elastic? This is like worrying that there is a possible world in which the laws of physics, while as consistent as they are in our world, are completely antithetical to physics as we know it. Okay, what if? Exactly how much should this possibility concern us as we try to predict the behavior of matter in our world?