  In fact, everything we know outside of our personal experience is the result of our having encountered specific linguistic propositions (the sun is a star; Julius Caesar was a Roman emperor; broccoli is good for you) and found no reason (or means) to doubt them. It is "belief" in this form, as an act of acceptance, which I have sought to better understand in my neuroscientific research. 15

  Looking for Belief in the Brain

  For a physical system to be capable of complex behavior, there must be some meaningful separation between its input and output. As far as we know, this separation has been most fully achieved in the frontal lobes of the human brain. Our frontal lobes are what allow us to select among a vast range of responses to incoming information in light of our prior goals and present inferences. Such "higher-level" control of emotion and behavior is the stuff of which human personalities are made.

  Clearly, the brain's capacity to believe or disbelieve statements of fact (you left your wallet on the bar; that white powder is anthrax; your boss is in love with you) is central to the initiation, organization, and control of our most complex behaviors.

  But we are not likely to find a region of the human brain devoted solely to belief. The brain is an evolved organ, and there does not seem to be a process in nature that allows for the creation of new structures dedicated to entirely novel modes of behavior or cognition. Consequently, the brain's higher-order functions had to emerge from lower-order mechanisms. An ancient structure like the insula, for instance, helps monitor events in our gut, governing the perception of hunger and primary emotions like disgust. But it is also involved in pain perception, empathy, pride, humiliation, trust, music appreciation, and addictive behavior. 16 It may also play an important role in both belief formation and moral reasoning. Such promiscuity of function is a common feature of many regions of the brain, especially in the frontal lobes. 17

  No region of the brain evolved in a neural vacuum or in isolation from the other mutations simultaneously occurring within the genome. The human mind, therefore, is like a ship that has been built and rebuilt, plank by plank, on the open sea. Changes have been made to her sails, keel, and rudder even as the waves battered every inch of her hull. And much of our behavior and cognition, even much that now seems essential to our humanity, has not been selected for at all. There are no aspects of brain function that evolved to hold democratic elections, to run financial institutions, or to teach our children to read. We are, in every cell, the products of nature, but we have also been born again and again through culture. Much of this cultural inheritance must be realized differently in individual brains. The way in which two people think about the stock market, or recall that Christmas is a national holiday, or solve a puzzle like the Tower of Hanoi, will almost surely differ. This poses an obvious challenge when attempting to identify mental states with specific brain states. 18
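
  A brief aside on the last example: the Tower of Hanoi is apt here precisely because the puzzle itself admits a single canonical recursive solution, even though the way any two brains represent and execute that solution will differ. The following is a minimal, purely illustrative Python sketch of the canonical procedure (the function name and peg labels are illustrative, not anything from the text):

    def hanoi(n, source, target, spare):
        # Move n disks from source to target, using spare as a buffer.
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)   # move the n-1 smaller disks out of the way
        print(f"move disk {n}: {source} -> {target}")
        hanoi(n - 1, spare, target, source)   # restack the smaller disks on top

    hanoi(3, "A", "C", "B")   # prints the 2**3 - 1 = 7 moves for three disks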

  Another factor that makes the strict localization of any mental state difficult is that the human brain is characterized by massive interconnectivity: it is mostly talking to itself. 19 And the information it stores must also be more fine-grained than the concepts, symbols, objects, or states that we subjectively experience. Representation results from a pattern of activity across networks of neurons and does not generally entail stable, one-to-one mappings of things/events in the world, or concepts in the mind, to discrete structures in the brain. 20 For instance, thinking a simple thought like Jake is married cannot be the work of any single node in a network of neurons. It must emerge from a pattern of connections among many nodes. None of this bodes well for one who would seek a belief "center" in the human brain.
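
  To make the idea of distributed representation concrete, consider a toy sketch (illustrative only; it is not a model from the research described here) in which every proposition is encoded as a sparse pattern across a shared pool of units. No single unit stands for Jake is married; the proposition exists only as a pattern, and different propositions can overlap in the units they recruit:

    import zlib
    import numpy as np

    N_UNITS = 1000   # shared pool of units
    N_ACTIVE = 50    # each proposition activates ~5% of them

    def encode(proposition):
        # Deterministically map a proposition to a sparse activity pattern.
        rng = np.random.default_rng(zlib.crc32(proposition.encode()))
        pattern = np.zeros(N_UNITS)
        pattern[rng.choice(N_UNITS, size=N_ACTIVE, replace=False)] = 1.0
        return pattern

    a = encode("Jake is married")
    b = encode("Jake lives in Ohio")
    print(int(a.sum()), "units active per proposition")
    print(int(a @ b), "units shared between the two patterns")

  The point of the sketch is only that the mapping from propositions to units is many-to-many, which is why no single "node" could be the belief.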

  As part of my doctoral research at UCLA, I studied belief, disbelief, and uncertainty with functional magnetic resonance imaging (fMRI). 21 To do this, we had volunteers read statements from a wide variety of categories while we scanned their brains. After reading a proposition like "California is part of the United States" or "You have brown hair," participants would judge it to be "true," "false," or "undecidable" with the click of a button. This was, to my knowledge, the first time anyone had attempted to study belief and disbelief with the tools of neuroscience. Consequently, we had no basis to form a detailed hypothesis about which regions of the brain govern these states of mind. 22 It was, nevertheless, reasonable to expect that the prefrontal cortex (PFC) would be involved, given its wider role in controlling emotion and complex behavior. 23

  The seventeenth-century philosopher Spinoza thought that merely understanding a statement entails the tacit acceptance of its being true, while disbelief requires a subsequent process of rejection. 24 Several psychological studies seem to support this conjecture. 25 Understanding a proposition may be analogous to perceiving an object in physical space: we may accept appearances as reality until they prove otherwise. The behavioral data acquired in our research support this hypothesis, as subjects judged statements to be "true" more quickly than they judged them to be "false" or "undecidable." 26

  When we compared the mental states of belief and disbelief, we found that belief was associated with greater activity in the medial prefrontal cortex (MPFC). 27 This region of the frontal lobes is involved in linking factual knowledge with relevant emotional associations, 28 in changing behavior in response to reward, 29 and in goal-based actions. 30 The MPFC is also associated with ongoing reality monitoring, and injuries here can cause people to confabulate—that is, to make patently false statements without any apparent awareness that they are not telling the truth. 31 Whatever its cause in the brain, confabulation seems to be a condition in which belief processing has run amok. The MPFC has often been associated with self-representation, 32 and one sees more activity here when subjects think about themselves than when they think about others. 33

  The greater activity we found in the MPFC for belief compared to disbelief may reflect the greater self-relevance and/or reward value of true statements. When we believe a proposition to be true, it is as though we have taken it in hand as part of our extended self: we are saying, in effect, "This is mine. I can use this. This fits my view of the world." It seems to me that such cognitive acceptance has a distinctly positive emotional valence. We actually like the truth, and we may, in fact, dislike falsehood. 34

  The involvement of the MPFC in belief processing suggests an anatomical link between the purely cognitive aspects of belief and emotion/reward. Even judging the truth of emotionally neutral propositions engaged regions of the brain that are strongly connected to the limbic system, which governs our positive and negative affect. In fact, mathematical belief (e.g., "2 + 6 + 8 = 16") showed a similar pattern of activity to ethical belief (e.g., "It is good to let your children know that you love them"), and these were perhaps the most dissimilar sets of stimuli used in our experiment. This suggests that the physiology of belief may be the same regardless of a proposition's content. It also suggests that the division between facts and values does not make much sense in terms of underlying brain function. 35

  Of course, we can differentiate my argument concerning the moral landscape from my fMRI work on belief. I have argued that there is no gulf between facts and values, because values reduce to a certain type of fact. This is a philosophical claim, and as such, I can make it before ever venturing into the lab. However, my research on belief gives us further reason to view the split between facts and values with suspicion: First, belief appears to be largely mediated by the MPFC, which already seems to constitute an anatomical bridge between reasoning and value. Second, the MPFC appears to be similarly engaged irrespective of a belief's content. This finding of content-independence challenges the fact/value distinction very directly: for if, from the point of view of the brain, believing "the sun is a star" is importantly similar to believing "cruelty is wrong," how can we say that scientific and ethical judgments have nothing in common?

  And we can traverse the boundary between facts and values in other ways. As we are about to see, the norms of reasoning seem to apply equally to beliefs about facts and to beliefs about values. In both spheres, evidence of inconsistency and bias is always unflattering. Similarities of this kind suggest that there is a deep analogy, if not identity, between the two domains.

  The Tides of Bias

  If one wants to understand how another person thinks, it is rarely sufficient to know whether or not he believes a specific set of propositions. Two people can hold the same belief for very different reasons, and such differences generally matter. In 2003, it was one thing to believe that the United States should not invade Iraq because the ongoing war in Afghanistan was more important; it was another to believe it because you thought it an abomination for infidels to trespass on Muslim land. Knowing what a person believes on a specific subject is not identical to knowing how that person thinks.

  Decades of psychological research suggest that unconscious processes influence belief formation, and not all of them assist us in our search for truth. When asked to judge the probability that an event will occur, or the likelihood that one event caused another, people are frequently misled by a variety of factors, including the unconscious influence of extraneous information. For instance, if people are asked to recall the last four digits of their Social Security numbers and then asked to estimate the number of doctors practicing in San Francisco, the two numbers will show a statistically significant correlation. Needless to say, when the order of questions is reversed, this effect disappears. 36 There have been a few efforts to put a brave face on such departures from rationality, construing them as random performance errors or as a sign that experimental subjects have misunderstood the tasks presented to them, or even as proof that research psychologists themselves have been beguiled by false norms of reasoning. But efforts to exonerate our mental limitations have generally failed. There are some things that we are just naturally bad at. And the mistakes people tend to make across a wide range of reasoning tasks are not mere errors; they are systematic errors that are strongly associated both within and across tasks. As one might expect, many of these errors decrease as cognitive ability increases. 37 We also know that training, using both examples and formal rules, mitigates many of these problems and can improve a person's thinking. 38

  Reasoning errors aside, we know that people often acquire their beliefs about the world for reasons that are more emotional and social than strictly cognitive. Wishful thinking, self-serving bias, in-group loyalties, and frank self-deception can lead to monstrous departures from the norms of rationality. Most beliefs are evaluated against a background of other beliefs and often in the context of an ideology that a person shares with others. Consequently, people are rarely as open to revising their views as reason would seem to dictate.

  On this front, the internet has simultaneously enabled two opposing influences on belief: On the one hand, it has reduced intellectual isolation by making it more difficult for people to remain ignorant of the diversity of opinion on any given subject. But it has also allowed bad ideas to flourish—as anyone with a computer and too much time on his hands can broadcast his point of view and, often enough, find an audience. So while knowledge is increasingly open-source, ignorance is, too.

  It is also true that the less competent a person is in a given domain, the more he will tend to overestimate his abilities. This often produces an ugly marriage of confidence and ignorance that is very difficult to correct for. 39 Conversely, those who are more knowledgeable about a subject tend to be acutely aware of the greater expertise of others. This creates a rather unlovely asymmetry in public discourse—one that is generally on display whenever scientists debate religious apologists. For instance, when a scientist speaks with appropriate circumspection about controversies in his field, or about the limits of his own understanding, his opponent will often make wildly unjustified assertions about just which religious doctrines can be inserted into the space provided. Thus, one often finds people with no scientific training speaking with apparent certainty about the theological implications of quantum mechanics, cosmology, or molecular biology.

  This point merits a brief aside: while it is a standard rhetorical move in such debates to accuse scientists of being "arrogant," the level of humility in scientific discourse is, in fact, one of its most striking characteristics. In my experience, arrogance is about as common at a scientific conference as nudity. At any scientific meeting you will find presenter after presenter couching his or her remarks in caveats and apologies. When asked to comment on something that lies to either side of the very knife edge of their special expertise, even Nobel laureates will say things like, "Well, this isn't really my area, but I would suspect that X is..." or "I'm sure there are several people in this room who know more about this than I do, but as far as I know, X is..." The totality of scientific knowledge now doubles every few years. Given how much there is to know, all scientists live with the constant awareness that whenever they open their mouths in the presence of other scientists, they are guaranteed to be speaking to someone who knows more about a specific topic than they do.

  Cognitive biases cannot help but influence our public discourse. Consider political conservatism: this is a fairly well-defined perspective that is characterized by a general discomfort with societal change and a ready acceptance of social inequality. As simple as political conservatism is to describe, we know that it is governed by many factors. The psychologist John Jost and colleagues analyzed data from twelve countries, acquired from 23,000 subjects, and found this attitude to be correlated with dogmatism, inflexibility, death anxiety, and need for closure, and anticorrelated with openness to experience, cognitive complexity, self-esteem, and social stability. 40 Even the manipulation of a single one of these variables can affect political opinions and behavior. For instance, merely reminding people of the fact of death increases their inclination to punish transgressors and to reward those who uphold cultural norms. One experiment showed that judges could be led to impose especially harsh penalties on prostitutes if they were simply prompted to think about death prior to their deliberations. 41

  And yet after reviewing the literature linking political conservatism to many obvious sources of bias, Jost and his coauthors reach the following conclusion:

  Conservative ideologies, like virtually all other belief systems, are adopted in part because they satisfy various psychological needs. To say that ideological belief systems have a strong motivational basis is not to say that they are unprincipled, unwarranted, or unresponsive to reason and evidence. 42

  This has more than a whiff of euphemism about it. Surely we can say that a belief system known to be especially beholden to dogmatism, inflexibility, death anxiety, and a need for closure will be less principled, less warranted, and less responsive to reason and evidence than it would otherwise be.

  This is not to say that liberalism isn't also clouded by certain biases. In a recent study of moral reasoning, 43 subjects were asked to judge whether it was morally correct to sacrifice the life of one person to save one hundred, while being given subtle clues as to the races of the people involved. Conservatives proved less biased by race than liberals and, therefore, more even-handed. Liberals, as it turns out, were very eager to sacrifice a white person to save one hundred nonwhites, but not the other way around, all the while maintaining that considerations of race had not entered into their thinking. The point, of course, is that science increasingly allows us to identify aspects of our minds that cause us to deviate from norms of factual and moral reasoning, norms which, when made explicit, are generally acknowledged to be valid by all parties.

  There is a sense in which all cognition can be said to be motivated: one is motivated to understand the world, to be in touch with reality, to remove doubt, etc. Alternatively, one might say that motivation is an aspect of cognition itself. 44 Nevertheless, motives like wanting to find the truth, not wanting to be mistaken, etc., tend to align with epistemic goals in a way that many other commitments do not. As we have begun to see, all reasoning may be inextricable from emotion. But if a person's primary motivation in holding a belief is to hew to a positive state of mind (to mitigate feelings of anxiety, embarrassment, or guilt, for instance), this is precisely what we mean by phrases like "wishful thinking" and "self-deception." Such a person will, of necessity, be less responsive to valid chains of evidence and argument that run counter to the beliefs he is seeking to maintain. To point out non-epistemic motives in another's view of the world, therefore, is always a criticism, as it serves to cast doubt upon a person's connection to the world as it is. 45

  Mistaking Our Limits

  We have long known, principally through the neurological work of Antonio Damasio and colleagues, that certain types of reasoning are inseparable from emotion. 46 To reason effectively, we must have a feeling for the truth. Our first fMRI study of belief and disbelief seemed to bear this out. 47 If believing a mathematical equation (vs. disbelieving another) and believing an ethical proposition (vs. disbelieving another) produce the same changes in neurophysiology, the boundary between scientific dispassion and judgments of value becomes difficult to establish.

  However, such findings do not in the least diminish the importance of reason, nor do they blur the distinction between justified and unjustified belief. On the contrary, the inseparability of reason and emotion confirms that the validity of a belief cannot merely depend on the conviction felt by its adherents; it rests on the chains of evidence and argument that link it to reality. Feeling may be necessary to judge the truth, but it cannot be sufficient.