37. Stanovich & West, 2000.
38. Fong et al., 1986. Once again, asking whether something is rationally or morally normative is distinct from asking whether it has been evolutionarily adaptive. Some psychologists have sought to minimize the significance of the research on cognitive bias by suggesting that subjects make decisions using heuristics that conferred adaptive fitness on our ancestors. As Stanovich and West (2000) observe, what serves the genes does not necessarily advance the interests of the individual. We could also add that what serves the individual in one context may not serve him in another. The cognitive and emotional mechanisms that may (or may not) have optimized us for face-to-face conflict (and its resolution) have clearly not prepared us to negotiate conflicts waged from afar—whether with email or other long-range weaponry.
39. Ehrlinger, Johnson, Banner, Dunning, & Kruger, 2008; Kruger & Dunning, 1999.
40. Jost, Glaser, Kruglanski, & Sulloway, 2003. Amodio et al. (2007) used EEG to look for differences in neurocognitive function between liberals and conservatives on a Go/No-Go task. They found that liberalism correlated with increased event-related potentials in the anterior cingulate cortex (ACC). Given the ACC's well-established role in mediating cognitive conflict, they concluded that this difference might, in part, explain why liberals are less set in their ways than conservatives, and more aware of nuance, ambiguity, etc. Inzlicht (2009) found a nearly identical result for religious nonbelievers versus believers.
41. Rosenblatt, Greenberg, Solomon, Pyszczynski, & Lyon, 1989.
42. Jost et al., 2003, p. 369.
43. D. A. Pizarro & Uhlmann, 2008.
44. Kruglanski, 1999. The psychologist Drew Westen describes motivated reasoning as "a form of implicit affect regulation in which the brain converges on solutions that minimize negative and maximize positive affect states" (Westen, Blagov, Harenski, Kilts, & Hamann, 2006). This seems apt.
45. The fact that this principle often breaks down, spectacularly and unselfconsciously, in the domain of religion is precisely why one can reasonably question whether the world's religions are in touch with reality at all.
46. Bechara et al., 2000; Bechara, Damasio, Tranel, & Damasio, 1997; A. Damasio, 1999.
47. S. Harris et al., 2008.
48. Burton, 2008.
49. Frith, 2008, p. 45.
50. Silver, 2006, pp. 77-78.
51. But this allele has also been linked to a variety of psychological traits, like novelty seeking and extraversion, which might also account for its persistence in the genome (Benjamin et al., 1996).
52. Burton, 2008, pp. 188-195.
53. Joseph, 2009.
54. Houreld, 2009; LaFraniere, 2007; Harris, 2009.
55. Mlodinow, 2008.
56. Wittgenstein, 1969, p. 206.
57. Analogical reasoning is generally considered a form of induction (Holyoak, 2005).
58. Sloman & Lagnado, 2005; Tenenbaum, Kemp, & Shafto, 2007.
59. For a review of the literature on deductive reasoning see Evans, 2005.
60. Cf. J. S. B. T. Evans, 2005, pp. 178-179.
61. For example, Canessa et al., 2005; Goel, Gold, Kapur, & Houle, 1997; Osherson et al., 1998; Prabhakaran, Rypma, & Gabrieli, 2001; Prado, Noveck, & Van Der Henst, 2009; Rodriguez-Moreno & Hirsch, 2009; Strange, Henson, Friston, & Dolan, 2001. Goel and Dolan (2003a) found that when syllogistic reasoning was modulated by a strong belief bias, the ventromedial prefrontal cortex was preferentially engaged, while such reasoning without an effective belief bias appeared to be driven by a greater activation of the (right) lateral prefrontal cortex. Elliott et al. (1997) found that guessing appears to be mediated by the ventromedial prefrontal cortex. Bechara et al. (1997) report that patients suffering ventromedial prefrontal damage fail to act according to their correct conceptual beliefs while engaged in a gambling task. Prior to our 2008 study, it was unclear how these findings would relate to belief and disbelief per se. They suggested, however, that the medial prefrontal cortex would be among our regions of interest.
While decision making is surely related to belief processing, the "decisions" that neuroscientists have tended to study are those that precede voluntary movements in tests of sensory discrimination (Glimcher, 2002). The initiation of such movements requires the judgment that a target stimulus has appeared—we might even say that this entails the "belief" that an event has occurred—but such studies are not designed to examine belief as a propositional attitude. Decision making in the face of potential reward is obviously of great interest to anyone who would understand the roots of human and animal behavior, but the link to belief per se appears tenuous. For instance, in a visual-decision task (in which monkeys were trained to detect the coherent motion of random dots and signal their direction with eye movements), Gold and Shadlen found that the brain regions responsible for this sensory judgment were the very regions that subsequently initiated the behavioral response (Gold & Shadlen, 2000, 2002; Shadlen & Newsome, 2001). Neurons in these regions appear to act as integrators of sensory information, initiating the trained behavior whenever a threshold of activation has been reached. We might be tempted to say, therefore, that the "belief" that a stimulus is moving to the left is located in the lateral intraparietal area, the frontal eye fields, and the superior colliculus—as these are the brain regions responsible for initiating eye movements. But here we are talking about the "beliefs" of a monkey—a monkey that has been trained to reproduce a stereotyped response to a specific stimulus in expectation of an immediate reward. This is not the kind of "belief" that has been the subject of my research.
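The integrate-to-threshold dynamic described above is easy to caricature in code. The following is a minimal sketch, assuming Gaussian momentary evidence; the drift, noise, and threshold values are illustrative assumptions, not figures from Gold and Shadlen's recordings.

```python
# Minimal accumulate-to-threshold ("integrator") decision sketch.
# All parameter values are illustrative assumptions.
import random

def accumulate_to_threshold(drift=0.1, noise=1.0, threshold=10.0, seed=None):
    """Integrate noisy evidence until a decision bound is crossed.

    Returns the choice ("left" or "right") and the number of time
    steps taken, a rough proxy for reaction time.
    """
    rng = random.Random(seed)
    evidence = 0.0
    steps = 0
    while abs(evidence) < threshold:
        evidence += drift + rng.gauss(0.0, noise)  # signal plus noise
        steps += 1
    return ("left" if evidence > 0 else "right"), steps

choice, rt = accumulate_to_threshold(seed=42)
print(f"choice={choice}, steps={rt}")
```

Raising the threshold trades speed for accuracy, which is the basic intuition behind treating these neurons as integrators.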
The literature on decision making has generally sought to address the link between voluntary action, error detection, and reward. Insofar as the brain's reward system involves a prediction that a specific behavior will lead to future reward, we might say that this is a matter of belief formation—but there is nothing to indicate that such beliefs are explicit, linguistically mediated, or propositional. We know that they cannot be, as most studies of reward processing have been done in rodents, monkeys, titmice, and pigeons. This literature has investigated the link between sensory judgments and motor responses, not the difference between belief and disbelief in matters of propositional truth. This is not to minimize the fascinating progress that has occurred in this field. In fact, the same economic modeling that allows behavioral ecologists to account for the foraging behavior of animal groups also allows neurophysiologists to describe the activity of the neuronal assemblies that govern an individual animal's response to differential rewards (Glimcher, 2002). There is also a growing literature on neuroeconomics, which examines human decision making (as well as trust and reciprocity) using neuroimaging. Some of these findings are discussed here.
62. This becomes especially feasible using more sophisticated techniques of data analysis, like multivariate pattern classification (Cox & Savoy, 2003; P. K. Douglas, Harris, & Cohen, 2009). Most analyses of fMRI data are univariate and merely look for correlations between the activity at each point in the brain and the task paradigm. This approach ignores the interrelationships that surely exist between regions. Cox and Savoy demonstrated that a multivariate approach, in which statistical pattern recognition methods are used to look for correlations across all regions, allows for a very subtle analysis of fMRI data in a way that is far more sensitive to distributed patterns of activity (Cox & Savoy, 2003). With this approach, they were able to determine which visual stimulus a subject was viewing (out of ten possible types) by examining a mere 20 seconds of his experimental run.
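A minimal sketch of the multivariate idea, assuming synthetic "voxel" data and scikit-learn's linear support-vector classifier; this illustrates classifying distributed patterns rather than testing voxels one at a time, and is not Cox and Savoy's actual pipeline.

```python
# Multivariate pattern classification on synthetic fMRI-like data.
# The signal is deliberately weak and distributed across voxels,
# where a voxel-by-voxel univariate test would struggle.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 100, 500
y = rng.integers(0, 2, n_trials)           # two stimulus classes
X = rng.normal(size=(n_trials, n_voxels))  # baseline noise
X[y == 1, :20] += 0.5                      # weak distributed signal

clf = LinearSVC(dual=False)                # learns the joint voxel pattern
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```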
Pamela Douglas, a graduate student in Mark Cohen's cognitive neuroscience lab at UCLA, recently took a similar approach to analyzing my original belief data (P. K. Douglas, Harris, & Cohen, 2009). She built a machine-learning classifier by first performing an unsupervised independent component (IC) analysis on each of our subjects' three scanning sessions. She then selected the IC time-course values that corresponded to the maximum value of the hemodynamic response function (HRF) following either "belief" or "disbelief" events. These values were fed into a selection process, whereby ICs that were "good predictors" were promoted as features for training a Naive Bayes classifier. To test the accuracy of her classification, Douglas performed a leave-one-out cross-validation. Using this criterion, her Naive Bayes classifier correctly labeled the "left out" trial 90 percent of the time. Given such results, it does not seem far-fetched that, with further refinements in both hardware and techniques of data analysis, fMRI could become a means for accurate lie detection.
63. Holden, 2001.
64. Broad, 2002.
65. Pavlidis, Eberhardt, & Levine, 2002.
66. Allen & Iacono, 1997; Farwell & Donchin, 1991. Spence et al. (2001) appear to have published the first neuroimaging study on deception. Their research suggests that "deception" is associated with bilateral increases in activity in the ventrolateral prefrontal cortex (BA47), a region often associated with response inhibition and the suppression of inappropriate behavior (Goldberg, 2001).
The results of the Spence study were susceptible to some obvious limitations, however—perhaps most glaring was the fact that the subjects were told precisely when to lie by being given a visual cue. Needless to say, this did much to rob the experiment of verisimilitude. The natural ecology of deception is one in which a potential liar must notice when questions draw near to factual terrain that he is committed to keeping hidden, and he must lie as the situation warrants, while respecting the criteria for logical coherence and consistency that he and his interlocutor share. (It is worth noting that unless one respects the norms of reasoning and belief formation, it is impossible to lie successfully. This is not an accident.) To be asked to lie automatically in response to a visual cue simply does not simulate ordinary acts of deception. Spence et al. did much to remedy this problem in a subsequent study, where subjects could lie at their own discretion about topics related to their personal histories (Spence, Kaylor-Hughes, Farrow, & Wilkinson, 2008). This study largely replicated their findings with respect to the primary involvement of the ventrolateral PFC (though now almost entirely in the left hemisphere). There have been other neuroimaging studies of deception—as "guilty knowledge" (Langleben et al., 2002), "feigned memory impairment" (Lee et al., 2005), etc.—but the challenge, apart from reliably finding the neural correlates of any of these states, is to find a result that generalizes to all forms of deception.
It is not entirely obvious that these studies have given us a sound basis for detecting deception through neuroimaging. Focusing on the neural correlates of belief and disbelief might obviate whatever differences exist between types of deception, the mode of stimulus presentation, etc. Is there a difference, for instance, between denying what is true and asserting what is false? Recasting the question in terms of a proposition to be believed or disbelieved might circumvent any problem posed by the "directionality" of a lie. Another group (Abe et al., 2006) took steps to address the directionality issue by asking subjects to alternately deny true knowledge and assert false knowledge. However, this study suffered from the usual limitations, in that subjects were directed when to lie, and their lies were limited to whether they had previously viewed an experimental stimulus.
A functional neuroanatomy of belief might also add to our understanding of the placebo response—which can be both profound and profoundly unhelpful to the process of vetting pharmaceuticals. For instance, 65 percent to 80 percent of the effect of antidepressant medication seems attributable to positive expectation (Kirsch, 2000). There are even forms of surgery that, while effective, are no more effective than sham procedures (Ariely, 2008). While some neuroimaging work has been done in this area, the placebo response is currently operationalized in terms of symptom relief, without reference to a subject's underlying state of mind (Lieberman et al., 2004; Wager et al., 2004). Finding the neural correlates of belief might allow us to eventually control for this effect during the process of drug design.
67. Stoller & Wolpe, 2007.
68. Grann, 2009.
69. There are, however, reasons to doubt that our current methods of neuroimaging, like fMRI, will yield a practical mind-reading technology. Functional MRI studies as a group have several important limitations. Perhaps first and most important are those of statistical power and sensitivity. If one chooses to analyze one's data at extremely conservative thresholds to exclude the possibility of type I (false positive) detection errors, this necessarily increases one's type II (false negative) error. Further, most studies implicitly assume uniform detection sensitivity throughout the brain, a condition known to be violated for the low-bandwidth, fast-imaging scans used for fMRI. Field inhomogeneity also tends to increase the magnitude of motion artifacts. When motion is correlated to the stimuli, this can produce false positive activations, especially in the cortex.
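A toy calculation makes the threshold tradeoff concrete; the voxel count below is an assumption typical of a whole-brain scan, not a figure from any particular study.

```python
# Why conservative thresholds are unavoidable, and what they cost.
# ~100,000 voxels is an assumed, typical whole-brain count.
n_voxels = 100_000
alpha = 0.05

expected_false_positives = n_voxels * alpha  # uncorrected: ~5,000 voxels
bonferroni_threshold = alpha / n_voxels      # corrected: p < 5e-7 per voxel

print(f"uncorrected, expected false positives: {expected_false_positives:.0f}")
print(f"Bonferroni-corrected per-voxel threshold: {bonferroni_threshold:.0e}")
# The corrected threshold suppresses false positives (type I errors) but
# is strict enough that weak true activations will be missed (type II).
```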
We may also discover that the underlying physics of neuroimaging grants only so much scope for human ingenuity. If so, an era of cheap, covert lie detection might never dawn, and we will be forced to rely upon some relentlessly costly, cumbersome technology. Even so, I think it safe to say that the time is not far off when lying, on the weightiest matters—in court, before a grand jury, during important business negotiations, etc.—will become a practical impossibility. This fact will be widely publicized, of course, and the relevant technology will be expected to be in place, or accessible, whenever the stakes are high. This very assurance, rather than the incessant use of these machines, will change us.
70. Ball, 2009.
71. Pizarro & Uhlmann, 2008.
72. Kahneman, 2003.
73. Rosenhan, 1973.
74. McNeil, Pauker, Sox, & Tversky, 1982.
75. There are other reasoning biases that can affect medical decisions. It is well known, for instance, that the presence of two similar options can create "decisional conflict," biasing a choice in favor of a third alternative. In one experiment, neurologists and neurosurgeons were asked to determine which patients to admit to surgery first. Half the subjects were given a choice between a woman in her early fifties and a man in his seventies. The other half were given the same two patients, plus another woman in her fifties who was difficult to distinguish from the first: 38 percent of doctors chose to operate on the older man in the first scenario; 58 percent chose him in the second (LeBoeuf & Shafir, 2005). This is a bigger change in outcomes than might be apparent at first glance: in the first case, the woman's chance of getting the surgery is 62 percent; in the second it is 21 percent.
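Spelled out, assuming the two nearly indistinguishable women split the remaining choices evenly:

```latex
\[
P_{1}(\text{woman}) = 1 - 0.38 = 0.62, \qquad
P_{2}(\text{first woman}) \approx \frac{1 - 0.58}{2} = 0.21
\]
```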
Chapter 4: Religion
1. Marx, [1843] 1971.
2. Freud, [1930] 1994; Freud & Strachey, [1927] 1975.
3. Weber, [1922] 1993.
4. Zuckerman, 2008.
5. Norris & Inglehart, 2004.
6. Finke & Stark, 1998.
7. Norris & Inglehart, 2004, p. 108.
8. It does not seem, however, that socioeconomic inequality explains religious extremism in the Muslim world, where radicals are, on average, wealthier and more educated than moderates (Atran, 2003; Esposito, 2008).
9. http://pewglobal.org/reports/display.php?ReportID=258.
10. http://pewforum.org/surveys/campaign08/.
11. Pyysiainen & Hauser, 2010.
12. Zuckerman, 2008.
13. Paul, 2009.
14. Hall, Matz, & Wood, 2010.
15. Decades of cross-cultural research on "subjective well-being" (SWB) by the World Values Survey (www.worldvaluessurvey.org) indicate that religion may make an important contribution to human happiness and life satisfaction at low levels of societal development, security, and freedom. The happiest and most secure societies, however, tend to be the most secular. The greatest predictors of a society's mean SWB are social tolerance (of homosexuals, gender equality, other religions, etc.) and personal freedom (Inglehart, Foa, Peterson, & Welzel, 2008). Of course, tolerance and personal freedom are directly linked, and neither seems to flourish under the shadow of orthodox religion.
16. Paul, 2009.
17. Culotta, 2009.
18. Buss, 2002.
19. I am indebted to the biologist Jerry Coyne for pointing this out (personal communication). The neuroscientist Mark Cohen has further observed (personal communication), however, that many traditional societies are far more tolerant of male promiscuity than of female promiscuity—for instance, the sanction for being raped has often been as bad as, or worse than, that for initiating a rape. Cohen speculates that in such cases religion may offer a post-hoc justification for a biological imperative. This may be so. I would only add that here, as elsewhere, the task of maximizing human well-being is clearly separable from Pleistocene biological imperatives.
20. Foster & Kokko, 2008.
21. Fincher, Thornhill, Murray, & Schaller, 2008.
22. Dawkins, 1994; D. Dennett, 1994; D. C. Dennett, 2006; D. S. Wilson & Wilson, 2007; E. O. Wilson, 2005; E. O. Wilson & Hölldobler, 2005, pp. 169-172; Dawkins, 2006.
23. Boyer, 2001; Durkheim & Cosman, [1912] 2001.
24. Stark, 2001, pp. 180-181.
25. Livingston, 2005.
26. Dennett, 2006.
27. http://pewforum.org/docs/?DocID=215.
28. http://pewforum.org/docs/?DocID=153.
29. Boyer, 2001, p. 302.
30. Barrett, 2000.
31. Bloom, 2004.
32. Brooks, 2009.
33. E. M. Evans, 2001.
34. Hood, 2009.
35. D'Onofrio, Eaves, Murrelle, Maes, & Spilka, 1999.
36. Previc, 2006.
37. In addition, the densities of a specific type of serotonin receptor have been inversely correlated with high scores on the "spiritual acceptance" subscale of the Temperament and Character Inventory (J. Borg, Andrée, Söderström, & Farde, 2003).
38. Asheim Hansen & Brodtkorb, 2003; Blumer, 1999; Persinger & Fisher, 1990.