The neurologist Robert Burton argues that the "feeling of knowing" (i.e., the conviction that one's judgment is correct) is a primary positive emotion that often floats free of rational processes and can occasionally become wholly detached from logical or sensory evidence. 48 He infers this from neurological disorders in which subjects display pathological certainty (e.g., schizophrenia and Cotard's delusion) and pathological uncertainty (e.g., obsessive-compulsive disorder). Burton concludes that it is irrational to expect too much of human rationality. On his account, rationality is mostly aspirational in character and often little more than a facade masking pure, unprincipled feeling.
Other neuroscientists have made similar claims. Chris Frith, a pioneer in the use of functional neuroimaging, recently wrote:
Where does conscious reasoning come into the picture? It is an attempt to justify the choice after it has been made. And it is, after all, the only way we have to try to explain to other people why we made a particular decision. But given our lack of access to the brain processes involved, our justification is often spurious: a post-hoc rationalization, or even a confabulation—a "story" born of the confusion between imagination and memory. 49
I doubt Frith meant to deny that reason ever plays a role in decision making (though the title of his essay was "No One Really Uses Reason"). He has, however, conflated two facts about the mind: while it is true that all conscious processes, including any effort of reasoning, depend upon events of which we are not conscious, this does not mean that reasoning amounts to little more than a post hoc justification of brute sentiment. We are not aware of the neurological processes that allow us to follow the rules of algebra, but this doesn't mean that we never follow these rules or that the role they play in our mathematical calculations is generally post hoc. The fact that we are unaware of most of what goes on in our brains does not render the distinction between having good reasons for what one believes and having bad ones any less clear or consequential. Nor does it suggest that internal consistency, openness to information, self-criticism, and other cognitive virtues are less valuable than we generally assume.
There are many ways to make too much of the unconscious underpinnings of human thought. For instance, Burton observes that one's thinking on many moral issues—ranging from global warming to capital punishment—will be influenced by one's tolerance for risk. In evaluating the problem of global warming, one must weigh the risk of melting the polar ice caps; in judging the ethics of capital punishment, one must consider the risk of putting innocent people to death. However, people differ significantly with respect to risk tolerance, and these differences appear to be governed by a variety of genes—including genes for the D4 dopamine receptor and the protein stathmin (which is primarily expressed in the amygdala). Believing that there can be no optimal degree of risk aversion, Burton concludes that we can never truly reason about such ethical questions. "Reason" will simply be the name we give to our unconscious (and genetically determined) biases. But is it really true to say that every degree of risk tolerance will serve our purposes equally well as we struggle to build a global civilization? Does Burton really mean to suggest that there is no basis for distinguishing healthy from unhealthy—or even suicidal—attitudes toward risk?
As it turns out, dopamine receptor genes may play a role in religious belief as well. People who have inherited the most active form of the D4 receptor are more likely to believe in miracles and to be skeptical of science; the least active forms correlate with "rational materialism." 50 Skeptics given the drug L-dopa, which increases dopamine levels, show an increased propensity to accept mystical explanations for novel phenomena. 51 The fact that religious belief is both a cultural universal and apparently tethered to the genome has led scientists like Burton to conclude that there is simply no getting rid of faith-based thinking.
It seems to me that Burton and Frith have misunderstood the significance of unconscious cognitive processes. On Burton's account, world-views will remain idiosyncratic and incommensurable, and the hope that we might persuade one another through rational argument and, thereby, fuse our cognitive horizons is not only vain but symptomatic of the very unconscious processes and frank irrationality that we would presume to expunge. This leads him to conclude that any rational criticism of religious irrationality is an unseemly waste of time:
The science-religion controversy cannot go away; it is rooted in biology... Scorpions sting. We talk of religion, afterlife, soul, higher powers, muses, purpose, reason, objectivity, pointlessness, and randomness. We cannot help ourselves... To insist that the secular and the scientific be universally adopted flies in the face of what neuroscience tells us about different personality traits generating idiosyncratic worldviews ... Different genetics, temperaments, and experience led to contrasting worldviews. Reason isn't going to bridge this gap between believers and nonbelievers. 52
The problem, however, is that we could have said the same about witchcraft. Historically, a preoccupation with witchcraft has been a cultural universal. And yet belief in magic is now in disrepute almost everywhere in the developed world. Is there a scientist on earth who would be tempted to argue that belief in the evil eye or in the demonic origins of epilepsy is bound to remain impervious to reason?
Lest the analogy between religion and witchcraft seem quaint, it is worth remembering that belief in magic and demonic possession is still epidemic in Africa. In Kenya elderly men and women are regularly burned alive as witches. 53 In Angola, Congo, and Nigeria the hysteria has mostly targeted children: thousands of unlucky boys and girls have been blinded, injected with battery acid, and otherwise put to torture in an effort to purge them of demons; others have been killed outright; many more have been disowned by their families and rendered homeless. 54 Needless to say, much of this lunacy has spread in the name of Christianity. The problem is especially intractable because the government officials charged with protecting these suspected witches also believe in witchcraft. As was the case in the Middle Ages, when the belief in witchcraft was omnipresent in Europe, only a truly panoramic ignorance about the physical causes of disease, crop failure, and life's other indignities allows this delusion to thrive.
What if we were to connect the fear of witches with the expression of a certain receptor subtype in the brain? Who would be tempted to say that the belief in witchcraft is, therefore, ineradicable?
As someone who has received many thousands of letters and emails from people who have ceased to believe in the God of Abraham, I know that pessimism about the power of reason is unwarranted. People can be led to notice the incongruities in their faith, the self-deception and wishful thinking of their coreligionists, and the growing conflict between the claims of scripture and the findings of modern science. Such reasoning can inspire them to question their attachment to doctrines that, in the vast majority of cases, were simply drummed into them at mother's knee. The truth is that people can transcend mere sentiment and clarify their thinking on almost any subject. Allowing competing views to collide—through open debate, a willingness to receive criticism, etc.—performs just such a function, often by exposing inconsistencies in a belief system that make its adherents profoundly uncomfortable. There are standards to guide us, even when opinions differ, and the violation of such standards generally seems consequential to everyone involved. Self-contradiction, for instance, is viewed as a problem no matter what one is talking about. And anyone who considers it a virtue is very unlikely to be taken seriously. Again, reason is not starkly opposed to feeling on this front; it entails a feeling for the truth.
Conversely, there are occasions when a true proposition just doesn't seem right no matter how one squints one's eyes or cocks one's head, and yet its truth can be acknowledged by anyone willing to do the necessary intellectual work. It is very difficult to grasp that tiny quantities of matter contain vast amounts of explosive energy, but the equations of physics—along with the destructive yield of our nuclear bombs—confirm that this is so. Similarly, we know that most people cannot produce or even recognize a series of digits or coin tosses that meets a statistical test for randomness. But this has not stopped us from understanding randomness mathematically—or from factoring our innate blindness to randomness into our growing understanding of cognition and economic behavior. 55
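To make the idea of such a test concrete, here is a minimal sketch, entirely my own illustration rather than anything drawn from the cited studies, of one standard check (a Wald-Wolfowitz runs test) that flags sequences whose streak structure is unlike what a fair, independent process tends to produce:

```python
import math

def runs_test(flips: str) -> float:
    """Wald-Wolfowitz runs test for a string of 'H'/'T' coin flips.
    Returns a z-score: values far from 0 suggest the sequence is unlikely
    to have come from a fair, independent process."""
    n1 = flips.count("H")
    n2 = flips.count("T")
    n = n1 + n2
    # Count runs: maximal blocks of identical outcomes.
    runs = 1 + sum(1 for a, b in zip(flips, flips[1:]) if a != b)
    mean = 2 * n1 * n2 / n + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return (runs - mean) / math.sqrt(var)

# A human trying to "look random" tends to alternate too often:
print(runs_test("HTHTHHTHTTHTHTHHTTHTHTHHTHTTHTHT"))  # z ~ +3.2: too many runs
# A fair coin produces longer streaks than intuition expects:
print(runs_test("HHHTTHHHHTTTHTHHHTTTTHHTHHHTTTHH"))  # z ~ -1.4: consistent with chance
```

Human attempts to "look random" typically alternate too often, which shows up as an excess of runs and a large positive z-score; genuinely random sequences contain longer streaks than our intuitions allow.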
The fact that reason must be rooted in our biology does not negate the principles of reason. Wittgenstein once observed that the logic of our language allows us to ask, "Was that gunfire?" but not "Was that a noise?" 56 This seems to be a contingent fact of neurology, rather than an absolute constraint upon logic. A synesthete, for instance, who experiences crosstalk between his primary senses (seeing sounds, tasting colors, etc.), might be able to pose the latter question without any contradiction. How the world seems to us (and what can be logically said about its seemings) depends upon facts about our brains. Our inability to say that an object is "red and green all over" is a fact about the biology of vision before it is a fact of logic. But that doesn't prevent us from seeing beyond this very contingency. As science advances, we are increasingly coming to understand the natural limits of our understanding.
Belief and Reasoning
There is a close relationship between belief and reasoning. Many of our beliefs are the product of inferences drawn from particular instances (induction) or from general principles (deduction), or both. Induction is the process by which we extrapolate from past observations to novel instances, anticipate future states of the world, and draw analogies from one domain to another. 57 Believing that you probably have a pancreas (because people generally have the same parts), or interpreting the look of disgust on your son's face to mean that he doesn't like Marmite, are examples of induction. This mode of thinking is especially important for ordinary cognition and for the practice of science, and there have been a variety of efforts to model it computationally. 58 Deduction, while less central to our lives, is an essential component of any logical argument. 59 If you believe that gold is more expensive than silver, and silver more expensive than tin, deduction reveals that you also believe gold to be more expensive than tin. Induction allows us to move beyond the facts already in hand; deduction allows us to make the implications of our current beliefs more explicit, to search for counterexamples, and to see whether our views are logically coherent. Of course, the boundaries between these (and other) forms of reasoning are not always easy to specify, and people succumb to a wide range of biases in both modes.
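As a toy illustration, and not a description of any of the computational models just cited, one can picture deduction as making explicit what a set of beliefs already entails, here via the transitivity of "more expensive than," and enumerative induction as something like Laplace's rule of succession:

```python
# Deduction: make the implications of current beliefs explicit.
# Believed premises: gold is pricier than silver; silver is pricier than tin.
pricier_than = {("gold", "silver"), ("silver", "tin")}

def deductive_closure(facts):
    """Repeatedly apply transitivity until no new conclusions follow."""
    facts = set(facts)
    while True:
        new = {(a, d) for (a, b) in facts for (c, d) in facts if b == c}
        if new <= facts:
            return facts
        facts |= new

print(deductive_closure(pricier_than))
# The closure now includes ('gold', 'tin') alongside the two premises.

# Induction: extrapolate from past observations to a novel instance.
# Laplace's rule of succession: having found bacteria in k of n soil
# samples, estimate the probability for the next sample as (k+1)/(n+2).
k, n = 1000, 1000
print((k + 1) / (n + 2))  # ~0.999: my garden's soil probably has bacteria too
```

Real models of induction are considerably more sophisticated, but the division of labor is the same: deduction unpacks what is already believed, while induction ventures beyond it.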
It is worth reflecting on what a reasoning bias actually is: a bias is not merely a source of error; it is a reliable pattern of error. Every bias, therefore, reveals something about the structure of the human mind. And diagnosing a pattern of errors as a "bias" can only occur with reference to specific norms—and norms can sometimes be in conflict. The norms of logic, for instance, don't always correspond to the norms of practical reasoning. An argument can be logically valid, but unsound in that it contains a false premise and, therefore, leads to a false conclusion (e.g., Scientists are smart; smart people do not make mistakes; therefore, scientists do not make mistakes). 60 Much research on deductive reasoning suggests that people have a "bias" for sound conclusions and will judge a valid argument to be invalid if its conclusion lacks credibility. It's not clear that this "belief bias" should be considered a symptom of native irrationality. Rather, it seems an instance in which the norms of abstract logic and practical reason may simply be in conflict.
Neuroimaging studies have been performed on various types of human reasoning. 61 As we have seen, however, accepting the fruits of such reasoning (i.e., belief) seems to be an independent process. While this is suggested by my own neuroimaging research, it also follows directly from the fact that reasoning accounts only for a subset of our beliefs about the world. Consider the following statements:
1. All known soil samples contain bacteria; so the soil in my garden probably contains bacteria as well (induction).
2. Dan is a philosopher, all philosophers have opinions about Nietzsche; therefore, Dan has an opinion about Nietzsche (deduction).
3. Mexico shares a border with the United States.
4. You are reading at this moment.
Each of these statements must be evaluated by different channels of neural processing (and only the first two require reasoning). And yet each has the same cognitive valence: being true, each inspires belief (or being believed, each is deemed "true"). Such cognitive acceptance allows any apparent truth to take its place in the economy of our thoughts and actions, at which time it becomes as potent as its propositional content demands.
A World Without Lying?
Knowing what a person believes is equivalent to knowing whether or not he is telling the truth. Consequently, any external means of determining which propositions a subject believes would constitute a de facto "lie detector." Neuroimaging research on belief and disbelief may one day enable researchers to put this equivalence to use in the study of deception. 62 It is possible that this new approach could circumvent many of the impediments that have hindered the study of deception in the past.
When evaluating the social cost of deception, we need to consider all of the misdeeds—premeditated murders, terrorist atrocities, genocides, Ponzi schemes, etc.—that must be nurtured and shored up, at every turn, by lies. Viewed in this wider context, deception commends itself, perhaps even above violence, as the principal enemy of human cooperation. Imagine how our world would change if, when the truth really mattered, it became impossible to lie. What would international relations be like if every time a person shaded the truth on the floor of the United Nations an alarm went off throughout the building?
The forensic use of DNA evidence has already made the act of denying one's culpability for certain actions comically ineffectual. Recall how Bill Clinton's cantatas of indignation were abruptly silenced the moment he learned that a semen-stained dress was en route to the lab. The mere threat of a DNA analysis produced what no grand jury ever could—instantaneous communication with the great man's conscience, which appeared to be located in another galaxy. We can be sure that a dependable method of lie detection would produce similar transformations, on far more consequential subjects.
The development of mind-reading technology is just beginning—but reliable lie detection will be much easier to achieve than accurate mind reading. Whether or not we ever crack the neural code, enabling us to download a person's private thoughts, memories, and perceptions without distortion, we will almost surely be able to determine, to a moral certainty, whether a person is representing his thoughts, memories, and perceptions honestly in conversation. The development of a reliable lie detector would only require a very modest advance over what is currently possible through neuroimaging.
Traditional methods for detecting deception through polygraphy never achieved widespread acceptance, 63 as they measure the peripheral signs of emotional arousal rather than the neural activity associated with deception itself. In 2002, in a 245-page report, the National Research Council (an arm of the National Academy of Sciences) dismissed the entire body of research underlying polygraphy as "weak" and "lacking in scientific rigor." 64 More modern approaches to lie detection, using thermal imaging of the eyes, 65 suffer a similar lack of specificity. Techniques that employ electrical signals at the scalp to detect "guilty knowledge" have limited application, and it is unclear how one can use these methods to differentiate guilty knowledge from other forms of knowledge in any case. 66
Methodological problems notwithstanding, it is difficult to exaggerate how fully our world would change if lie detectors ever became reliable, affordable, and unobtrusive. Rather than spirit criminal defendants and hedge fund managers off to the lab for a disconcerting hour of brain scanning, there may come a time when every courtroom or boardroom will have the requisite technology discreetly concealed behind its wood paneling. Thereafter, civilized men and women might share a common presumption: that wherever important conversations are held, the truthfulness of all participants will be monitored. Well-intentioned people would happily pass between zones of obligatory candor, and these transitions would cease to be remarkable. Just as we've come to expect that certain public spaces will be free of nudity, sex, loud swearing, and cigarette smoke—and now think nothing of the behavioral constraints imposed upon us whenever we leave the privacy of our homes—we may come to expect that certain places and occasions will require scrupulous truth telling. Many of us might no more feel deprived of the freedom to lie during a job interview or at a press conference than we currently feel deprived of the freedom to remove our pants in the supermarket. Whether or not the technology works as well as we hope, the belief that it generally does work would change our culture profoundly.
In a legal context, some scholars have already begun to worry that reliable lie detection will constitute an infringement of a person's Fifth Amendment privilege against self-incrimination. 67 However, the Fifth Amendment has already succumbed to advances in technology. The Supreme Court has ruled that defendants can be forced to provide samples of their blood, saliva, and other physical evidence that may incriminate them. Will neuroimaging data be added to this list, or will it be considered a form of forced testimony? Diaries, emails, and other records of a person's thoughts are already freely admissible as evidence. It is not at all clear that there is a distinction between these diverse sources of information that should be ethically or legally relevant to us.
In fact, the prohibition against compelled testimony itself appears to be a relic of a more superstitious age. It was once widely believed that lying under oath would damn a person's soul for eternity, and it was thought that no one, not even a murderer, should be placed between the rock of Justice and so hard a place as hell. But I doubt whether even many fundamentalist Christians currently imagine that an oath sworn on a courtroom Bible has such cosmic significance.