  For that matter, many writers think, why stop there? Better still to insist that all categories are social constructions and therefore figments, because that would really make invidious stereotypes figments. Rorty notes with approval that many thinkers today “go on to suggest that quarks and genes probably are [inventions] too.” Postmodernists and other relativists attack truth and objectivity not so much because they are interested in philosophical problems of ontology and epistemology but because they feel it is the best way to pull the rug out from under racists, sexists, and homophobes. The philosopher Ian Hacking provides a list of almost forty categories that have recently been claimed to be “socially constructed.” The prime examples are race, gender, masculinity, nature, facts, reality, and the past. But the list has been growing and now includes authorship, AIDS, brotherhood, choice, danger, dementia, illness, Indian forests, inequality, the Landsat satellite system, the medicalized immigrant, the nation-state, quarks, school success, serial homicide, technological systems, white-collar crime, women refugees, and Zulu nationalism. According to Hacking, the common thread is a conviction that the category is not determined by the nature of things and therefore is not inevitable. The further implication is that we would be much better off if it were done away with or radically transformed.9

  This whole enterprise is based on an unstated theory of human concept formation: that conceptual categories bear no systematic relation to things in the world but are socially constructed (and can therefore be reconstructed). Is it a correct theory? In some cases it has a grain of truth. As we saw in Chapter 4, some categories really are social constructions: they exist only because people tacitly agree to act as if they exist. Examples include money, tenure, citizenship, decorations for bravery, and the presidency of the United States.10 But that does not mean that all conceptual categories are socially constructed. Concept formation has been studied for decades by cognitive psychologists, and they conclude that most concepts pick out categories of objects in the world which had some kind of reality before we ever stopped to think about them.11

  Yes, every snowflake is unique, and no category will do complete justice to every one of its members. But intelligence depends on lumping together things that share properties, so that we are not flabbergasted by every new thing we encounter. As William James wrote, “A polyp would be a conceptual thinker if a feeling of ‘Hollo! thingumbob again!’ ever flitted through its mind.” We perceive some traits of a new object, place it in a mental category, and infer that it is likely to have the other traits typical of that category, ones we cannot perceive. If it walks like a duck and quacks like a duck, it probably is a duck. If it’s a duck, it’s likely to swim, fly, have a back off which water rolls, and contain meat that’s tasty when wrapped in a pancake with scallions and hoisin sauce.

  This kind of inference works because the world really does contain ducks, which really do share properties. If we lived in a world in which walking quacking objects were no more likely to contain meat than did any other object, the category “duck” would be useless and we probably would not have evolved the ability to form it. If you were to construct a giant spreadsheet in which the rows and columns were traits that people notice and the cells were filled in by objects that possess that combination of traits, the pattern of filled cells would be lumpy. You would find lots of entries at the intersection of the “quacks” row and the “waddles” column but none at the “quacks” row and the “gallops” column. Once you specify the rows and columns, the lumpiness comes from the world, not from society or language. It is no coincidence that the same living things tend to be classified together by the words in European cultures, the words for plant and animal kinds in other cultures (including preliterate cultures), and the Linnaean taxa of professional biologists equipped with calipers, dissecting tools, and DNA sequencers. Ducks, biologists say, are several dozen species in the subfamily Anatinae, each with a distinct anatomy, an ability to interbreed with other members of their species, and a common ancestor in evolutionary history.
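
  To make the "spreadsheet" metaphor concrete, here is a minimal sketch in Python (not part of the original text); the creatures, traits, and entries are invented for illustration. The point is simply that the filled cells fall into lumps rather than being scattered evenly across the grid.

```python
# A toy trait-by-object matrix illustrating the "spreadsheet" metaphor.
# The objects and traits below are invented for illustration only.

objects = {
    "mallard": {"quacks", "waddles", "swims", "flies"},
    "teal":    {"quacks", "waddles", "swims", "flies"},
    "muscovy": {"quacks", "waddles", "swims"},
    "horse":   {"gallops", "neighs"},
    "zebra":   {"gallops", "neighs"},
}
traits = ["quacks", "waddles", "swims", "flies", "gallops", "neighs"]

# Print the table: the Xs cluster into duck-like and horse-like lumps instead
# of filling the possible trait combinations evenly.
print(f"{'':10}" + "".join(f"{t:>9}" for t in traits))
for name, feats in objects.items():
    print(f"{name:>10}" + "".join(f"{'X' if t in feats else '.':>9}" for t in traits))
```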

  Most cognitive psychologists believe that conceptual categories come from two mental processes.12 One of them notices clumps of entries in the mental spreadsheet and treats them as categories with fuzzy boundaries, prototypical members, and overlapping similarities, like the members of a family. That’s why our mental category “duck” can embrace odd ducks that don’t match the prototypical duck, such as lame ducks, who cannot swim or fly, Muscovy ducks, which have claws and spurs on their feet, and Donald Duck, who talks and wears clothing. The other mental process looks for crisp rules and definitions and enters them into chains of reasoning. The second system can learn that true ducks molt twice a season and have overlapping scales on their legs and hence that certain birds that look like geese and are called geese really are ducks. Even when people don’t know these facts from academic biology, they have a strong intuition that species are defined by an internal essence or hidden trait that lawfully gives rise to its visible features.13
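
  As a rough sketch of the two-process idea (again not from the original text; the feature names, the prototype, and the thresholds are invented), the first process can be caricatured as graded similarity to a prototype, the second as an all-or-none definitional rule:

```python
# A minimal sketch of the two categorization processes described above.
# Feature names, the prototype set, and the rule details are illustrative only.

DUCK_PROTOTYPE = {"quacks", "waddles", "swims", "flies"}

def prototype_similarity(features):
    """Fuzzy, family-resemblance judgment: share of prototype traits present."""
    return len(features & DUCK_PROTOTYPE) / len(DUCK_PROTOTYPE)

def satisfies_duck_rule(features):
    """Crisp, rule-based judgment: a stipulated definition, applied all or none."""
    return {"molts_twice_a_season", "overlapping_leg_scales"} <= features

donald = {"quacks", "waddles", "talks", "wears_clothing"}
goose_like_bird = {"honks", "molts_twice_a_season", "overlapping_leg_scales"}

print(prototype_similarity(donald))          # 0.5: duck-ish despite the oddities
print(satisfies_duck_rule(goose_like_bird))  # True: "really" a duck by the rule
```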

  Anyone who teaches the psychology of categorization has been hit with this question from a puzzled student: “You’re telling us that putting things into categories is rational and makes us smart. But we’ve always been taught that putting people into categories is irrational and makes us sexist and racist. If categorization is so great when we think about ducks and chairs, why is it so terrible when we think about genders and ethnic groups?” As with many ingenuous questions from students, this one uncovers a shortcoming in the literature, not a flaw in their understanding.

  The idea that stereotypes are inherently irrational owes more to a condescension toward ordinary people than it does to good psychological research. Many researchers, having shown that stereotypes existed in the minds of their subjects, assumed that the stereotypes had to be irrational, because they were uncomfortable with the possibility that some trait might be statistically true of some group. They never actually checked. That began to change in the 1980s, and now a fair amount is known about the accuracy of stereotypes.14

  With some important exceptions, stereotypes are in fact not inaccurate when assessed against objective benchmarks such as census figures or the reports of the stereotyped people themselves. People who believe that African Americans are more likely to be on welfare than whites, that Jews have higher average incomes than WASPs, that business students are more conservative than students in the arts, that women are more likely than men to want to lose weight, and that men are more likely than women to swat a fly with their bare hands, are not being irrational or bigoted. Those beliefs are correct. People’s stereotypes are generally consistent with the statistics, and in many cases their bias is to underestimate the real differences between sexes or ethnic groups.15 This does not mean that the stereotyped traits are unchangeable, of course, or that people think they are unchangeable, only that people perceive the traits fairly accurately at the time.

  Moreover, even when people believe that ethnic groups have characteristic traits, they are never mindless stereotypers who literally believe that each and every member of the group possesses those traits. People may think that Germans are, on average, more efficient than non-Germans, but no one believes that every last German is more efficient than every non-German.16 And people have no trouble overriding a stereotype when they have good information about an individual. Contrary to a common accusation, teachers’ impressions of their individual pupils are not contaminated by their stereotypes of race, gender, or socioeconomic status. The teachers’ impressions accurately reflect the pupil’s performance as measured by objective tests.17

  Now for the important exceptions. Stereotypes can be downright inaccurate when a person has few or no firsthand encounters with the stereotyped group, or belongs to a group that is overtly hostile to the one being judged. During World War II, when the Russians were allies of the United States and the Germans were enemies, Americans judged Russians to have more positive traits than Germans. Soon afterward, when the alliances reversed, Americans judged Germans to have more positive traits than Russians.18

  Also, people’s ability to set aside stereotypes when judging an individual is accomplished by their conscious, deliberate reasoning. When people are distracted or put under pressure to respond quickly, they are more likely to judge that a member of an ethnic group has all the stereotyped traits of the group.19 This comes from the two-part design of the human categorization system mentioned earlier. Our network of fuzzy associations naturally reverts to a stereotype when we first encounter an individual. But our rule-based categorizer can block out those associations and make deductions based on the relevant facts about that individual. It can do so either for practical reasons, when information about a group-wide average is less diagnostic than information about the individual, or for social and moral reasons, out of respect for the imperative that one ought to ignore certain group-wide averages when judging an individual.

  The upshot of this research is not that stereotypes are always accurate but that they are not always false, or even usually false. This is just what we would expect if human categorization—like the rest of the mind—is an adaptation that keeps track of aspects of the world that are relevant to our long-term well-being. As the social psychologist Roger Brown pointed out, the main difference between categories of people and categories of other things is that when you use a prototypical exemplar to stand for a category of things, no one takes offense. When Webster’s dictionary used a sparrow to stand for all birds, “emus and ostriches and penguins and eagles did not go on the attack.” But just imagine what would have happened if Webster’s had used a picture of a soccer mom to illustrate woman and a picture of a business executive to illustrate man. Brown remarks, “Of course, people would be right to take offense since a prototype can never represent the variation that exists in natural categories. It’s just that birds don’t care but people do.”20

  What are the implications of the fact that many stereotypes are statistically accurate? One is that contemporary scientific research on sex differences cannot be dismissed just because some of the findings are consistent with traditional stereotypes of men and women. Some parts of those stereotypes may be false, but the mere fact that they are stereotypes does not prove that they are false in every respect.

  The partial accuracy of many stereotypes does not, of course, mean that racism, sexism, and ethnic prejudice are acceptable. Quite apart from the democratic principle that in the public sphere people should be treated as individuals, there are good reasons to be concerned about stereotypes. Stereotypes based on hostile depictions rather than on firsthand experience are bound to be inaccurate. And some stereotypes are accurate only because of self-fulfilling prophecies. Forty years ago it may have been factually correct that few women and African Americans were qualified to be chief executives or presidential candidates. But that was only because of barriers that prevented them from attaining those qualifications, such as university policies that refused them admission out of a belief that they were not qualified. The institutional barriers had to be dismantled before the facts could change. The good news is that when the facts do change, people’s stereotypes can change with them.

  What about policies that go farther and actively compensate for prejudicial stereotypes, such as quotas and preferences that favor underrepresented groups? Some defenders of these policies assume that gatekeepers are incurably afflicted with baseless prejudices, and that quotas must be kept in place forever to neutralize their effects. The research on stereotype accuracy refutes that argument. Nonetheless, the research might support a different argument for preferences and other gender- and color-sensitive policies. Stereotypes, even when they are accurate, might be self-fulfilling, and not just in the obvious case of institutionalized barriers like those that kept women and African Americans out of universities and professions. Many people have heard of the Pygmalion effect, in which people perform as other people (such as teachers) expect them to perform. As it happens, the Pygmalion effect appears to be small or nonexistent, but there are more subtle forms of self-fulfilling prophecies.21 If subjective decisions about people, such as admissions, hiring, credit, and salaries, are based in part on group-wide averages, they will conspire to make the rich richer and the poor poorer. Women are marginalized in academia, making them genuinely less influential, which increases their marginalization. African Americans are treated as poorer credit risks and denied credit, which makes them less likely to succeed, which makes them poorer credit risks. Race- and gender-sensitive policies, according to arguments by the psychologist Virginia Valian, the economist Glenn Loury, and the philosopher James Flynn, may be needed to break the vicious cycle.22

  Pushing in the other direction is the finding that stereotypes are least accurate when they pertain to a coalition that is pitted against one’s own in hostile competition. This should make us nervous about identity politics, in which public institutions identify their members in terms of their race, gender, and ethnic group and weigh every policy by how it favors one group over another. In many universities, for example, minority students are earmarked for special orientation sessions and encouraged to view their entire academic experience through the lens of their group and how it has been victimized. By implicitly pitting one group against another, such policies may cause each group to brew stereotypes about the other that are more pejorative than the ones they would develop in personal encounters. As with other policy issues I examine in this book, the data from the lab do not offer a thumbs-up or thumbs-down verdict on race- and gender-conscious policies. But by highlighting the features of our psychology that different policies engage, the findings can make the tradeoffs clearer and the debates better informed.

  OF ALL THE faculties that go into the piece of work called man, language may be the most awe-inspiring. “Remember that you are a human being with a soul and the divine gift of articulate speech,” Henry Higgins implored Eliza Doolittle. Galileo’s alter ego, humbled by the arts and inventions of his day, commented on language in its written form:

  But surpassing all stupendous inventions, what sublimity of mind was his who dreamed of finding means to communicate his deepest thoughts to any other person, though distant by mighty intervals of place and time! Of talking with those who are in India; of speaking to those who are not yet born and will not be born for a thousand or ten thousand years; and with what facility, by the different arrangements of twenty characters upon a page!23

  But a funny thing happened to language in intellectual life. Rather than being appreciated for its ability to communicate thought, it was condemned for its power to constrain thought. Famous quotations from two philosophers capture the anxiety. “We have to cease to think if we refuse to do it in the prisonhouse of language,” wrote Friedrich Nietzsche. “The limits of my language mean the limits of my world,” wrote Ludwig Wittgenstein.

  How could language exert this stranglehold? It would if words and phrases were the medium of thought itself, an idea that falls naturally out of the Blank Slate. If there is nothing in the intellect that was not first in the senses, then words picked up by the ears are the obvious source of any abstract thought that cannot be reduced to sights, smells, or other sounds. Watson tried to explain thinking as microscopic movements of the mouth and throat; Skinner hoped his 1957 book Verbal Behavior, which explained language as a repertoire of rewarded responses, would bridge the gap between pigeons and people.

  The other social sciences also tended to equate language with thought. Boas’s student Edward Sapir called attention to differences in how languages carve up the world into categories, and Sapir’s student Benjamin Whorf stretched those observations into the famous Linguistic Determinism hypothesis: “We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to an agreement to organize it in this way—an agreement that holds throughout our speech community and is codified in the patterns of our language. The agreement is, of course, an implicit and unstated one, but its terms are absolutely obligatory!”24 More recently, the anthropologist Clifford Geertz wrote that “thinking consists not of ‘happenings in the head’ (though happenings there and elsewhere are necessary for it to occur) but of a traffic in what have been called… significant symbols—words for the most part.”25

  As with so many ideas in social science, the centrality of language is taken to extremes in deconstructionism, postmodernism, and other relativist doctrines. The writings of oracles like Jacques Derrida are studded with such aphorisms as “No escape from language is possible,” “Text is self-referential,” “Language is power,” and “There is nothing outside the text.” Similarly, J. Hillis Miller wrote that “language is not an instrument or tool in man’s hands, a submissive means of thinking. Language rather thinks man and his ‘world’… if he will allow it to do so.”26 The prize for the most extreme statement must go to Roland Barthes, who declared, “Man does not exist prior to language, either as a species or as an individual.”27

  The ancestry of these ideas is said to be from linguistics, though most linguists believe that deconstructionists have gone off the deep end. The original observation was that many words are defined in part by their relationship to other words. For example, he is defined by its contrast with I, you, they, and she, and big makes sense only as the opposite of little. And if you look up words in a dictionary, they are defined by other words, which are defined by still other words, until the circle is completed when you get back to a definition containing the original word. Therefore, say the deconstructionists, language is a self-contained system in which words have no necessary connection to reality. And since language is an arbitrary instrument, not a medium for communicating thoughts or describing reality, the powerful can use it to manipulate and oppress others. This leads in turn to an agitation for linguistic reforms: neologisms like co or na that would serve as gender-neutral pronouns, a succession of new terms for racial minorities, and a rejection of standards of clarity in criticism and scholarship (for if language is no longer a window onto thought but the very stuff of thought, the metaphor of “clarity” no longer applies).
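
  The circularity observed in dictionaries can be pictured with a toy sketch (not from the original text; the mini-dictionary below is invented): every definition points only at other words, so following the chain eventually loops back to the word you started with.

```python
# A toy illustration of definitional circularity: definitions point only to
# other words, and chains of lookups eventually loop back on themselves.
# The mini-dictionary below is invented for illustration.

mini_dictionary = {
    "big":    ["large"],
    "large":  ["great", "size"],
    "great":  ["big"],          # ...and we are back where we started
    "size":   ["extent"],
    "extent": ["size"],
}

def find_cycle(word, seen=None):
    """Follow definitions until some word repeats, returning the circular path."""
    seen = seen or []
    if word in seen:
        return seen + [word]
    for next_word in mini_dictionary.get(word, []):
        path = find_cycle(next_word, seen + [word])
        if path:
            return path
    return None

print(find_cycle("big"))   # ['big', 'large', 'great', 'big']
```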