  DARPA had an ulterior motive in developing this valuable research tool. A motive that would fundamentally affect all the networks that followed, and perhaps alter society forever. Ironically, this fantastic device for peaceful rambunctiousness arose out of bloody-minded contemplations, then called “thinking about the unthinkable.”

  Pondering what to do during and after a nuclear war.

  Back in 1964, Pentagon officials asked the Rand Corporation to imagine a transcontinental communication system that might stand a chance of surviving even an atomic cataclysm. Since every major telephone, telegraph, and radio junction would surely be targeted, generals were desperate for some way to coordinate with government, industry, and troops in the field, even after a first strike against U.S. territory.

  Rand researcher Paul Baran found that such a survivable system was theoretically possible. It would be a dispersed entity, avoiding all the classic principles of communications infrastructure, such as central switching and control centers. Instead, Baran reasoned that a system robust enough to withstand mega-calamity ought to emulate the way early telephone companies proliferated across New York City a century ago, when wires were strung from lampposts to balconies and fire escapes. The early jumble of excess circuits and linkages seemed inefficient, and that chaotic phase passed swiftly as phone companies unified. But all those extra cables did offer one advantage. They ensured that whole chunks of the network could be (and often were) ripped or burned out, and calls could still be detoured around the damage.

  In those days, long-distance call routing was a laborious task of negotiation, planned well in advance by human operators arranging connections from one zone to the next. But this drudgery might be avoided in a dispersed computer network if the messages themselves could navigate, finding their own way from node to node, carrying destination information in their lead bits like the address on the front of an envelope. Early theoretical work by Alan Turing and John von Neumann hinted that this might be possible: each part of a network could guess the best way to route a message past any damaged area until it eventually reached its goal. In theory, such a system might keep operating even when others lay in tatters.
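  To make the idea concrete, here is a minimal sketch (mine, not the book's) of how a message carrying only its destination address can thread its way across a redundant mesh, detouring around damaged nodes. The toy network, node names, and the route function are invented for illustration, and real packet switching involves far more machinery; the point is simply that no central operator plans the path.

```python
from collections import deque

def route(network, source, dest, damaged):
    """Find any surviving path from source to dest, ignoring damaged nodes.

    `network` maps each node to the set of its directly linked neighbors.
    The search spreads outward one hop at a time (breadth-first), mimicking
    a message that carries its destination address and is relayed by every
    working node until some copy of it arrives.
    """
    frontier = deque([[source]])
    seen = {source}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == dest:
            return path                      # a surviving route was found
        for neighbor in network[node]:
            if neighbor not in damaged and neighbor not in seen:
                seen.add(neighbor)
                frontier.append(path + [neighbor])
    return None                              # the network is truly severed

# A small redundant mesh: the extra links mean no single node is indispensable.
mesh = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "E"},
    "D": {"B", "E"},
    "E": {"C", "D"},
}

print(route(mesh, "A", "E", damaged=set()))    # ['A', 'C', 'E']
print(route(mesh, "A", "E", damaged={"C"}))    # detours: ['A', 'B', 'D', 'E']
```

  Knock out node "C" and the same message still gets through, just along a longer path, which is exactly the property Baran was after.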

  In retrospect, the advantages of Baran’s insight seem obvious. Still, it remains a wonder that the Pentagon actually went ahead with experiments in decentralized, autonomous message processing. Certainly the Soviets, despite having excellent mathematicians, never made a move to surrender hierarchical control. The image of authorities ceding mastery to independent, distributed “network nodes” contradicts our notion of how bureaucrats think. Yet this innovative architecture was given support at the highest levels of the U.S. establishment.

  Why did generals and bureaucrats consent to establish a system that, by its nature, undermines rigid hierarchical authority? Whenever I ask this question, modern Internet aficionados answer that “they must not have realized where this would lead.” But even early versions of the Internet showed its essential features: hardiness, flexibility, diversity, and resistance to tight regulation. A more reasonable hypothesis may be that some of those who consented to creating the nascent Internet were influenced by Vannevar Bush, and had an inkling that they were midwifing something that might ultimately distribute authority rather than concentrate it. The critical moment came when a decision was made to let private networks interconnect with the government’s system. Steve Wolff of the National Science Foundation presided over this delicate era, as systems like Uunet, Csnet, and the anarchic Usenet linked up, taking matters beyond the point of no return.

  Whether it was fostered by visionary thinking or pure serendipity, the chief “designer” of this astonishingly capable and flexible system has clearly been the system itself. Some even say it illustrates the post-Darwinian principle of “pre-adaptation,” under which traits that served an organism for one purpose may emerge later as the basis for entirely new capabilities. (For example, the fins of some fish later became adapted as legs.) Indeed, the high autonomy of the Internet’s many segments might be viewed as letting each node create its own micro-ecology of users, programs, and services. These then transact with others in ways that start to resemble the stochastic and competitive behaviors of organic life. None of it might have been possible in a hierarchically designed system.

  Putting aside such extravagant speculations, we do know that a crucial decision to seek robustness, even at the cost of classical security and control, was to have consequences far beyond those early worries about nuclear conflict. In a great paradox of our time, the deep rifts dividing humanity during the Cold War ultimately led to a supremely open and connecting system. The same traits responsible for the Internet’s hardiness in the face of physical destruction seem also to protect its happy chaos against attempts to impose rigorous discipline, a point illustrated in a popular aphorism by John Gilmore: “The Net interprets censorship as damage, and routes around it.”

  I plan to reconnect with this thought in later chapters, for it bears directly on the issue of a transparent society. Censorship can be seen as just one particular variant of secrecy. In the long run, the Internet and other new media may resist and defeat any attempt to restrict the free flow of information. While some hope to fill the Net’s electronic corridors with anonymity and cryptic messages, they might find they are ultimately thwarted by the nature of the thing itself.

  From humble beginnings, mighty entities grow. The early work of pioneers such as Vinton Cerf and Robert Kahn proliferated rapidly. After the first crude network came online, more agencies tied into the embryonic ARPANET. The National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and many universities added their own innovative structures. E-mail thronged alongside more formal data streams, as workers exchanged official and private messages, or set up niches of “cyberspace” to explore ideas beyond the limits of their study grants. Off-duty techs played midnight chess with colleagues half a world away. Discussion circles, or newsgroups, staked out a territory called Usenet, a vast informal zone where interested parties could roam at will, exchanging information, rumors, and argument. Official authority was never very clear on the growing Internet. In lieu of some rigid, controlling agency, ad hoc committees achieved consensus on standard communications protocols, such as the system of address designations assigned to each linked computer node.

  Word eventually spread among people outside government and academe. Businesses and private citizens heard about a universe of sophisticated wonders that had been created by scholars, and clamored to be let in. Companies formed to act as gateways to the data cosmos. The Net explosion began. And with it, proclamations of a new age for humankind.

  PROJECTIONS OF CYBERNETIC PARADISE

  Amid speculative talk of 1,500 channels on your television set, interactive movies, brain-to-computer links, and virtual reality, one can lose track of which predictions are tangible and which seem more like “vapor.” Some very smart people can get swept up by hyperbole, as when John Perry Barlow, a cofounder of the Electronic Frontier Foundation, declared the Internet “the most important human advancement since the printing press.” Barlow later recanted, calling it simply the most important discovery since fire. Nor was he the sole prophet acclaiming an egalitarian realm of unlimited opportunity for all, just around the corner.

  As the number of users grows geometrically, some anticipate that by 2008 the Net might encompass the entire world population. In his 1993 book The Virtual Community, Howard Rheingold called for redefining the word community, since in the near future each sovereign individual may be able to sift among six or more billion souls, sorting by talent or avocation to find those compatible for consorting with at long range, via multimedia telepresence, in voluntary associations of shared interest. No longer will geography or birth-happenstance determine your friendships, but rather a natural affinity of passions and pastimes.
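  A back-of-the-envelope sketch shows why such forecasts did not seem absurd at the time: under steady doubling, the gap closes within a handful of years. The figures below (roughly 100 million users in the late 1990s, a world population of about 6 billion) are my assumptions for illustration, not numbers from the text.

```python
def years_to_reach(start_users, target_users, doubling_period_years):
    """Count how many years of repeated doubling it takes to reach a target."""
    years, users = 0, start_users
    while users < target_users:
        users *= 2
        years += doubling_period_years
    return years

# Assumed figures, not from the text: ~100 million users, ~6 billion people.
print(years_to_reach(100e6, 6e9, 1))  # 6 years if the user base doubles annually
print(years_to_reach(100e6, 6e9, 2))  # 12 years if it doubles every two years
```

  Of course, real growth curves flatten out long before they swallow the planet; the sketch only shows how quickly geometric reasoning reaches planetary numbers.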

  Some pundits emphasize transnational features of an electronic world, predicting the end of the nation state. (See “A Withering Away?” after chapter 9.) Others proclaim the Internet a modern oracle, enabling simple folk to query libraries, databases, political organizations, or even corporate and university researchers, at last breaking the monopoly of “experts” and empowering multitudes with the same information used by the decisionmaking class. (See “A Century of Aficionados” later in this chapter.)

  Projecting this transcendent imagery forward in time, science fiction author Vernor Vinge foresees computerized media leading to a cultural-technical “singularity.” When each person can share all stored knowledge, and exchange new ideas instantly, every field may advance at exponential rates, leading to a kind of human deification, a concept elaborated by UCLA researcher Gregory Stock in Metaman: The Merging of Humans and Machines into a Global Superorganism.

  This transcendent notion of apotheosis through technology is not new. It is illustrated by Benjamin Franklin’s 1780 letter to the chemist Joseph Priestley. “The rapid progress true science now makes occasions my regretting sometimes that I was born so soon. It is impossible to imagine the heights to which may be carried, in a thousand years, the power of man over matter.”

  What might Old Ben have thought of his heirs’ accomplishments in a mere quarter of that time?

  Inevitably, all this gushing hype has led to a backlash. In a recent book, computer scientist Clifford Stoll coined the term “silicon snake oil” to describe the recent ecstatic forecasts about electronic media. Despite his background, Stoll urged skepticism toward the more extravagant arm waving of Net enthusiasts, whose high-tech razzle-dazzle may distract users from building relationships with the real people around them. Taking Stoll’s objection further, University of California Professor Philip Agre warns that each major advance of the industrial age was associated with fits of transcendentalism, in which enthusiasts rushed eagerly to blur the distinction between themselves and the machines, and then between their favorite machinery and the world. (We mentioned earlier the fervor and disappointment that accompanied first nuclear fission and then space flight.) Agre says this peculiar mental aberration most often arises in bright, excitable males who, faced with complex social problems, seem drawn to miraculous solutions tinkered out of inanimate matter. Matter that is more easily understood than cantankerous, complex human beings.

  At the opposite extreme, we see waves of Luddite reaction, featuring antitechnology tirades by people who see devils (or at least soullessness) in the machines and call fervently for a nostalgic return to “better” days.

  Others foresee a danger of societal collapse resulting from our fragile, computer-dependent civilization falling prey either to some unexpected software glitch or to deliberate sabotage. This new era is especially rattling to the military and intelligence communities, for whom strict control of information used to be justified by a life-or-death need to retain their competitive advantage. The cultural transformation that these communities face in coming years will be all the more difficult, says strategic analyst Jeffery Cooper, “because the military must build its core competencies and forge its competitive advantages from tools that may be available to all.” (See chapter 10.)

  Is it possible to make sense out of these contrasting views—from brilliant to gloomy—about our electrified future? History certainly does warn us to be wary whenever a new communication technology arrives on the scene. While some seek to uplift humanity, others skillfully seize each innovation, applying it to the oldest of all magical arts—manipulating others.

  Take the introduction of Gutenberg’s working printing press, which ended the medieval control over literacy long held by the church and nobility. This invention liberated multitudes to shatter old constraints and sample provocative ideas. It also freed demagogues to cajole with new slanders, spread effectively via the printed word. According to James Burke, author of Connections, the chief short-term beneficiaries of printing turned out to be religious factionalism and nationalism. The following two centuries illustrated this, as Europe drifted into waves of unprecedentedly savage violence.

  More recently, in Germany of the 1930s, Junker aristocrats thought they could control the firebrand Adolf Hitler because they owned the newspapers. They were mistaken. Nazis went around the press, reaching vastly greater masses with the hypnotizing power of radio and loudspeakers. To people freshly exposed, without the technological immunization that often comes with familiarity, these new media seemed to amplify a skilled user like Hitler, making him appear larger than life.

  New communications technologies also have the potential to undermine authority. In prerevolutionary Iran, followers of the Ayatollah Khomeini bypassed the shah’s monopoly over radio and television by smuggling into the country one audiocassette per week. Khomeini’s sermon, soon duplicated a thousandfold, was played at Friday services in countless mosques, preparing for the storm to come.

  Fax machines came close to serving the same insurrectionary function in China, during the Tiananmen uprising. A few years later, fax and Internet connections helped foil the 1991 attempted coup in the last days of the Soviet Union. Members of the old guard briefly tried to reinstate rigorous one-party rule by seizing central organs of communication, but found themselves neatly bypassed by new media.

  Some effects go far beyond the merely political. Television plays no favorites, serving tyrants and educators alike, carrying both culture and propaganda, truth and lies, pandered drivel and deep insights. Innumerable nature programs have given urban citizens a better feel for ecological matters than their farmer ancestors who actually toiled on the land, thus boosting support for farsighted environmental policies. On the other hand, overuse of television effectively shortens the active life span of a sedentary “couch potato” by more years than he saves by voting for clean air laws.

  So it often goes with the fruits of science. New communication arts prove at once both empowering and potentially manipulative of the common man or woman. As for the vaunted Internet, both messianic utopians and pessimistic critics may be missing the point. Amid all the abstract theorizing, why aren’t we asking important, pragmatic questions, such as what will happen when personal computers become so cheap that citizens of the poorest Third World nations will have readier access to data than food or clean water?

  We are bound for interesting times.

  Nothing makes me happier on a sunny day than to think of how wrong I’ve been in the past. The old fears of people like me that technology leads to totalitarianism and cultural sterility do not come true. The computer, the fax, the car phone, the answering machine, all seem to lead to a more civilized life, affording us greater privacy and freedom, not less.

  GARRISON KEILLOR

  A PASSION TO BE DIFFERENT

  New media are important to the transformation that is taking place. But all by themselves, such technologies as the Internet will not determine our fate. Whether they wind up enhancing freedom or become all-seeing tools for Orwellian oppression will largely depend on the attitudes that prevail among millions of our fellow citizens. And so next we’ll explore whether there is both the will and the mettle to maintain an open society.

  Throughout recorded history, countless human clans and nations exhibited a tendency toward xenophobia. Ancient myths and legends are filled with warnings against strangers, from Little Red Riding Hood (don’t tell your business to hairy beasts you meet in the woods), to the tale of Coyote and the Green Buffalo (watch out for tricks pulled by the tribe over the hill). In many tongues the word or phrase that meant “human being”—someone whose violent killing would be murder—was reserved for initiated members of the tribe. While relatively primitive technology may have kept wars somewhat less bloody in olden times, chronicles show that they were also much more frequent.

  None of this should be surprising. The chief factor governing how well people tolerate outsiders appears to be their ambient level of fear. Under ceaseless threat of starvation or invasion, it was natural for our ancestors to be suspicious of strangers—and toward those within a community who did not act in normal, predictable ways. Kings often found it useful to bolster this reflex, fostering dread of some external or internal group to promote social cohesion and control.

  I do not say any of this to insult past cultures. Far from it. Studying them rouses poignant sympathy for people coping under hard circumstances. As mythologist Joseph Campbell pointed out, we gain insight into the human condition by studying the lore of our ancestors. Still, little good comes from romanticizing the past under a blur of nostalgia. We faced a long, hard road getting here. And along the way it was especially harsh to be a stranger—or an eccentric within one’s own culture. Being different often had dire, even fatal consequences.

  Elsewhere I have discussed a striking characteristic of our own neo-Western civilization—a salient feature of the last fifty years that has so far escaped much comment—arising from the unprecedented wealth and peace lately experienced by certain parts of the world. In large sections of the Americas, western Europe, Japan, Australia, and some other lucky regions, several generations have grown up with (on average) very little experience of hunger or foreign invasion. Despite justified continuing public angst over poverty, crime, and external threats, our day-to-day fear level is arguably the lowest experienced by any mass polity since humans first strode upright.

  This may seem at odds with the unease depicted in newspapers and on television. Villainy and violence are still widespread. (We’ll see how a transparent society may bring these rates down.) Yet most historical accounts show that citizens of other days were accustomed to far greater levels of daily disorder and death. Revisionist anthropologists have shown, for instance, that on a per capita basis even the !Kung bushmen of the Kalahari Desert, renowned for their gentleness, experience a statistically higher intratribal homicide rate than denizens of downtown Detroit. Some researchers estimate that 20 to 30 percent of males in preindustrial societies died at the hands of other males.