

  For Goldblatt, the answer was clear. He would provide his daughter with every opportunity to compete with other children, through performance enhancements like a motorized wheelchair and the best computers available, with everything in her bedroom remotely controlled. This vision carried over to DARPA, where, as director of DSO, Goldblatt would oversee performance enhancements for the warfighter on a national scale, spending over $100 million on programs to reengineer the twenty-first-century soldier fighting on foot.

  Asked about that morally dangerous path, Goldblatt answers with a question of his own: “How is having a cochlear implant that helps the deaf hear any different than having a chip in your brain that could help control your thoughts?” When questioned about unintended consequences, like controlling humans for nefarious ends, Goldblatt insists, “There are unintended consequences for everything.”

  It was June 2001 and the new president, George W. Bush, had been in office for six months. The biological weapons threat continued to interest the public and was regularly featured in the news. And war games, including the computer-based SIMNET, had become an integral part of national security strategizing. But in some arenas, old-school role-playing prevailed. In the third week of June, a group of fifteen former senior officials and two journalists assembled at Andrews Air Force Base, outside Washington, D.C., to carry out a script-based, asymmetrical attack simulation called Dark Winter. In the fictional game scenario, the nation has been pummeled into chaos after terrorists attack Oklahoma with a biological weapon containing smallpox. The Dark Winter exercise involved three National Security Council meetings taking place over a period of two weeks. In the war game, the National Security Council members were role-played by former officials. The onetime U.S. senator and chairman of the Senate Armed Services Committee, Sam Nunn, played Dark Winter’s fictional president; the former special counselor to the president and White House communications director, David Gergen, played the national security advisor; a former vice chief of staff of the U.S. Army, General John H. Tilelli, played the chairman of the Joint Chiefs of Staff; the former director of the CIA, James Woolsey, played Dark Winter’s fictional CIA director; and the sitting governor of Oklahoma, Frank Keating, played the fictional governor of Oklahoma. Dark Winter’s war game plot revolved around how the players would respond to a hypothetical biological weapons attack.

  First, the game players were “briefed” on background events. “Last month Russian authorities, with support from the FBI, arrested Yusuuf Abdul Aziiz, a known operative in Al-Qaida and a close personal friend and suspected senior lieutenant of Usama bin Laden,” read the Dark Winter script. “Yusuuf was caught in a sting operation that had been developing during the last year. He was attempting to acquire 50 kilograms of plutonium and was also attempting to arrange the purchase of several biological pathogens that had been weaponized by the Soviet Union.”

  The war game scenario also involved Iraq. Dark Winter game players were told that two days earlier, “Iraqi forces in the South of Iraq moved into offensive positions along the Kuwaiti border,” just as they had done in real life in 1990, which set the Gulf War in motion. Also on background, the war gamers learned about domestic conditions: “US Economy is in good shape. Polls show a slim majority of Americans oppose a major deployment of US troops to the Persian Gulf. Most Americans agree that Saddam’s Iraqi regime represents a real threat to stability in the region and to American interests.” It is worth noting that in real life, the first two fictional statements were based in fact, but the third one, that most Americans saw Saddam’s Iraq as a threat, was not a fact. What was factual was that the man who had been secretary of defense during the Gulf War, Dick Cheney, was now the vice president of the United States, and he saw Saddam’s Iraq as a threat. As for Dark Winter, the game began when the fictional governor of Oklahoma informed the National Security Council that his state had been attacked with a smallpox weapon.

  Over the course of the fourteen days, for the game players, the scenario went from bad to worse to calamitous. Entire states shut down, chaos reigned, massive traffic jams ensued, civil liberties were suspended, many banks and post offices closed. As vaccines ran out, “angry citizens denounce[d] the government’s failure to stop the smallpox epidemic.” Civilians started shooting policemen. The National Guard started shooting civilians. Finally, a fictional “prominent Iraqi defector claim[ed] that Iraq arranged the bioweapons attack on the US through intermediaries,” most likely Yusuuf Abdul Aziiz, the fictional deputy of the real Osama bin Laden.

  In the Dark Winter war game, 3 million Americans died of smallpox. As a result, a fictional CNN-Gallup poll revealed that 48 percent of Americans wanted the president to consider using nuclear weapons in response. The game ended there.

  One month later, on July 23, 2001, former chairman of the Senate Armed Services Committee Sam Nunn—the man who played Dark Winter’s fictional president—told Congress during a House hearing on combating biological terrorism that the real emergency revealed in the war game was just how unprepared America was to handle an actual biological weapons attack.

  “I was honored to play the part of the President in the exercise Dark Winter,” Nunn told Congress. “You often don’t know what you don’t know until you’ve been tested,” he said. “And it’s a lucky thing for the United States that, as the emergency broadcast network used to say, ‘this is just a test, this is not a real emergency.’ But Mr. Chairman, our lack of preparation is a real emergency.”

  No one said, “But Dark Winter was only a game.”

  Lines were being blurred. Games were influencing reality. Man was merging with machine. What else would the technological advances of the twenty-first century bring?

  In August 2001, scientists from Los Alamos and the Lawrence Livermore National Laboratory—renamed in honor of its founder, Ernest O. Lawrence—traveled to the West Desert Test Center at Dugway Proving Ground in Utah. There, inside the Special Programs Division, the scientists tested a new sensor system designed to detect killer pathogens such as anthrax and botulinum toxin. The name of the program was the Biological Aerosol Sentry and Information Systems, or BASIS. It was hailed as a plan for “guarding the air we breathe.” In truth, all BASIS could do was “detect to treat.” Unlike chemical weapons, the presence of which could now be identified before release through an advanced technology called acoustic detection, biological weapons could be detected only after the fact. Even worse, the sensor systems were notorious for giving false alarms; the filter system was flawed. In open literature, Livermore acknowledged that false alarms were a serious concern but did not admit that their own problem was widespread. “Any technology that reports a terrorist incident where none exists may induce the very panic and social disruption it is intended to thwart. Therefore, the rate of false-positive alarms must be zero or very nearly so.”

  By the summer of 2001, Vice President Cheney was becoming increasingly concerned about a possible biological weapons attack directed at the White House. Plans were put in place to install Livermore’s BASIS system throughout the White House and its grounds.

  In the summer of 2001, DARPA’s biological weapons defense initiative was one of the fastest-growing programs in the defense sciences world. A decade earlier, before the defection of the Soviet scientists, the threat was not even known to exist. Now the industry was a several-hundred-million-dollar-a-year field.

  Programs were largely speculative: as of yet, in a conundrum that ran parallel to ARPA’s first quandary, ballistic missile defense, there was no way to defend against a biological weapons attack. Only if there were a terrorist attack involving the release of a deadly pathogen on American soil could biological weapons defense truly be put to the test. Defensive programs and countermeasure programs would then skyrocket. Which is exactly what happened next.

  PART IV

  THE WAR ON TERROR

  CHAPTER NINETEEN

  Terror Strikes

  Early on the morning of September 11, 2001, twenty-four-year-old David A. Bray was in Atlanta, at the U.S. Centers for Disease Control (CDC), for a briefing with the Laboratory Response Network for Bioterrorism. Bray was the information technology chief for the Bioterrorism Preparedness and Response Program at CDC, a program established by President Clinton under his U.S. policy on counterterrorism. It was Bray’s job to make sure people got good information when and as they needed it. With so much information out there, filtering out what was important was key. A man cannot drink from a fire hose. The meeting on September 11 was supposed to start at 9:00 a.m.

  “When I signed up for work in bioterrorism I thought to myself, what kind of world requires my job?” asks Bray. That spring, he says, “we had received a memo that said, ‘Be on alert for Al Qaeda activity June through August 2001.’ It specifically ended in August.”

  It was September now, and Bray and his team were getting ready for the Bioterrorism Preparedness and Response team briefing when an airplane hit the North Tower of the World Trade Center. “We got the news. Details were sketchy.” At 9:03, he recalls, “when the second airplane hit, we definitely knew it was a terrorism event.”

  Many of the CDC employees were dispatched elsewhere. “A large group started piling computers into cars and were sent to an undisclosed off-site bunker,” says Bray, explaining, “We were concerned that a second event would involve bioterrorism.”

  David Bray has always been a remarkably focused person. His area of expertise is informatics, the science of how information is gathered, stored, and retrieved. The son of a minister and a teacher, Bray started winning national science prizes in middle school. By age fifteen, he had his first job with the federal government, with the Department of Energy at its Continuous Electron Beam Accelerator Facility in Newport News, Virginia.

  “I was trying to understand the universe, and the lab was looking for new energy sources,” Bray says of his youth, when he had to get a special permit to work for the Department of Energy so as to comply with federal laws regarding child labor. By the time Bray was sixteen, he had been written up in the Washington Post for inventing a prizewinning computer program that predicted how best to clean up an oil spill. At age seventeen Bray was working for the Department of Defense. Before he had turned twenty-one, he had added jobs with the National Institutes of Health and the Department of Agriculture to his résumé. In between jobs he attended college, studying science, biology, and journalism. One summer he worked in South Africa as a health reporter for the Cape Argus News. What interested Bray most was information. How people get information and what they do with the information they have.

  As a reporter covering the AIDS crisis in South Africa, Bray observed how informed people were still willing to ignore dangers right in front of them. In 1997 more than one out of six people in South Africa had HIV, the virus that causes AIDS, and the epidemic was spreading out of control. Bray went around the countryside talking to South African students about the risks they faced, and how easily they could protect themselves with prophylactics. “They knew that they should wear protection,” Bray says, “but I asked them if they would wear protection, and they said they would not.” This was hardly shocking. Bray said many Americans had the same attitude: “It’s not going to happen to me.” He began thinking about how to get people to follow the best course of action, at least as far as public health goes, based on the information they have. At the Centers for Disease Control, he found a place where he could focus on this idea.

  The terrorist attacks on the morning of September 11 created what Bray calls a “hyper-turbulent environment.” In this kind of fear-fueled setting, “knowledge is the most strategically significant resource of an organization,” says Bray. Not more knowledge but better knowledge. Good, clear, factual information. Data about what is going on. Immediately after 9/11, says Bray, “we began reaching out to fifty states. We worked from the idea that the second event would be a biological event. We wanted to have information channels [open] with all fifty states” in the event that a bioterrorism attack were to occur.

  DARPA had been sponsoring a surveillance program called Bio-ALIRT, for Bio-Event Advanced Leading Indicator Recognition Technology, an information-based technology program designed to enable computers to quickly recognize a bioweapons attack. To get a computer to “recognize” a bioweapons attack from the data was an extraordinary enterprise, and by 9/11 the program was not yet up to the task.

  Originally designed to protect troops on foreign soil, the program had recently expanded with plans for a national surveillance program of U.S. civilians, using an individual’s medical records. The ramifications of collecting medical information on Americans for purposes of national security, but without their knowledge or consent, were profound. The Bio-ALIRT program fell under an emerging new industry called “biosurveillance,” a contentious concept that has largely avoided public scrutiny. DARPA’s military partner in this effort was the Walter Reed Army Institute of Research. Its civilian partners were the Johns Hopkins Applied Physics Laboratory, the University of Pittsburgh and Carnegie Mellon University, and the Stanford University Medical Informatics group. DARPA’s defense contractor partners were General Dynamics Advanced Information Systems and the IBM Corporation.

  The science behind Bio-ALIRT was intended to determine whether or not “automated detection algorithms” could identify an outbreak in either a bioweapons attack or a naturally occurring epidemic, like bird flu. Never mind the people—the doctors, nurses, and clinicians—reporting from the field. The idea behind Bio-ALIRT was to take human “bias” out of the equation and allow computers to do the job faster. As part of Bio-ALIRT, supercomputers would scan vast databases of medical records, in real time, as doctors entered data. Simultaneously, and also as part of Bio-ALIRT, supercomputers would scan sales at pharmacies of both prescription and nonprescription drugs, in real time. A privately held company called Surveillance Data, Inc., was hired to provide “de-identified” outpatient data, meaning that Surveillance Data, Inc., would “scrub” the medical information of personal details, such as names, social security numbers, and home addresses. It is unclear how much medical history was considered personal, or how the Bio-ALIRT supercomputers were to differentiate between chronic medical conditions and new symptoms.

  There were many flaws in the system, privacy issues among them, but one flaw rendered the program all but worthless. Bio-ALIRT’s automated detection algorithms—the software that told the supercomputers what to look for—were based on data from the World Health Organization’s International Classification of Diseases, ninth revision, known as ICD-9. But the biological weapons that were the most deadly—the chimera viruses and the recombinant pathogens like the ones the Soviet defectors Ken Alibek, Vladimir Pasechnik, Sergei Popov, and others had been working on at Biopreparat—were neither listed in nor identifiable by ICD-9. If Bio-ALIRT programs had been further along than in their earliest stages, the CDC could potentially have benefited from the system. But on 9/11, the biosurveillance industry was still in its infancy, and the Laboratory Response Network for Bioterrorism, which Bray led as information chief, had to rely on humans in all fifty states for receiving information. Bray and his team had their work cut out for them in this hyper-turbulent environment. Bray welcomed the challenge.

  “It was a very long day,” recalls Bray, who was personally doing the work that one day a computer might do.

  On the morning of September 11, 2001, when the first airplane hit the North Tower of the World Trade Center, at 8:46 a.m., Vice President Dick Cheney was sitting in his office in the West Wing of the White House. He immediately focused his attention on the television screen. “It was a clear day, there were no weather problems, and then we saw the second airplane hit,” Cheney recalled in his memoir. “At that moment, you knew this was a deliberate act. This was a terrorist act.”

  Vice President Cheney called President Bush, who was in Sarasota, Florida, visiting an elementary school. Vice President Cheney was on the phone with a presidential aide in Florida when his door burst open and a Secret Service agent rushed in. “He grabbed me and propelled me out of my office, and into the underground shelter in the White House,” Cheney told CNN’s John King. Later that same night, the Secret Service transferred the vice president to a more secure underground location outside the capital. En route from the White House in a helicopter, Cheney asked to view the damage to the Pentagon, which had been struck by a third plane at 9:37 a.m. “As we lifted off and headed up the Potomac, you could look out and see the Pentagon, see that black hole where it’d been hit,” Cheney recalled. For the first time in the Pentagon’s history, the very symbol of American military power stood broken and exposed with a huge gash in one of its five sides.

  Cheney was helicoptered to an “undisclosed location,” which was Site R, the underground bunker facility inside the Raven Rock Mountain Complex seventy-five miles from the White House, near Camp David. The location was disclosed in 2004 by journalist James Bamford. This was the Cold War–era underground command center that had caused President Eisenhower so much grief back in 1956, during the heated post–Castle Bravo debate over civil defense. Site R was originally designed to be the place where the president would be taken in the event of a nuclear attack. Eisenhower had struggled with the concept throughout his presidency, mindful that it was designed to provide safety for the president and his close advisors during a time when the very population the president was sworn to protect would be most vulnerable, exposed, and unaware.