

  After Goldblatt left the agency, DARPA researchers identified in scientific journals a series of “groundbreaking advances” in “Man/Machine Systems.” In 2014 DARPA program managers stated that “the future of brain-computer interface technologies” depended on merging all the technologies of DARPA’s brain programs, the noninvasive and the invasive ones, specifically citing RAM, REPAIR, REMIND, and SUBNETS. Was DARPA conducting what were, in essence, intelligence, surveillance, and reconnaissance missions inside the human brain? Was this the long-sought information that would provide DARPA scientists with the key to artificial intelligence? “With respect to the President’s BRAIN initiative,” wrote DARPA program managers, “novel BCI [brain-computer interface] technologies are needed that not only extend what information can be extracted from the brain, but also who is able to conduct and participate in those studies.”

  For decades scientists have been trying to create artificially intelligent machines, without success. AI scientists keep hitting the same wall. To date, computers can only obey commands, following rules set forth by software algorithms. I wondered if the transhumanism programs that Michael Goldblatt pioneered at DARPA would allow the agency to tear down this wall. Were DARPA’s brain-computer interface programs the missing link?

  Goldblatt chuckled. He’d left DARPA a decade earlier, he said, and could discuss only unclassified programs. But he pointed me in a revelatory direction. This came up when we were discussing the Jason scientists and a report they published in 2008. In this report, titled “Human Performance,” in a section called “Brain Computer Interface,” the Jasons addressed noninvasive interfaces including DARPA’s CT2WS and NIA programs. Using “electromagnetic signals to detect the combined activity of many millions of neurons and synapses” (in other words, the EEG cap) was effective in augmenting cognition, the Jasons noted, but the information gleaned was “noisy and degraded.” The more invasive programs would produce far more specific results, they observed, particularly programs in which “a micro-electrode array [is] implanted into the cortex with connections to a ‘feedthrough’ pedestal on the skull.” The Jason scientists wrote that these chip-in-the-brain programs would indeed substantially improve “the desired outcome,” which could allow “predictable, high quality brain-control to become a reality.”

  So there it was, hidden in plain sight. If DARPA could master “high quality brain-control,” the possibilities for man-machine systems and brain-computer interface would open wide. The wall would come down. The applications in hunter-killer drone warfare would potentially be unbridled. The brain chip was the missing link.

  But even the Jasons felt it was important to issue, along with this idea, a stern warning. “An adversary might use invasive interfaces in military applications,” they wrote. “An extreme example would be remote guidance or control of a human being.” And for this reason, the Jason scientists cautioned the Pentagon not to pursue this area, at least not without a serious ethics debate. “The brain machine interface excites the imagination in its potential (good and evil) application to modify human performance,” but it also raises questions regarding “potential for abuses in carrying out such research,” the Jasons wrote. In summary, the Jason scientists said that creating human cyborgs able to be brain-controlled was not something they would recommend.

  This warning echoed an earlier one the Jasons had issued during the Vietnam War, when Secretary of Defense Robert McNamara asked them to consider using nuclear weapons against the Ho Chi Minh Trail. The Jasons studied the issue and concluded it was not something they could recommend. Using nuclear weapons in Vietnam would encourage the Vietcong to acquire nuclear weapons from their Soviet and Chinese benefactors and to use them, the Jasons warned. This would in turn encourage terrorists in the future to use nuclear weapons.

  In their 2008 study on augmented cognition and human performance, the Jason scientists also said they believed that the concept of brain control would ultimately fail because too many people in the military would have an ethical problem with it. “Such ethical considerations will appropriately limit the types of activities and applications in human performance modification that will be considered in the U.S. military,” they wrote.

  But in our discussion of the Jason scientists’ impact on DARPA, Goldblatt shook his head, indicating I was wrong.

  “The Jason scientists are hardly relevant anymore,” Goldblatt said. During his time at DARPA, and as of 2014, the “scientific advisory group with the most influence on DARPA,” he said, “is the DSB,” the Defense Science Board. The DSB has offices inside the Pentagon. And where the DSB finds problems, it is DARPA’s job to find solutions, Goldblatt explained. The DSB had recently studied man-machine systems, and it saw an entirely different set of problems related to human-robot interactions.

  In 2012, in between the two Pentagon roadmaps on drone warfare, “Unmanned Systems Integrated Roadmap FY 2011–2036” and “Unmanned Systems Integrated Roadmap FY 2013–2038,” the DSB delivered to the secretary of defense a 125-page report titled “The Role of Autonomy in DoD Systems.” The report unambiguously calls for the Pentagon to rapidly accelerate its development of artificially intelligent weapons systems. “The Task Force has concluded that, while currently fielded unmanned systems are making positive contributions across DoD operations, autonomy technology is being underutilized as a result of material obstacles within the Department that are inhibiting the broad acceptance of autonomy,” wrote DSB chairman Paul Kaminski in a letter accompanying the report.

  The primary obstacle, said the DSB, was trust—much as the Jason scientists had predicted in their report. Many individuals in the military mistrusted the notion that coupling man and machine to create autonomous weapons systems was a good idea. The DSB found that resistance came from all echelons of the command structure, from field commanders to drone operators. “For commanders and operators in particular, these challenges can collectively be characterized as a lack of trust that the autonomous functions of a given system will operate as intended in all situations,” wrote the DSB. The overall problem was getting “commanders to trust that autonomous systems will not behave in a manner other than what is intended on the battlefield.”

  Maybe the commanders had watched too many X-Files episodes or seen the Terminator films one too many times. Or maybe they read Department of Defense Directive 3000.09, which discusses “the probability and consequences of failures in autonomous and semi-autonomous weapons systems that could lead to unintended engagements.” Or maybe commanders and operators want to remain men (and women), not become cyborg man-machines. But unlike the Jason scientists, the Defense Science Board advised the Pentagon to accelerate its efforts to change this attitude—to persuade commanders, operators, and warfighters to accept, and to trust, human-robot interaction.

  “An area of HRI [human-robot interaction] that has received significant attention is robot ethics,” wrote the DSB. This effort, which involved internal debates on robot ethics, was supposed to foster trust between military personnel and robotic systems, the DSB noted. Instead it backfired. “While theoretically interesting, this debate on functional morality has had unfortunate consequences. It increased distrust in unmanned systems because it implies that robots will not act with bounded rationality.” The DSB advised that this attitude of distrust needed to change.

  Perhaps it’s no surprise that DARPA has a program on how to manipulate trust. During the war on terror, the agency began working with the CIA’s own DARPA-like division, the Intelligence Advanced Research Projects Activity, or IARPA, on what it calls Narrative Networks (N2), to “develop techniques to quantify the effect of narrative on human cognition.” One scientist leading this effort, Dr. Paul Zak, insists that what DARPA and the CIA are doing with trust is a good thing. “We would all benefit if the government focused more on trusting people,” Zak told me in the fall of 2014, when I visited his laboratory at Claremont Graduate University in California. When I asked Zak if the DARPA research he was involved in was more likely being used to manipulate trust, Zak said he had no reason to believe that was correct.

  Paul Zak is a leader in neuroeconomics, a field that studies the neurochemical roots of trust-based economic decision making and its ties to morality. Zak has a Ph.D. in economics and postdoctoral training in neuroimaging from Harvard. In 2004 he made what he describes as a groundbreaking and life-changing discovery. “I discovered the brain’s moral molecule,” Zak says, “the chemical in the brain, called oxytocin, that allows man to make moral decisions [and that] morality is tied to trust.” In no time, says Zak, “all kinds of people from DARPA were asking me, ‘How do we get some of this?’” Zak also fielded interest from the CIA. For DARPA’s Narrative Networks program, Zak has been developing a method to measure how people’s brains and bodies respond when oxytocin, the “moral molecule,” is released naturally.

  Researchers at the University of Bonn, not affiliated with DARPA, have taken a different approach in their studies of oxytocin. In December 2014, these researchers published a study on how the chemical can be used to “erase fear.” Lead researcher Monika Eckstein told Scientific American that her goal in the study was to administer oxytocin into the noses of sixty-two men, in hopes that their fear would dissipate. “And for the most part it did,” she said. A time may not be far off when fear can be erased.

  Why is the Defense Science Board so focused on pushing robotic warfare on the Pentagon? Why force military personnel to learn to “trust” robots and to rely on autonomous robots in future warfare? Why is the erasure of fear a federal investment? The answer to it all, to every question in this book, lies at the heart of the military-industrial complex.

  Unlike the Jason scientists, the majority of whom were part-time defense scientists and full-time university professors, the majority of DSB members are defense contractors. DSB chairman Paul Kaminski, who also served on President Obama’s Intelligence Advisory Board from 2009 to 2013, is a director of General Dynamics, chairman of the board of the RAND Corporation, chairman of the board of HRL (the former Hughes Research Labs), chairman of the board of Exostar, chairman and CEO of Technovation, Inc., trustee and advisor to the Johns Hopkins Applied Physics Lab, and trustee and advisor to MIT’s Lincoln Laboratory—all organizations that build robotic weapons systems for DARPA and for the Pentagon. Kaminski, who also serves as a paid consultant to the Office of the Secretary of Defense, is but one example. Kaminski’s fellow DSB members, roughly fifty in all, serve on the boards of defense contracting giants and weapons laboratories including Raytheon, Boeing, General Dynamics, Northrop Grumman, Bechtel, Aerospace Corporation, Texas Instruments, IBM, Lawrence Livermore National Laboratory, Sandia National Laboratories, and others.

  One might look at DARPA’s history and say that part of its role—even its entire role—is to maintain a U.S. advantage in military technology, in perpetuity. Former DARPA director Eberhardt Rechtin clearly stated this conundrum of advanced technology warfare when he told Congress, back in 1970, that it was necessary to accept the “chicken-and-egg problem” that DARPA would always face: the agency must forever conduct “pre-requirement research,” because by the time a technological need arises on the battlefield, it becomes apparent, too late, that the research should already have been done. DARPA’s contractors are vital parts of a system that allows the Pentagon to stay ahead of its needs, and to steer revolutions in military affairs. To dominate in future battles, never to be caught off guard.

  One might also look at DARPA’s history, and its future, and say that at some point the technology may itself outstrip DARPA as it is unleashed into the world. This is a grave concern of many esteemed scientists and engineers.

  A question to ask might be, how close to the line can we get and still control what we create?

  Another question might be, how much of the race for this technological upper hand is now based in the reality that corporations are very much invested in keeping DARPA’s “chicken-and-egg” conundrum alive?

  This is what President Eisenhower warned Americans to fear when he spoke of the perils of the military-industrial complex in his farewell speech in January 1961. “We have been compelled to create a permanent armaments industry of vast proportions,” the president said.

  In the years since, the armaments industry has only grown bigger by the decade. If DARPA is the Pentagon’s brain, defense contractors are its beating heart. President Eisenhower said that the only way Americans could keep defense contractors in check was through knowledge. “Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defense with our peaceful methods and goals, so that security and liberty may prosper together.”

  Anything less, and civilians cede control of their own destiny.

  The programs written about in this book are all unclassified. DARPA’s highest-risk, highest-payoff programs remain secret until they are unveiled on the battlefield. Given how far along DARPA is in its quest for hunter-killer robots, and for a way to couple man with machine, perhaps the most urgent question of all might be whether civilians already have.

  Can military technology be stopped? Should it be? DARPA’s original autonomous robot designs were developed decades ago, in 1983, as part of the agency’s Smart Weapons Program. The program was called “Killer Robots,” and its motto offered prescient words: “The battlefield is no place for human beings.”

  This book begins with scientists testing a weapon that at least some of them believed was an “evil thing.” In creating the hydrogen bomb, scientists engineered a weapon against which there is no defense. With regard to the thousands of hydrogen bombs in existence today, the mighty U.S. military relies on wishful optimism—hope that the civilization-destroyer is never unleashed.

  This book ends with scientists inside the Pentagon working to create autonomous weapons systems, and with scientists outside the Pentagon working to spread the idea that these weapons systems are inherently evil things, that artificially intelligent hunter-killer robots can and will outsmart their human creators, and against which there will be no defense.

  There is a perilous distinction to call attention to: when the hydrogen bomb was being engineered, the military-industrial complex—led by defense contractors, academics, and industrialists—was just beginning to exert considerable control over the Pentagon. Today that control is absolute.

  Another difference between the creation of the hydrogen bomb in the early 1950s and the accelerating development of hunter-killer robots today is that the decision to engineer the hydrogen bomb was made in secret, whereas the decision to accelerate hunter-killer robots, while not widely known, is not secret. In that sense, destiny is being decided right now.

  The 15-megaton Castle Bravo thermonuclear bomb, exploded in the Marshall Islands in 1954, was the largest nuclear weapon ever detonated by the United States. If unleashed on the eastern seaboard today, it would kill roughly 20 million people. With this weapon, authorized to proceed in secret, came the certainty of the military-industrial complex and the birth of DARPA. (U.S. Department of Energy)

  An elite group of weapons engineers rode out the Castle Bravo thermonuclear explosion from inside this bunker, code-named Station 70, just nineteen miles from ground zero. (The National Archives at Riverside)

  In the 1950s, John von Neumann—mathematician, physicist, game theorist, and inventor—was the superstar defense scientist. No one could compete with his brain. (U.S. Department of Energy)

  Rivalry spawns supremacy, and in the early 1950s, a second national nuclear weapons laboratory was created to foster competition with Los Alamos. Ernest O. Lawrence (left) and Edward Teller (center) cofounded the Lawrence Livermore National Laboratory. Herb York (right) served as its first director. In 1958, York became scientific director of the brand-new Advanced Research Projects Agency (ARPA), later renamed DARPA. (Lawrence Livermore National Laboratory)

  In his farewell address to the nation in January 1961, President Eisenhower warned the American people about the “total influence” of the military-industrial complex. The warning was a decade too late. (Dwight D. Eisenhower Presidential Library)

  Edward Teller and Herb York—shown here with Livermore colleague Luis Alvarez—envisioned a 10,000-megaton nuclear weapon designed to decimate and depopulate much of the Soviet Union. (Lawrence Livermore National Laboratory)

  Harold Brown was twenty-four years old when he was put in charge of thermonuclear bomb work at Livermore. He followed Herb York to the Pentagon and oversaw ARPA weapons programs during the Vietnam War. In 1977, Brown became the first scientist to serve as secretary of defense. (U.S. Department of Defense)

  Physicist and presidential science advisor Marvin “Murph” Goldberger cofounded the Jason advisory group in 1959; it was paid for solely by ARPA until the end of the Vietnam War. The Jasons, still at work today, are considered the most influential and secretive defense scientists in America. Photographed here in his home in 2013, at age ninety, Goldberger examines a photo of himself with President Johnson. (Author’s collection)

  Senator John F. Kennedy visiting Senator Lyndon B. Johnson at the LBJ ranch in Texas. Each man, as President, would personally authorize some of the most controversial ARPA weapons programs of the Vietnam War. (Lyndon B. Johnson Presidential Library, photo by Frank Muto)

  In 1961 Kennedy sent Johnson to Vietnam to encourage South Vietnamese President Ngo Dinh Diem to sign off on ARPA’s weapons lab in Saigon. In this photograph are (roughly front to back) Ngo Dinh Diem, Lady Bird Johnson, Madame Nhu, Lyndon Johnson, Nguyen Ngoc Tho, Jean Kennedy Smith, Stephen Smith, and Ngo Dinh Nhu, the head of the secret police. In 1963, Diem and Nhu were murdered in a White House–approved coup d’état. (Lyndon B. Johnson Presidential Library, photo by Republic of Vietnam)