

  For almost a decade Flowers had been fascinated by electronic circuits made with vacuum tubes, which he and other Brits called “valves.” As an engineer with the Post Office’s telephone division, he had created in 1934 an experimental system that used more than three thousand tubes to control connections among a thousand phone lines. He also pioneered the use of vacuum tubes for data storage. Turing enlisted Flowers to help on the bombe machines and then introduced him to Newman.

  Flowers realized that the only way to analyze the German encrypted streams quickly enough was to store at least one of them in the internal electronic memory of a machine rather than trying to compare two punched paper tapes. This would require 1,500 vacuum tubes. At first the Bletchley Park managers were skeptical, but Flowers pushed ahead, and by December 1943—after only eleven months—he produced the first Colossus machine. An even bigger version, using 2,400 vacuum tubes, was ready by June 1, 1944. Its first decoded intercepts supported other sources informing General Dwight Eisenhower, who was about to launch the D-Day invasion, that Hitler was not ordering extra troops to Normandy. Within a year, eight more Colossus machines were produced.

  This meant that well before ENIAC, which did not become operational until November 1945, the British code breakers had built a fully electronic and digital (indeed binary) computer. The second version, in June 1944, was even capable of some conditional branching. But unlike ENIAC, which had ten times the number of tubes, Colossus was a special-purpose machine geared for code breaking, not a general-purpose computer. With its limited programmability, it could not be instructed to perform all computational tasks, the way that (in theory) ENIAC could.

  SO, WHO INVENTED THE COMPUTER?

  In assessing how to apportion credit for creating the computer, it’s useful to begin by specifying which attributes define the essence of a computer. In the most general sense, the definition of a computer could encompass everything from an abacus to an iPhone. But in chronicling the birth of the Digital Revolution, it makes sense to follow the accepted definitions of what, in modern usage, constitutes a computer. Here are a few:

  “A programmable usually electronic device that can store, retrieve, and process data.” (Merriam-Webster Dictionary)

  “An electronic device which is capable of receiving information (data) in a particular form and of performing a sequence of operations in accordance with a predetermined but variable set of procedural instructions (program) to produce a result.” (Oxford English Dictionary)

  “A general purpose device that can be programmed to carry out a set of arithmetic or logical operations automatically.” (Wikipedia, 2014)

  So the ideal computer is a machine that is electronic, general purpose, and programmable. What, then, best qualifies as the first?

  George Stibitz’s Model K, begun on his kitchen table in November 1937, led to a full-scale model at Bell Labs in January 1940. It was a binary computer and the first such device to be used remotely. But it used electromechanical relays and was thus not fully electronic. It was also a special-purpose computer and not programmable.

  Konrad Zuse’s Z3, completed in May 1941, was the first automatically controlled, programmable, electrical, binary machine. It was designed to do engineering problems rather than be a general-purpose machine. However, it was later shown that, in theory, it could have been used as a Turing-complete machine. Its major difference from modern computers was that it was electromechanical, dependent on clacking and slow relay switches, rather than electronic. Another shortcoming was that it never really went into full-scale service. It was destroyed by the Allied bombing of Berlin in 1943.

  The computer designed by John Vincent Atanasoff, which was complete but not fully workable by the time Atanasoff abandoned it to serve in the Navy in September 1942, has a claim to being the world’s first electronic digital computer, though it was only partly electronic. Its add-subtract mechanism used vacuum tubes, but its memory and data retrieval involved mechanical rotating drums. Its other main drawback, in terms of being considered the first modern computer, was that it was neither programmable nor general purpose; instead it was hard-wired for the special task of solving linear equations. Also, Atanasoff never got it fully operational, and it disappeared into a basement at Iowa State.

  Bletchley Park’s Colossus I, completed in December 1943 by Max Newman and Tommy Flowers (with input from Alan Turing), was the first digital computer that was fully electronic, programmable, and operational. It was not, however, a general-purpose or Turing-complete machine; it was geared to the specific purpose of breaking Germany’s wartime codes.

  Howard Aiken’s Harvard Mark I, built with IBM and put into operation in May 1944, was programmable, as we will see in the following chapter, but it was electromechanical rather than electronic.

  ENIAC, completed by Presper Eckert and John Mauchly in November 1945, was the first machine to incorporate the full set of traits of a modern computer. It was all-electronic, superfast, and could be programmed by plugging and unplugging the cables connecting its different units. It was capable of changing paths based on interim results, and it qualified as a general-purpose Turing-complete machine, meaning it could in theory tackle any task. Most important, it worked. “That’s a big thing with an invention,” Eckert later said, contrasting their machine with Atanasoff’s. “You have to have a whole system that works.”75 Mauchly and Eckert got their machine to do some very powerful calculations, and it was in constant use for ten years. It became the basis for most subsequent computers.

  That last attribute is important. When we ascribe credit for an invention, determining who should be most noted by history, one criterion is looking at whose contributions turned out to have the most influence. Invention implies contributing something to the flow of history and affecting how an innovation developed. Using historic impact as a standard, Eckert and Mauchly are the most noteworthy innovators. Almost all computers of the 1950s trace their roots to ENIAC. The influence of Flowers, Newman, and Turing is somewhat trickier to assess. Their work was kept top-secret, but all three men were involved in the British computers built after the war. Zuse, who was isolated and under bombardment in Berlin, had even less influence on the course of computer development elsewhere. As for Atanasoff, his main influence on the field, perhaps his only influence, came from providing a few inspirations to Mauchly when he visited.

  * * *

  The issue of what inspirations Mauchly gleaned during his four-day visit with Atanasoff in Iowa in June 1941 turned into a protracted legal dispute. That raised another criterion, more legalistic than historical, in assessing credit for invention: Who, if anyone, ended up with the patents? In the case of the first computers, nobody did. But that outcome was due to a controversial legal battle that resulted in the patents of Eckert and Mauchly being nullified.76

  The saga began in 1947, when Eckert and Mauchly, after leaving Penn, applied for a patent on their ENIAC work, which was finally granted (the patent system being rather slow) in 1964. By then the Eckert-Mauchly company and its patent rights had been sold to Remington Rand, which became Sperry Rand; it began pressuring other companies to pay it licensing fees. IBM and Bell Labs made deals, but Honeywell balked and started looking for a way to challenge the patents. It hired a young lawyer, Charles Call, who had an engineering degree and had worked at Bell Labs. His mission was to upend the Eckert-Mauchly patent by showing that their ideas weren’t original.

  Pursuing a tip from a Honeywell lawyer who had gone to Iowa State and read about the computer that Atanasoff had built there, Call paid a visit to Atanasoff at his home in Maryland. Atanasoff was charmed by Call’s knowledge of his computer and somewhat resentful that he had never gotten much credit for it, so he handed over hundreds of letters and documents that showed how Mauchly had derived some ideas from his visit to Iowa. That evening Call drove to Washington to sit in the back of a lecture Mauchly was giving. In answer to a question about Atanasoff’s machine, Mauchly claimed he had barely examined it. Call realized that if he could get Mauchly to say this in a deposition, then he could discredit him at a trial by producing Atanasoff’s documents.

  When Mauchly found out a few months later that Atanasoff might be helping Honeywell challenge his patents, he made his own visit to Atanasoff’s Maryland home, bringing with him a Sperry Rand lawyer. It was an awkward meeting. Mauchly claimed that during his visit to Iowa he hadn’t read Atanasoff’s paper carefully or examined his computer, and Atanasoff coldly pointed out that this was not true. Mauchly stayed for dinner and tried to ingratiate himself with Atanasoff, but to no avail.

  The issue went to trial before a federal judge, Earl Larson, in Minneapolis in June 1971. Mauchly proved a problematic witness. Pleading poor memory, he sounded squirrelly about what he had seen during his visit to Iowa, and he repeatedly backtracked from assertions he had made in his earlier deposition, including his claim that he had only seen Atanasoff’s computer partly covered and in dim light. Atanasoff, by contrast, was very effective. He described the machine he had built, demonstrated a model, and pointed out which of his ideas Mauchly had borrowed. In all, seventy-seven witnesses were called to testify, another eighty were deposed, and 32,600 exhibits were entered into the record. The trial lasted more than nine months, making it the longest federal trial to that point.

  Judge Larson took another nineteen months to write his final decision, which was issued in October 1973. In it he ruled that the Eckert-Mauchly ENIAC patent was invalid: “Eckert and Mauchly did not themselves first invent the automatic electronic digital computer, but instead derived that subject matter from one Dr. John Vincent Atanasoff.”77 Instead of appealing, Sperry settled with Honeywell.IV

  The judge’s opinion, at 248 pages, was thorough, but it disregarded some significant differences between the machines. Mauchly did not derive quite as much from Atanasoff as the judge seemed to think. For example, Atanasoff’s electronic circuit used binary logic, whereas Mauchly’s was a decimal counter. Had the Eckert-Mauchly patent claims been less sweeping, they probably would have survived.

  The case did not determine, even legally, who should get what proportion of the credit for the invention of the modern computer, but it did have two important consequences: it resurrected Atanasoff from the basement of history, and it showed very clearly, though this was not the intent of the judge or either party, that great innovations are usually the result of ideas that flow from a large number of sources. An invention, especially one as complex as the computer, usually comes not from an individual brainstorm but from a collaboratively woven tapestry of creativity. Mauchly had visited and talked to many people. That perhaps made his invention harder to patent, but it did not lessen the impact he had.

  * * *

  Mauchly and Eckert should be at the top of the list of people who deserve credit for inventing the computer, not because the ideas were all their own but because they had the ability to draw ideas from multiple sources, add their own innovations, execute their vision by building a competent team, and have the most influence on the course of subsequent developments. The machine they built was the first general-purpose electronic computer. “Atanasoff may have won a point in court, but he went back to teaching and we went on to build the first real electronic programmable computers,” Eckert later pointed out.78

  A lot of the credit, too, should go to Turing, for developing the concept of a universal computer and then being part of a hands-on team at Bletchley Park. How you rank the historic contributions of the others depends partly on the criteria you value. If you are enticed by the romance of lone inventors and care less about who most influenced the progress of the field, you might put Atanasoff and Zuse high. But the main lesson to draw from the birth of computers is that innovation is usually a group effort, involving collaboration between visionaries and engineers, and that creativity comes from drawing on many sources. Only in storybooks do inventions come like a thunderbolt, or a lightbulb popping out of the head of a lone individual in a basement or garret or garage.

  * * *

  I. For the equation aⁿ + bⁿ = cⁿ, in which a, b, and c are positive integers, there is no solution when n is greater than 2.

  II. Every even integer greater than 2 can be expressed as the sum of two primes.

  III. A process in which a number is divided by 2 if it is even, and is tripled and increased by 1 if it is odd, when repeated indefinitely, will always eventually lead to a result of 1.
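  The process described in footnote III is the famous Collatz conjecture, and it is simple enough to sketch in a few lines of Python (the function name here is illustrative, not from the text):

  ```python
  def collatz_steps(n: int) -> int:
      """Count iterations of the footnote-III process until n reaches 1."""
      steps = 0
      while n != 1:
          # Halve if even; triple and add 1 if odd.
          n = n // 2 if n % 2 == 0 else 3 * n + 1
          steps += 1
      return steps

  # Starting from 6: 6 → 3 → 10 → 5 → 16 → 8 → 4 → 2 → 1, eight steps.
  print(collatz_steps(6))
  ```

  Every starting value ever tested eventually reaches 1, though whether this holds for all positive integers remains an unproven conjecture.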

  IV. By then Atanasoff had retired. His career after World War II had been spent in the field of military ordnance and artillery, not computers. He died in 1995. John Mauchly remained a computer scientist, partly as a consultant with Sperry and as the founding president of the Association for Computing Machinery. He died in 1980. Eckert likewise remained with Sperry much of his career. He died in 1995.

  Howard Aiken and Grace Hopper (1906–92) with a part of Babbage’s Difference Engine at Harvard in 1946.

  Jean Jennings and Frances Bilas with ENIAC.

  Jean Jennings (1924–2011) in 1945.

  Betty Snyder (1917–2001) in 1944.

  CHAPTER THREE

  * * *

  PROGRAMMING

  The development of the modern computer required another important step. All of the machines built during the war were conceived, at least initially, with a specific task in mind, such as solving equations or deciphering codes. A real computer, like that envisioned by Ada Lovelace and then Alan Turing, should be able to perform, seamlessly and quickly, any logical operation. This required machines whose operations were determined not just by their hardware but by software, the programs they could run. Once again Turing laid out the concept clearly. “We do not need to have an infinity of different machines doing different jobs,” he wrote in 1948. “A single one will suffice. The engineering problem of producing various machines for various jobs is replaced by the office work of ‘programming’ the universal machine to do these jobs.”1

  In theory, machines such as ENIAC could be programmed and even pass for general-purpose machines. But in practice, loading in a new program was a laborious process that often involved replugging by hand the cables that connected different units in the computer. The wartime machines could not switch programs at electronic speeds. This would require the next major step in the creation of the modern computer: figuring out how to store programs inside a machine’s electronic memory.

  GRACE HOPPER

  Starting with Charles Babbage, the men who invented computers focused primarily on the hardware. But the women who became involved during World War II saw early on the importance of programming, just as Ada Lovelace had. They developed ways to code the instructions that told the hardware what operations to perform. In this software lay the magic formulas that could transform the machines in wondrous ways.

  The most colorful programming pioneer was a gutsy and spirited, yet also charming and collegial, naval officer named Grace Hopper, who ended up working for Howard Aiken at Harvard and then for Presper Eckert and John Mauchly. Born Grace Brewster Murray in 1906, she was from a prosperous family on the Upper West Side of Manhattan. Her grandfather was a civil engineer who took her around New York on surveying trips, her mother was a mathematician, and her father was an insurance executive. She graduated from Vassar with a degree in math and physics, then went on to Yale, where in 1934 she earned her PhD in math.2

  Her education wasn’t as unusual as you might think. She was the eleventh woman to get a math doctorate from Yale, the first being in 1895.3 It was not all that uncommon for a woman, especially from a successful family, to get a doctorate in math in the 1930s. In fact, it was more common than it would be a generation later. The number of American women who got doctorates in math during the 1930s was 113, which was 15 percent of the total number of American math doctorates. During the decade of the 1950s, only 106 American women got math doctorates, which was a mere 4 percent of the total. (By the first decade of the 2000s things had more than rebounded, and there were 1,600 women who got math doctorates, 30 percent of the total.)

  After marrying a professor of comparative literature, Vincent Hopper, Grace joined the faculty of Vassar. Unlike most math professors, she insisted that her students be able to write well. In her probability course, she began with a lecture on one of her favorite mathematical formulasI and asked her students to write an essay about it. These she would mark for clarity of writing and style. “I’d cover [an essay] up with ink, and I would get a rebellion that they were taking a math course not an English course,” she recalled. “Then I would explain, it was no use trying to learn math unless they could communicate it with other people.”4 Throughout her life, she excelled at being able to translate scientific problems—such as those involving trajectories, fluid flows, explosions, and weather patterns—into mathematical equations and then into ordinary English. This talent helped to make her a good programmer.

  By 1940 Grace Hopper was bored. She had no children, her marriage was unexciting, and teaching math was not as fulfilling as she had hoped. She took a partial leave from Vassar to study with the noted mathematician Richard Courant at New York University, focusing on methods for solving partial differential equations. She was still studying with Courant when the Japanese attacked Pearl Harbor in December 1941. America’s entry into World War II offered her a way to change her life. During the following eighteen months, she quit Vassar, divorced her husband, and at age thirty-six joined the U.S. Navy. She was sent to the Naval Reserve Midshipmen’s School at Smith College in Massachusetts, and in June 1944 graduated first in her class as Lieutenant Grace Hopper.