  For the rest of his life, Turing would wrestle with the issue of whether the human mind was fundamentally different from a deterministic machine, and he would gradually come to the conclusion that the distinction was less clear than he had thought.

  He also had an instinct that, just as uncertainty pervaded the subatomic realm, there were also mathematical problems that could not be solved mechanically and were destined to be cloaked in indeterminacy. At the time, mathematicians were intensely focused on questions about the completeness and consistency of logical systems, partly due to the influence of David Hilbert, the Göttingen-based genius who, among many other achievements, had come up with the mathematical formulation of the theory of general relativity concurrently with Einstein.

  At a 1928 conference, Hilbert posed three fundamental questions about any formal system of mathematics: (1) Was its set of rules complete, so that any statement could be proved (or disproved) using only the rules of the system? (2) Was it consistent, so that no statement could be proved true and also proved false? (3) Was there some procedure that could determine whether a particular statement was provable, rather than allowing the possibility that some statements (such as enduring math riddles like Fermat’s last theorem, Goldbach’s conjecture, or the Collatz conjecture) were destined to remain in undecidable limbo? Hilbert thought that the answer to the first two questions was yes, making the third one moot. He put it simply: “There is no such thing as an unsolvable problem.”

  Within three years, the Austrian-born logician Kurt Gödel, then twenty-five and living with his mother in Vienna, polished off the first two of these questions with unexpected answers: no and no. In his “incompleteness theorem,” he showed that there existed statements that could be neither proved nor disproved. Among them, to oversimplify a bit, were those that were akin to self-referential statements such as “This statement is unprovable.” If the statement is true, then it decrees that we can’t prove it to be true; if it’s false, that also leads to a logical contradiction. It is somewhat like the ancient Greek “liar’s paradox,” in which the truth of the statement “This statement is false” cannot be determined. (If the statement is true, then it’s also false, and vice versa.)

  By coming up with statements that could not be proved or disproved, Gödel showed that any formal system powerful enough to express the usual mathematics was incomplete. He was also able to produce a companion theorem that effectively answered no to Hilbert’s second question.

  That left the third of Hilbert’s questions, that of decidability or, as Hilbert called it, the Entscheidungsproblem or “decision problem.” Even though Gödel had come up with statements that could be neither proved nor disproved, perhaps that odd class of statements could somehow be identified and cordoned off, leaving the rest of the system complete and consistent. That would require that we find some method for deciding whether a statement was provable. When the great Cambridge math professor Max Newman taught Turing about Hilbert’s questions, the way he expressed the Entscheidungsproblem was this: Is there a “mechanical process” that can be used to determine whether a particular logical statement is provable?

  Turing liked the concept of a “mechanical process.” One day in the summer of 1935, he was out for his usual solitary run along the River Cam, and after a couple of miles he stopped to lie down among the apple trees in Grantchester Meadows to ponder an idea. He would take the notion of a “mechanical process” literally, conjuring up a mechanical process—an imaginary machine—and applying it to the problem.8

  The “Logical Computing Machine” that he envisioned (as a thought experiment, not as a real machine to be built) was quite simple at first glance, but it could handle, in theory, any mathematical computation. It consisted of an unlimited length of paper tape containing symbols within squares; in the simplest binary example, these symbols could be merely a 1 and a blank. The machine would be able to read the symbols on the tape and perform certain actions based on a “table of instructions” it had been given.9

  The table of instructions would tell the machine what to do based on whatever configuration it happened to be in and what symbol, if any, it found in the square. For example, the table of instructions for a particular task might decree that if the machine was in configuration 1 and saw a 1 in the square, then it should move one square to the right and shift into configuration 2. Somewhat surprisingly, to us if not to Turing, such a machine, given the proper table of instructions, could complete any mathematical task, no matter how complex.
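
  To make the idea concrete, here is a minimal sketch in Python (an anachronism, of course; Turing worked entirely on paper) of a machine driven by such a table of instructions. The encoding of configurations and symbols below is illustrative, not Turing’s own notation.

```python
# A minimal, hypothetical rendering of a table-driven tape machine.
# Nothing here is Turing's notation; it only mirrors the description
# in the text: look up (configuration, symbol), write, move, switch.

def run_machine(table, tape, state="1", max_steps=20):
    """table maps (state, symbol) -> (symbol_to_write, move, next_state),
    with move -1 for left and +1 for right. Unwritten squares read as
    a blank (" "). The machine halts when no instruction applies."""
    pos = 0
    for _ in range(max_steps):
        symbol = tape.get(pos, " ")
        if (state, symbol) not in table:
            break  # no matching instruction: the machine halts
        write, move, state = table[(state, symbol)]
        tape[pos] = write
        pos += move
    return tape

# The rule from the text: in configuration "1", on seeing a 1, move one
# square right and shift into configuration "2". The other two rules,
# added here for illustration, make the machine fill the tape with 1s.
table = {
    ("1", "1"): ("1", +1, "2"),
    ("2", " "): ("1", +1, "1"),
    ("1", " "): ("1", +1, "2"),
}
print(run_machine(table, {0: "1"}))  # squares 0..19 all end up holding "1"
```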

  How might this imaginary machine answer Hilbert’s third question, the decision problem? Turing approached the problem by refining the concept of “computable numbers.” Any real number that was defined by a mathematical rule could be calculated by the Logical Computing Machine. Even an irrational number such as π could be calculated indefinitely using a finite table of instructions. So could the logarithm of 7, or the square root of 2, or the sequence of Bernoulli numbers that Ada Lovelace had helped produce an algorithm for, or any other number or series, no matter how challenging to compute, as long as its calculation was defined by a finite set of rules. All of these were, in Turing’s parlance, “computable numbers.”
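
  What “defined by a finite set of rules” means can be seen in a small sketch: one fixed rule, written here in Python rather than as an instruction table, yields as many digits of the square root of 2 as we care to ask for. The function name and decimal encoding are ours, for illustration only.

```python
# A concrete "computable number": a single finite rule that produces
# the digits of the square root of 2 to any requested length.
from math import isqrt

def sqrt2_digits(n):
    """Return the leading 1 of sqrt(2) followed by its first n decimal
    digits: floor(sqrt(2) * 10**n) equals isqrt(2 * 10**(2*n)), so exact
    integer arithmetic yields the digits with no rounding error."""
    return str(isqrt(2 * 10 ** (2 * n)))

print(sqrt2_digits(20))  # 141421356237309504880, i.e. 1.41421356237309504880
```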

  Turing went on to show that noncomputable numbers also existed. This was related to what later came to be called “the halting problem.” There can be no method, he showed, to determine in advance whether any given instruction table combined with any given set of inputs will lead the machine to arrive at an answer or go into some loop and continue chugging away indefinitely, getting nowhere. The unsolvability of the halting problem meant that Hilbert’s decision problem, the Entscheidungsproblem, was unsolvable. Despite what Hilbert seemed to hope, no mechanical procedure can determine the provability of every mathematical statement. Gödel’s incompleteness theorem, the indeterminacy of quantum mechanics, and Turing’s answer to Hilbert’s third challenge all dealt blows to a mechanical, deterministic, predictable universe.
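
  The flavor of Turing’s argument can be conveyed in a few lines of modern code. The `halts` oracle below is hypothetical, and that is the point: assuming it exists lets us build a program that defeats it.

```python
# A sketch of the diagonal argument behind the halting problem. The
# oracle `halts` is a hypothetical stand-in; the argument shows that
# no real implementation of it can exist.

def halts(program, argument):
    """Pretend decision procedure: does program(argument) ever finish?"""
    raise NotImplementedError("no such procedure can exist")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about a program
    # that has been handed its own source as input.
    if halts(program, program):
        while True:   # oracle says "halts" -> loop forever instead
            pass
    return            # oracle says "loops" -> halt immediately

# Feeding contrary to itself corners the oracle: if halts(contrary,
# contrary) answers yes, contrary loops; if it answers no, contrary
# halts. Either answer is wrong, so no such oracle can be built.
```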

  Turing’s paper was published in 1937 with the not so snappy title “On Computable Numbers, with an Application to the Entscheidungsproblem.” His answer to Hilbert’s third question was useful for the development of mathematical theory. But far more important was the by-product of Turing’s proof: his concept of a Logical Computing Machine, which soon came to be known as a Turing machine. “It is possible to invent a single machine which can be used to compute any computable sequence,” he declared.10 Such a machine would be able to read the instructions of any other machine and carry out whatever task that machine could do. In essence, it embodied the dream of Charles Babbage and Ada Lovelace for a completely general-purpose universal machine.
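
  Universality shows up in the earlier sketch as the fact that `run_machine` itself never changes; only the table it is handed does. A different table, same machine (this reuses `run_machine` from the sketch above):

```python
# Reusing run_machine from the earlier sketch: the machine is fixed,
# and a new instruction table makes it compute something different.
# This table writes the alternating sequence 1, blank, 1, blank, ...
alternating = {
    ("1", " "): ("1", +1, "2"),  # write a 1, switch configuration
    ("2", " "): (" ", +1, "1"),  # leave a blank, switch back
}
print(run_machine(alternating, {}))  # {0: "1", 1: " ", 2: "1", ...}
```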

  A different and less beautiful solution to the Entscheidungsproblem, using a method with the clunkier name “untyped lambda calculus,” had been published earlier that year by Alonzo Church, a mathematician at Princeton. Turing’s professor Max Newman decided that it would be useful for Turing to go there to study under Church. In his letter of recommendation, Newman described Turing’s enormous potential. He also added a more personal appeal based on Turing’s personality. “He has been working without any supervision or criticism from anyone,” Newman wrote. “This makes it all the more important that he should come into contact as soon as possible with the leading workers on this line, so that he should not develop into a confirmed solitary.”11
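
  To give a taste of Church’s method, here is a sketch of the untyped lambda calculus rendered in Python’s lambda syntax: numbers become functions that apply f some number of times (Church numerals), and arithmetic becomes function plumbing. The names, and the `to_int` decoder, are ours for display.

```python
# Church numerals in Python lambdas: an illustrative rendering of the
# untyped lambda calculus, where the number n is "apply f, n times."
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))           # n + 1
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
MUL  = lambda m: lambda n: lambda f: m(n(f))

to_int = lambda n: n(lambda k: k + 1)(0)  # decode to an ordinary int

TWO, THREE = SUCC(SUCC(ZERO)), SUCC(SUCC(SUCC(ZERO)))
print(to_int(ADD(TWO)(THREE)), to_int(MUL(TWO)(THREE)))  # 5 6
```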

  Turing did have a tendency toward being a loner. His homosexuality made him feel like an outsider at times; he lived alone and avoided deep personal commitments. At one point he proposed marriage to a female colleague, but then felt compelled to tell her that he was gay; she was unfazed and still willing to get married, but he believed it would be a sham and decided not to proceed. Yet he did not become “a confirmed solitary.” He learned to work as part of a team, with collaborators, which was key to allowing his abstract theories to be reflected in real and tangible inventions.

  In September 1936, while waiting for his paper to be published, the twenty-four-year-old doctoral candidate sailed to America in steerage class aboard the aging ocean liner RMS Berengaria, lugging with him a prized brass sextant. His office at Princeton was in the Mathematics Department building, which also then housed the Institute for Advanced Study, where Einstein, Gödel, and von Neumann held court. The cultivated and highly sociable von Neumann became particularly interested in Turing’s work, despite their very different personalities.

  The seismic shifts and simultaneous advances of 1937 were not directly caused by the publication of Turing’s paper. In fact, it got little notice at first. Turing asked his mother to send out reprints of it to the mathematical philosopher Bertrand Russell and a half dozen other famous scholars, but the only major review was by Alonzo Church, who could afford to be flattering because he had been ahead of Turing in solving Hilbert’s decision problem. Church was not only generous; he also introduced the term Turing machine for what Turing had called a Logical Computing Machine. Thus, at twenty-four, Turing had his name indelibly stamped on one of the most important concepts of the digital age.12

  CLAUDE SHANNON AND GEORGE STIBITZ AT BELL LABS

  There was another seminal theoretical breakthrough in 1937, similar to Turing’s in that it was purely a thought experiment. This one was the work of an MIT graduate student named Claude Shannon, who that year turned in the most influential master’s thesis of all time, a paper that Scientific American later dubbed “the Magna Carta of the Information Age.”13

  Shannon grew up in a small Michigan town where he built model planes and amateur radios, then went on to major in electrical engineering and math at the University of Michigan. In his senior year he answered a help-wanted listing tacked to a bulletin board, which offered a job at MIT working under Vannevar Bush helping to run the Differential Analyzer. Shannon got the job and was mesmerized by the machine—not so much the rods and pulleys and wheels that formed the analog components as the electromagnetic relay switches that were part of its control circuit. As electrical signals caused them to click open and clack closed, the switches created different circuit patterns.

  During the summer of 1937, Shannon took a break from MIT and went to work at Bell Labs, a research facility run by AT&T. Located then in Manhattan on the Hudson River edge of Greenwich Village, it was a haven for turning ideas into inventions. Abstract theories intersected with practical problems there, and in the corridors and cafeterias eccentric theorists mingled with hands-on engineers, gnarly mechanics, and businesslike problem-solvers, encouraging the cross-fertilization of theory with engineering. This made Bell Labs an archetype of one of the most important underpinnings of digital-age innovation, what the Harvard science historian Peter Galison has called a “trading zone.” When these disparate practitioners and theoreticians came together, they learned how to find a common parlance to trade ideas and exchange information.14

  At Bell Labs, Shannon saw up close the wonderful power of the phone system’s circuits, which used electrical switches to route calls and balance loads. In his mind, he began connecting the workings of these circuits to another subject he found fascinating, the system of logic formulated ninety years earlier by the British mathematician George Boole. Boole revolutionized logic by finding ways to express logical statements using symbols and equations. He gave true propositions the value 1 and false propositions a 0. A set of basic logical operations—such as and, or, not, either/or, and if/then—could then be performed using these propositions, just as if they were math equations.

  Shannon figured out that electrical circuits could execute these logical operations using an arrangement of on-off switches. To perform an and function, for example, two switches could be put in sequence, so that both had to be on for electricity to flow. To perform an or function, the switches could be in parallel so that electricity would flow if either of them was on. Slightly more versatile switches called logic gates could streamline the process. In other words, you could design a circuit containing a lot of relays and logic gates that could perform, step by step, a sequence of logical tasks.

  (A “relay” is simply a switch that can be opened and shut electrically, such as by using an electromagnet. The ones that clack open and closed are sometimes called electromechanical because they have moving parts. Vacuum tubes and transistors can also be used as switches in an electrical circuit; they are called electronic because they manipulate the flow of electrons but do not require the movement of any physical parts. A “logic gate” is a switch that can handle one or more inputs. For example, in the case of two inputs, an and logic gate switches on if both of the inputs are on, and an or logic gate switches on if either of the inputs is on. Shannon’s insight was that these could be wired together in circuits that could execute the tasks of Boole’s logical algebra.)
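
  Shannon’s mapping can be sketched in a few lines, with a closed switch as 1 and an open one as 0; the function names are illustrative:

```python
# Switch circuits as Boolean operations, per Shannon's insight:
# series wiring behaves as AND, parallel wiring as OR.

def series(a, b):    # current flows only if BOTH switches are closed
    return a & b     # Boole's AND

def parallel(a, b):  # current flows if EITHER switch is closed
    return a | b     # Boole's OR

def invert(a):       # a relay wired to break the circuit when energized
    return 1 - a     # Boole's NOT

for a in (0, 1):     # the full truth table for two switches
    for b in (0, 1):
        print(a, b, "-> series:", series(a, b), "parallel:", parallel(a, b))
```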

  When Shannon returned to MIT in the fall, Bush was fascinated by his ideas and urged him to include them in his master’s thesis. Entitled “A Symbolic Analysis of Relay and Switching Circuits,” it showed how each of the many functions of Boolean algebra could be executed. “It is possible to perform complex mathematical operations by means of relay circuits,” he summed up at the end.15 This became the basic concept underlying all digital computers.

  Shannon’s ideas intrigued Turing because they neatly related to his own just-published concept of a universal machine that could use simple instructions, expressed in binary coding, to tackle problems not only of math but of logic. Also, since logic was related to the way human minds reason, a machine that performed logical tasks could, in theory, mimic the way humans think.

  * * *

  Working at Bell Labs at the same time was a mathematician named George Stibitz, whose job was to figure out ways to handle the increasingly complicated calculations needed by the telephone engineers. The only tools he had were mechanical desktop adding machines, so he set out to invent something better based on Shannon’s insight that electrical circuits could perform mathematical and logical tasks. Late one evening in November, he went to the stockroom and took home some old electromagnetic relays and bulbs. At his kitchen table, he put the parts together with a tobacco tin and a few switches to form a simple logical circuit that could add binary numbers. A lit bulb represented a 1, and an unlit bulb represented a 0. His wife dubbed it the “K-Model,” after the kitchen table. He took it into the office the next day and tried to convince his colleagues that, with enough relays, he could make a calculating machine.
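
  What “a simple logical circuit that could add binary numbers” amounts to can be sketched with textbook full-adder logic; this is an illustration of the principle, not Stibitz’s actual relay wiring:

```python
# A full adder built from AND, OR, and XOR bit operations, chained to
# add two binary numbers bit by bit (a "ripple-carry" adder). This is
# the standard construction, not a reconstruction of the K-Model.

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    partial = a ^ b                             # sum without the carry
    sum_bit = partial ^ carry_in
    carry_out = (a & b) | (partial & carry_in)  # carry if any pair was 1,1
    return sum_bit, carry_out

def add_binary(x, y, width=8):
    carry, total = 0, 0
    for i in range(width):  # feed each bit pair plus the running carry
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= bit << i
    return total

print(add_binary(0b1011, 0b0110))  # 11 + 6 = 17
```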

  One important mission of Bell Labs was to figure out ways to amplify a phone signal over long distances while filtering out static. The engineers had formulas that dealt with the amplitude and phase of the signal, and the solutions to their equations sometimes involved complex numbers (ones that include an imaginary unit that represents the square root of a negative number). Stibitz was asked by his supervisor if his proposed machine could handle complex numbers. When he said that it could, a team was assigned to help him build it. The Complex Number Calculator, as it was called, was completed in 1939. It had more than four hundred relays, each of which could open and shut twenty times per second. That made it both blindingly fast compared to mechanical calculators and painfully clunky compared to the all-electronic vacuum-tube circuits just being invented. Stibitz’s computer was not programmable, but it showed the potential of a circuit of relays to do binary math, process information, and handle logical procedures.16
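
  The reason relays sufficed is that complex arithmetic reduces to a fixed recipe of real multiplications and additions. A minimal sketch of that reduction (of the arithmetic the machine mechanized, not of its relay design):

```python
# Complex multiplication via real operations only:
# (a + bi)(c + di) = (ac - bd) + (ad + bc)i

def complex_multiply(a, b, c, d):
    return (a * c - b * d, a * d + b * c)

print(complex_multiply(3, 2, 1, 4))  # (3 + 2i)(1 + 4i) = -5 + 14i
```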

  HOWARD AIKEN

  Also in 1937 a Harvard doctoral student named Howard Aiken was struggling to do tedious calculations for his physics thesis using an adding machine. When he lobbied the university to build a more sophisticated computer to do the work, his department head mentioned that in the attic of Harvard’s science center were some brass wheels from a century-old device that seemed to be similar to what he wanted. When Aiken explored the attic, he found one of six demonstration models of Charles Babbage’s Difference Engine, which Babbage’s son Henry had made and distributed. Aiken became fascinated by Babbage and moved the set of brass wheels into his office. “Sure enough, we had two of Babbage’s wheels,” he recalled. “Those were the wheels that I had later mounted and put in the body of the computer.”17

  That fall, just when Stibitz was cooking up his kitchen-table demonstration, Aiken wrote a twenty-two-page memo to his Harvard superiors and executives at IBM making the case that they should fund a modern version of Babbage’s digital machine. “The desire to economize time and mental effort in arithmetical computations, and to eliminate human liability to error is probably as old as the science of arithmetic itself,” his memo began.18

  Aiken had grown up in Indiana under rough circumstances. When he was twelve, he used a fireplace poker to defend his mother against his drunk and abusive father, who then abandoned the family with no money. So young Howard dropped out of ninth grade to support the family by working as a telephone installer, then got a night job with the local power company so that he could attend a tech school during the day. He drove himself to be a success, but in the process he developed into a taskmaster with an explosive temper, someone who was described as resembling an approaching thunderstorm.19

  Harvard had mixed feelings about building Aiken’s proposed calculating machine, and about holding out the possibility that he might be granted tenure for a project that seemed more practical than academic. (In parts of the Harvard faculty club, calling someone practical rather than academic was considered an insult.) Supporting Aiken was President James Bryant Conant, who, as chairman of the National Defense Research Committee, was comfortable positioning Harvard as part of a triangle involving academia, industry, and the military. His Physics Department, however, was more purist. Its chairman wrote to Conant in December 1939, saying that the machine was “desirable if money can be found, but not necessarily more desirable than anything else,” and a faculty committee said of Aiken, “It should be made quite clear to him that such activity did not increase his chances of promotion to a professorship.” Eventually Conant prevailed and authorized Aiken to build his machine.20