

  (TOLERANCE: 0.000 000 000 000 000 000 000 000 000 000 000 01)

  Squeezing Beyond Boundaries

  One can never know with perfect accuracy both of those two important factors which determine the movement of one of the smallest particles—its position and its velocity. It is impossible to determine accurately both the position and the direction and speed of a particle at the same instant.

  —WERNER HEISENBERG, DIE PHYSIK DER ATOMKERNE (1949)

  Once every few weeks, beginning in the summer of 2018, a trio of large Boeing freighter aircraft, most often converted and windowless 747s of the Dutch airline KLM, takes off from Schiphol airport outside Amsterdam, with a precious cargo bound eventually for the city of Chandler, a desert exurb southeast of Phoenix, Arizona. The cargo is always the same, consisting of nine white boxes in each aircraft, each box taller than a man. To get these profoundly heavy containers from the airport in Phoenix to their destination, twenty miles away, requires a convoy of rather more than a dozen eighteen-wheeler trucks. On arrival, and once finally uncrated, the contents of all the boxes are bolted together to form one enormous 160-ton machine—a machine tool, in fact, a direct descendant of the machine tools invented and used by men such as Joseph Bramah and Henry Maudslay and Henry Royce and Henry Ford a century and more before.

  Just like its cast-iron predecessors, this Dutch-made behemoth of a tool (fifteen of these machines compose the total order due to be sent to Chandler, each delivered as it is made) is a machine that makes machines. Yet, rather than making mechanical devices by the precise cutting of metal from metal, this gigantic device is designed for the manufacture of the tiniest of machines imaginable, all of which perform their work electronically, without any visible moving parts.

  For here we come to the culmination of precision’s quarter-millennium evolutionary journey. Up until this moment, almost all the devices and creations that required a degree of precision in their making had been made of metal, and performed their various functions through physical movements of one kind or another. Pistons rose and fell; locks opened and closed; rifles fired; sewing machines secured pieces of fabric and created hems and selvedges; bicycles wobbled along lanes; cars ran along highways; ball bearings spun and whirled; trains snorted out of tunnels; aircraft flew through the skies; telescopes deployed; clocks ticked or hummed, and their hands moved ever forward, never back, one precise second at a time.

  Then came the computer, then the personal computer, then the smartphone, then the previously unimaginable tools of today—and with this helter-skelter technological evolution came a time of translation, a time when the leading edge of precision passed, as if through an invisible gateway, out of the purely mechanical and physical world and into an immobile and silent universe, one where electrons and protons and neutrons have replaced iron and oil and bearings and lubricants and trunnions and the paradigm-altering idea of interchangeable parts, and where, though the components might well glow with fierce lights or send out intense waves of heat, nothing moved one piece against another in mechanical fashion, and no machine required that measured exactness be an essential attribute of every component piece. Precision had by now reached a degree of exactitude that would be of relevance and use only at the near-atomic level, and for devices that were now near-universally electronic, devices that obeyed different rules and could perform tasks hitherto never even considered.

  The particular device sent out to perform such tasks in Arizona, and which, when fully assembled, is as big as a modest apartment, is known formally as an NXE:3350B EUV scanner. It is made by a generally unfamiliar but formidably important Dutch-registered company known simply by its initials, ASML. Each one of the machines in the order costs its customer about $100 million, making the total order worth around $1.5 billion.

  The customer whose place of business is in Chandler—a conglomeration of huge and faceless buildings that are known in the argot as a “fab,” or fabrication plant, for in line with this new world order, factories that make metal things are being supplemented by fabs* that make electronic things—could easily afford the sum. Intel Corporation, a fifty-year-old tentpole of the modern computer industry, has current assets well in excess of $100 billion. Its central business is the making, in the many fabs it has scattered around the planet—the one in Chandler is known as Fab 42—of electronic microprocessor chips, the operating brains of almost all the world’s computers. The enormous ASML devices allow the firm to manufacture these chips, and to place transistors on them in huge numbers and to the almost unreal levels of precision and minuteness of scale that today’s computer industry, pressing for ever-speedier and more powerful computers, endlessly demands.

  It takes an enormous machine to allow for the making of something so infinitesimally tiny as a computer chip. This Twinscan NXE:3350B photolithography machine, made by the Dutch company ASML, would fill three jet cargo aircraft. Intel, the world’s biggest chip maker, buys these $100 million machines by the score.

  Photograph courtesy of ASML.

  How these two tasks are managed, the making of the chips and the making of the machines that make the chips, makes for two of the more memorable and intertwined precision-related sagas of recent years. The technology that now binds the two companies together† is performed on such an infinitesimally minute scale, and to tolerances that would have seemed unimaginably absurd and well-nigh unachievable only decades ago, that it is taking precision into the world of the barely believable—except that it is a world that manifestly must be believed, a world from which, one can argue, modern humankind benefits mightily, an assertion with which both Intel and ASML would readily agree.

  Gordon Moore, one of the founders of Intel, is most probably the man to blame for this trend toward ultraprecision in the electronics world. He made an immense fortune by devising the means to make ever-smaller transistors and to cram millions, then billions of them onto a single microprocessing chip, the heart and soul of every computing device now made. He is best known, however, for his forecast (made in 1965, when he was thirty-six years old and evidently a coming man) that from that moment onward, the size of critical electronic components would shrink by half, and that computing speed and power would double, with metronomic regularity, every year.

  For now, the law advanced by Gordon Moore (seated) in 1965, when he ran Fairchild Semiconductor, in which he forecast that integrated circuit performance would double every year (a figure he later and prudently revised downward, to double every two years), still obtains, though most agree it is reaching the limits of possible performance.

  Photograph courtesy of Intel Free Press.

  An amended version of Moore’s law, as a colleague promptly named the pronouncement, has since assumed the status of Holy Writ, not least because it proved to be more or less correct, its forecasts uncannily accurate. Yet, as Gordon Moore himself has noted, his law has served not so much to describe the development of the computer industry as to drive it. For firms that make computer chips seem nowadays to be bent on manufacturing them to ever-finer, ever-diminishing tolerances, just in order to keep the Moore’s law bandwagon rolling.
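
  Moore’s pronouncement reduces, in the end, to a single piece of arithmetic, and it is worth seeing how steep the resulting curve is. The sketch below, in Python, assumes the revised two-year doubling period; its 1971 anchor, the roughly 2,300 transistors of Intel’s first microprocessor, is an illustrative assumption of mine, not a figure from this chapter.

```python
# A minimal sketch of Moore's law as plain arithmetic: the transistor
# count doubles every two years (the revised period). The 1971 anchor,
# roughly 2,300 transistors on Intel's first microprocessor, is an
# illustrative assumption, not a figure from this chapter.

def moores_law(year: int, base_year: int = 1971, base_count: int = 2_300,
               doubling_period: float = 2.0) -> float:
    """Projected transistors per chip, assuming steady doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{moores_law(year):>17,.0f} transistors per chip")
```

  Fifty years of such doubling multiplies the starting count by two raised to the twenty-fifth power, some 33 million times over, which is broadly where the largest chips of the early 2020s actually sit.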

  The electronics journals of recent years have been filled with essays showing how this new chip, or that new processor, or that newly designed motherboard, offers a further indication that Moore’s law is still in effect thirty, forty, fifty years after it was first promulgated. It is rather as though Moore, if unwittingly, has become some sort of venerably wise Pied Piper, leading the industry to go ever faster, to make devices ever smaller and ever more powerful, just to fulfill his forecast. And to do so even though, quite possibly, many consumers, heretical and Luddite-inspired though the thought may be, find it quite unnecessary. They might rather wish for some settling, for a time of calm, a moment of contentment, than to be gripped by the perceived need to buy the latest iPhone or the machine equipped with that newest and fastest microprocessor (though not entirely certain what a microprocessor is, or does), to ensure that everyone is Keeping Up with Moore.

  The numbers are beyond incredible. There are now more transistors at work on this planet (some 15 quintillion, or 15,000,000,000,000,000,000) than there are leaves on all the trees in the world. In 2015, the four major chip-making firms were making 14 trillion transistors every single second. And the sizes of the individual transistors are now approaching the atomic scale.
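
  Those two figures can be checked against each other with back-of-the-envelope arithmetic; the sketch below simply divides one by the other.

```python
# A back-of-the-envelope check on the two figures above: at the 2015
# rate of 14 trillion transistors made per second, how long would it
# take to produce the roughly 15 quintillion said to be at work?

TOTAL_TRANSISTORS = 15 * 10**18   # 15 quintillion in existence
RATE_PER_SECOND = 14 * 10**12     # 14 trillion made per second (2015)

seconds = TOTAL_TRANSISTORS / RATE_PER_SECOND
print(f"{seconds:,.0f} seconds, about {seconds / 86_400:.1f} days")
# roughly 1.07 million seconds, or under two weeks of production
```

  At the 2015 rate, in other words, the world’s entire stock of transistors could be remade from scratch in under two weeks.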

  Having said this (that very last assertion hints that the fundamental constants of physics may have their own plans for quieting matters down), I must point out that it is beginning to appear as though conventional electronics are about to reach some kind of physical limit, and that Moore’s law, after five giddy decades of predictive accuracy, may be about to hit the stops. Not, of course, that this will inhibit the computer industry from creating some entirely new technology to take its place, as is clearly occurring right now. Whether Moore’s law will continue to apply to that new technology, however, remains to be seen.

  GORDON MOORE WAS born in 1929, the son of the sheriff of San Mateo County in Northern California. The idea for the device that would dominate Moore’s professional life, the transistor, had just been foreshadowed, and by a man Moore was never to meet. Julius Lilienfeld, who left Leipzig for Massachusetts in the 1920s, had drawn up a series of hesitant and untidy plans for making an all-electronic gateway, a device that would allow a low-voltage electrical current, by the employment of a substance then known as a semiconductor, to control a very-much-higher-current flow, and either to switch it on and off at will or to amplify it—all without moving parts or exorbitant cost.

  Hitherto such work had been performed by a breakable, costly, and very hot (while working) glass tube–encased diode or, later, triode: the solid-state version of this, which Lilienfeld dreamed might one day be made, could replace it, and consequently make electronics cool, small, and cheap. He patented his idea in Canada in 1925, creating drawings of “A Method and Apparatus for Controlling Electric Currents.” His scheme was entirely conceptual, however: no such device could be made with the technology and materials available at the time—all that existed was his idea and his newly announced principle.
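
  What Lilienfeld described is, stripped of its physics, a gateway whose behavior can be caricatured in a few lines of arithmetic. The sketch below is exactly that, a caricature: the threshold, gain, and supply figures are illustrative assumptions of mine, not numbers from his patent.

```python
# A caricature of Lilienfeld's proposed gateway: a small control voltage
# either switches a much larger current on and off, or modulates it
# proportionally (amplification). All figures here are assumed for
# illustration; none come from the 1925 patent.

THRESHOLD_VOLTS = 0.7   # control level below which no current passes
GAIN = 2.0              # amperes of output per volt of control, once open

def output_current(control_volts: float, supply_amps: float = 1.0) -> float:
    """Large-current flow as governed by a small control voltage."""
    if control_volts < THRESHOLD_VOLTS:
        return 0.0                                    # switched off
    drive = GAIN * (control_volts - THRESHOLD_VOLTS)  # proportional region
    return min(drive, supply_amps)                    # limited by the supply

print(output_current(0.2))   # below threshold: nothing flows (switch open)
print(output_current(0.9))   # just above: a proportional, amplified flow
print(output_current(5.0))   # driven hard: the full supply current passes
```

  The two regimes in that little function, the proportional one and the saturated one, are amplifier and switch respectively, the two jobs Lilienfeld hoped a single solid-state device might do.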

  Time passed; the idea endured. It took twenty years for Lilienfeld’s concept to be made real, and a working transistor had been built and was well into development by the time the young Moore entered the California university system as a competent but not apparently overly gifted student in San Jose.

  Two days before Christmas in 1947, the Bell Labs physicists John Bardeen, Walter Brattain, and William Shockley—the last a difficult man, later to be reviled as a keen proponent of eugenics; his cool calculation of likely wartime casualties helped tip the scales in favor of President Truman’s decision to drop the atomic bomb on Hiroshima and Nagasaki—unveiled the first working device. They would win the 1956 Nobel Prize in Physics for doing so: in his lecture, Shockley remarked of what had been invented that “it seems likely that many inventions unforeseen at present will be made.” He knew only the half of it.

  John Bardeen, William Shockley, and Walter Brattain (left to right), joint winners of the 1956 Nobel Prize in Physics for their discovery of the “transistor effect.” Bardeen would win again, in 1972, for his work on superconductivity, becoming one of only four people to have won the Nobel twice.

  The invention was not yet called a “transistor”—that term, a blend of the words transfer and resistor, would enter the lexicon a year later. It was also far from being a small thing: its prototype is now preserved in a Bell Labs bell jar, and with its wires and various components and the crucial semiconducting sliver of a previously little-regarded silvery metalloid element, germanium, it occupies the volume of a small child’s hand.

  Yet, in a matter of months, the devices that would deploy the so-called transistor effect started to be made very much smaller, and by the time, in the mid-1950s, that the first transistor radios were on the market, the little glass thimble, with its characteristic three wires protruding from its nether region—one the base, through which a small controlling current is fed; the other two, the emitter and the collector, carrying the main current only when the base is supplied—had become a familiar sight.

  Small and miraculous in their abilities these glass-and-wire thimbles may have been, but they were still far from being tiny. Tiny became theoretically possible only after the invention of silicon wafer–based transistors in 1954, and then, most crucially, with the making in 1959 of the first entirely flat, or planar, devices. By this time, the young Gordon Moore had entered the picture, having been fully steeped in science during years spent at Berkeley, Caltech, and Johns Hopkins. He had now left the world of the academy to enter commerce, and to explore the commercial possibilities of the fledgling semiconductor industry. He did so specifically at the behest of William Shockley, who had left Bell in 1956 and headed out west to Palo Alto, there to set up his own company, Shockley Transistors, and search for the first of his predicted “many inventions unforeseen.”

  The first transistor, invented by Bell Labs in New Jersey shortly before Christmas 1947. Arguably no other twentieth-century invention has been so influential, and in the story of precision, its creation marked the moment when moving mechanics gave way to immobile electronics, when Newton passed the mantle to Einstein.

  Photograph courtesy of Windell H. Oskay, www.evilmadscientist.com.

  His doing so marked the establishment, essentially, of what would become Silicon Valley, the then-unbuilt cathedral for the coming religion of semiconducting. For Shockley, who, thanks to his Nobel Prize and his reputation, was well financed enough to hire whomever he wanted, swiftly assembled a stable of scientific rarae aves, including Gordon Moore as his chief chemist, together with a cluster of equally bright young physicists and engineers.

  Shockley promptly drove them all mad. Within a year, a group of eight of his first hires, all complaining bitterly about his tyrannical and secretive behavior and his overtly paranoid mien (and his inexplicable and unexplained abandonment of silicon as the central semiconducting element of his firm’s research), stormed out. The group, later to be known by Shockley’s dismissive term for them, the Traitorous Eight, formed in 1957 a new company that was to change everything. Their start-up,* named Fairchild Semiconductor, would begin to create a whole raft of new silicon-based products, and then shrink and shrink them, and imprint upon them computing abilities that hitherto could be accomplished only by giant machines that occupied entire suites of air-conditioned rooms.

  The invention of the planar transistor was by all accounts one of Fairchild’s two most important achievements. The man who created the technology that would both allow miniaturization to proceed apace and, in turn, lead Gordon Moore to write his famous prediction is now almost entirely forgotten outside the closeted confines of the semiconducting world. Jean Amédée Hoerni, one of the eight who had abandoned Shockley for Fairchild, was a theoretical physicist, the scion of a Swiss banking family, and, at thirty-two when he joined Fairchild, a devoted rock climber, mountaineer, and thinker.

  His elegant invention changed, at a stroke, the way transistors were made. Up until then, they were formed, essentially, mechanically. Tiny grooves were etched into silicon wafers, aluminum conductors were traced into the etched grooves, and the resulting wafers, now with their etched hills and valleys, shaped like western desert mesas (which led these Fairchild products to be known as mesa transistors), were encased in tiny metal canisters, with the three working wires protruding.

  These were still somewhat big and ungainly things, and this at a time when, just after the launch of Sputnik, the American space industry dearly wanted its new electronics to be tiny, reliable, and cheap. Moreover, the Fairchild mesa transistors were not very reliable: far too often, tiny pieces of resin or solder or dust would be left behind after the etching process and would rattle around in their metal cases and cause the transistors to perform erratically, or not at all. Something was needed that was small and worked perfectly.

  The moody, solitary, and austere Jean Hoerni came up with the idea of using a coating of silicon oxide, grown on top of a pure silicon crystal, as an integral part of the transistor, an insulator, one that left no hills or valleys, no mesas, to give the resulting device unnecessary bulk. His creation, he insisted, would be very much smaller than a mesa transistor, and more reliable. To prove his point, he had a technician create a prototype that was no more than a dot, just a millimeter across, and then dramatically spat on it to show that even such human misbehavior would not interfere with its working. It performed flawlessly. It was tiny, it worked, and it seemed well-nigh indestructible—or, at least, immune to insult. It was also cheap, and in consequence, it became Fairchild’s signature product from almost that moment on.

  It was, however, just one of two Fairchild game-changing products. The other was born of an idea that was doodled on four pages of a company notebook* by another of the refugees from Shockley, a man named Robert Noyce. His thought was that, now that planar transistors were about to become a reality, might it not be possible to put flattened versions of the other components of a full-fledged electrical circuit (resistors, capacitors, oscillators, diodes, and the like) onto the same silicon oxide covering of a silicon wafer? Could not the circuitry, in other words, be integrated?

  If it could, and if every component were now tiny and, most important, virtually flat, then could the circuits not be printed onto the silicon wafer photographically, employing the same principle that was used in photographic enlargers?

  The principle of a darkroom enlarger formed the basis of the idea. An enlarger took the negative of a tiny piece of celluloid imagery (say, 35 mm film from a camera) and used its lenses to make a very much bigger version of the image, or an edited part of the image, and print it on light-sensitive paper. This same principle, Noyce wrote in his notebook, could surely be used backward. A designer could draw onto some transparent medium a large diagram of a piece of integrated circuitry and then, using a device much like an enlarger, but with its lenses refashioned to make its images not bigger but very much smaller, the image could be printed, as it were, onto the silicon oxide of the wafer.
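
  Noyce’s reversed enlarger is, at bottom, division by the optics’ reduction ratio, and the finest feature such optics can print is set by the classical diffraction limit. The sketch below works through both; the 4:1 reduction ratio is typical of modern scanners such as ASML’s, and the k1 process factor of 0.4 is an illustrative assumption of mine, not a vendor specification.

```python
# The reduction-printing arithmetic behind Noyce's reversed enlarger,
# together with the Rayleigh criterion that limits how fine a feature
# the optics can resolve: CD = k1 * wavelength / NA. The 4:1 reduction
# ratio and k1 = 0.4 are assumptions for illustration.

REDUCTION_RATIO = 4.0   # mask features drawn 4x larger than printed

def printed_feature_nm(mask_feature_nm: float) -> float:
    """Size on the wafer of a feature drawn on the mask."""
    return mask_feature_nm / REDUCTION_RATIO

def rayleigh_limit_nm(wavelength_nm: float, numerical_aperture: float,
                      k1: float = 0.4) -> float:
    """Smallest resolvable feature for a given light source and lens."""
    return k1 * wavelength_nm / numerical_aperture

print(printed_feature_nm(64.0))        # 64 nm drawn on the mask -> 16 nm printed
print(rayleigh_limit_nm(193.0, 0.93))  # deep-ultraviolet optics: ~83 nm
print(rayleigh_limit_nm(13.5, 0.33))   # EUV light at NA 0.33: ~16 nm
```

  It is this arithmetic, pushed from visible light to deep ultraviolet and finally to the 13.5-nanometer extreme ultraviolet of the machines bound for Chandler, that has kept the principle Noyce doodled in his notebook alive for six decades.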