Perversely, the most enduring consequence of the 1970s belief that energy supplies were running out was not to use less, but to look for more. In this quest, Jimmy Carter, arguably the most ecologically minded president in U.S. history, endorsed policies that today seem like environmental folly. Notably, his administration sought to offset the approaching decline of oil and gas by tripling the use of coal, a much dirtier fuel. Just as peak oil had provided justification for foreign-policy misadventures in the 1920s and 1930s, it proved a friend to Big Coal in the 1970s and 1980s. Meanwhile, oil firms found so much crude that by the end of the 1990s real prices had fallen to half—sometimes a fifth—of what they were during Carter’s day.
Central to the misunderstanding was the concept of a “reserve.” Both Hubbertians and McKelveyans agree that an oil reserve is a physical stock: a finite pool of hydrocarbon molecules. To Hubbertians, the implication is clear: pump out too much and you will eventually empty it. How long you can pump depends primarily on the size of the pool. To McKelveyans, though, what matters most is not the size of the pool, but the capacity of the pump.
The reason for this apparently counterintuitive belief is that a petroleum reserve is not, in fact, a subterranean pool, like the underground lake where Voldemort conceals part of his soul in the Harry Potter books, but rather an imprecisely defined zone of permeable, sponge-like rock that has petroleum in its pores. (A reserve can also occur in thin sheets between layers of shale.) Nor is petroleum a uniform substance, a black liquid like the inky water in Voldemort’s lake. Instead, it is a crazy stew of different compounds: oil of various grades mixed with ethane, propane, methane, and other hydrocarbons. These range from the purely gaseous (methane, or natural gas) to syrupy liquids (crude oil) to semi-solids (the petroleum precursors sometimes called tar sands, for example). Squashed into stone deep underground, this jumble of glop, goo, and gas is usually under great pressure. Layers of impermeable rock prevent it from seeping to the surface. When drilling bores through the caps, the pressurized liquids and gases shoot up in orthodox gusher fashion.
How much can be extracted depends on how deep the drilling operation can probe, the composition of the regions it can reach, which of the different compounds in that area it can handle, and—a key variable—whether the current price justifies the required effort. If a company’s engineers develop new equipment that can pump out more petroleum at a lower cost, the effective size of the reservoir increases. Not the actual size—its physical dimensions—but the effective size, the amount of oil and gas that can be extracted in the foreseeable future.
An often-cited example is the Kern River field, north of Los Angeles. From the day of its discovery in 1899, it was evident that Kern River was rich with oil. Pithole-style, wildcatters poured into the area, throwing up derricks and boring wells. In 1949, after fifty years of drilling, analysts estimated that just 47 million barrels remained in reserves—a triviality, a finger snap, a rounding error in the oil business. Kern River, it seemed, was nearly played out. In reality, the field was still full of oil. But what remained was so thick and heavy that it almost didn’t float on water. No method existed for sucking such dense stuff out of the ground.
Petroleum engineers in the 1970s figured out how to extract it: shoot hot steam down wells to soften thick oil and force it from stone. At first, the process was hideously inefficient: boiling the water to produce the steam required up to 40 percent of the oil that came out of the wells. Because companies were making steam by burning unrefined crude oil at the wellhead, they released torrents of toxic chemicals into the air. But this wasteful process squeezed out petroleum that had seemed impossible to reach. Over time engineers learned how to use steam with less waste and pollution. By 1989 they had taken out another 945 million barrels from the Kern River field. That year analysts again estimated Kern reserves: 697 million barrels. Technology continued to improve. By 2009 Kern had produced more than 1.3 billion additional barrels, and reserves were estimated to be almost 600 million barrels.
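A back-of-the-envelope tally, using only the figures above, shows how badly the estimates trailed the drill bit:

\[ 945 + 1{,}300 \approx 2{,}245 \text{ million barrels produced after 1949} \approx 48 \times \text{the } 47 \text{ million barrels then thought to remain.} \]

And even after all that pumping, the roughly 600 million barrels still counted as reserves in 2009 came to more than ten times the 1949 estimate of what was left.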
Meanwhile, the industry was learning how to tunnel deeper into the earth, opening up previously inaccessible deposits. In 1998 an oil rig at a field adjacent to Kern River bored thousands of feet beyond any previous well in the area. At 17,657 feet, it blew out in a classic gusher. Oil and gas shot three hundred feet in the air, caught fire, and destroyed the well. Energy firms guessed that the blowout indicated the presence of undiscovered oil and gas deposits far underground. Investors rushed in and began to drill. They indeed found millions of barrels of oil at great depths, but it was mixed with so much water that the wells flooded. Within a few years, almost all the new rigs ceased operation. The reserve vanished, but the oil remained.
Stories like that of Kern River have occurred all over the world for decades. After hearing them over and over again from geologists, I realized Hubbert and Limits were going about matters the wrong way. An oil reservoir in the earth is a stock; what industry and consumers depend on is the flow of petroleum drawn from it. If the oil in one reservoir becomes too costly or difficult to extract, people will either find new reservoirs, develop new techniques to extract more from old reservoirs, or devise new methods to accomplish the same goals with less. All of this means that the situation constantly changes, which in turn means that we can see only a limited distance ahead.
“It is commonly asked, when will the world’s supply of oil be exhausted?” wrote the MIT economist Morris Adelman. “The best one-word answer: never.” On its face, this seems ridiculous—how could a finite stock be inexhaustible, when even a constantly renewed flow can run out? But more than a century of experience has shown it to be true. As a practical matter, we know only that there is more than enough for the foreseeable future. That is, fossil-fuel supplies have no known bounds. In economic terms, this means they are infinite. Hardly anyone who is not an economist believes this, though.
“Not a Commodity to Be Bought or Sold”
It was a time when a new electronic communications network had suddenly made it possible to transmit data around the world at almost the speed of light. Fashions spread from one corner of the earth to another, then vanished. A bold new breed of super-rich entrepreneurs was launching enormous technological enterprises. Media empires were rising and falling.
This was the 1870s; the electronic network was the telegraph. The “Victorian Internet,” as the writer Tom Standage has called it, “revolutionized business practice, gave rise to new forms of crime, and inundated its users with information.” The super-rich entrepreneurs were laying submarine cables across the English Channel, the Mediterranean, and the Atlantic. Completion of the first transatlantic cable had been greeted by a jubilant parade in New York City. Its insulation failed rapidly. Similar problems plagued the other undersea cables. The future was being held back. Innovations in cable technology were required.
One of the engineers developing the next generation of cables was a Briton named Willoughby Smith. Smith was supposed to test the cable as it was being laid. Looking for the right material for his tests, he tried selenium, a gray, metal-like element. Irritatingly, he couldn’t ascertain exactly how much electricity selenium let through. Sometimes it blocked the current like so much rubber, sometimes it allowed it to flow almost as freely as copper. “Pieces of high resistance at night would be only half the resistance in the morning,” Smith recalled. He eventually realized the difference was due to light. In sunlight, selenium conducted electricity; in darkness, it did not. Smith was baffled; nothing in physics said this could be possible.
Taking up the puzzle, a King’s College physicist named William Grylls Adams found something more surprising still. If Adams put a strip of Smith’s selenium in a dark room and then lighted a candle, he wrote in 1876, it was “possible to start a current in the selenium merely by the action of light.” The excited italics expressed Adams’s amazement. In all of human history, people had generated power either by burning something or by letting water or air or muscle turn a crank. Adams had created electricity by shining light at a lump of stuff.
In retrospect it seems clear that many of Adams’s colleagues didn’t believe it. Even when the New York inventor Charles Fritts actually built functioning solar panels—he spread a layer of selenium over a layer of copper and placed the assembly on his roof, thereby generating electricity—most researchers still dismissed them. “They appeared to generate power without consuming fuel and without dissipating heat,” wrote the solar historian John Perlin. Fritts’s panels “seemed to counter all of what science believed at that time.” They sounded like perpetual-motion machines. How could Adams’s “photoelectric effect” possibly be real?
Only in 1905 was the panels’ puzzling behavior explained—by Albert Einstein, a newly minted Ph.D. with a day job in the Swiss patent office. In what may have been the greatest intellectual sprint for any physicist in history, Einstein completed four major articles in the spring of that year. One described a new way to measure the size of molecules, a second gave a new explanation for the movement of small particles in liquids, and a third introduced special relativity, which revamped science’s understanding of space and time. The fourth explained the photoelectric effect.
Physicists had always described light as a kind of wave. In his photoelectric paper, Einstein posited that light could also be viewed as a packet or particle—a photon, to use today’s term. Waves spread their energy across a region; particles, like bullets, concentrate it at a point. The photoelectric effect occurred when these particles of light slammed into atoms and knocked free some of their electrons. In Fritts’s panels, photons from sunlight ejected electrons from the thin layer of selenium into the copper. The copper acted like a wire and transmitted the stream of electrons: an electric current.
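In modern notation (a standard restatement, not Einstein’s own wording), the idea reduces to a single relation. A photon of frequency \( \nu \) carries energy \( E = h\nu \), where \( h \) is Planck’s constant, and it can eject an electron only if that energy exceeds the energy \( \phi \) binding the electron to the material:

\[ K_{\max} = h\nu - \phi , \]

where \( K_{\max} \) is the greatest kinetic energy the freed electron can carry away. Light below the threshold frequency \( \phi / h \) frees no electrons, however bright it is, which is exactly the behavior the wave picture alone could not explain.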
Einstein received the Nobel Prize in 1921 for explaining the photoelectric effect. But Fritts’s invention remained a laboratory curiosity. Photovoltaic panels, as they are known today, were fascinating but useless. Converting only a tiny fraction of the sun’s energy into electricity, they were much too inefficient for any practical use. Decades of sporadic research into photovoltaics brought little improvement—something that Warren Weaver, Borlaug’s supervisor at the Rockefeller Foundation, lamented in 1949. The lack of progress was demonstrated four years later, when the Bell Laboratories physicist Daryl Chapin tested an array of selenium panels. No matter what he did, they converted less than 1 percent of the incoming solar energy into electricity. Then two of Chapin’s colleagues presented him with a surprise.
The two researchers, Calvin Fuller and Gerald Pearson, were members of the team that transformed the transistor, invented at Bell in 1947, from a finicky laboratory prototype into the mass-produced foundation of the computer industry. At the center of this work was silicon, common and inexpensive, a principal constituent of beach sand. Silicon forms crystals, each atom linked to four neighbors in a pattern identical to that formed by carbon atoms in diamonds. As students learn in high school chemistry, the atoms bond to each other by sharing their outer electrons. Silicon crystals can be adulterated—“doped,” in the jargon—by replacing a few of the silicon atoms with atoms of boron, arsenic, phosphorus, or other elements. If the added “dopant” atoms have more shareable electrons than the silicon atoms they replaced, the crystal as a whole ends up with surplus electrons that are only loosely held. Because electrons carry a negative charge, the doped crystal is awash in mobile negative charges. Similarly, if the dopant atoms have fewer shareable electrons, the doped crystal has, in effect, some electron-sized “holes,” which behave like mobile positive charges. Like the extra electrons, the “holes” are shared, meaning that they can migrate through the crystal much as the electrons themselves do.
Fuller and Pearson placed a thin layer of the first type of doped silicon (extra electrons) atop a layer of the second type (extra holes). The two Bell researchers attached the little assembly to a circuit—a loop of wire, in effect—and an ammeter, a device that measures electric currents. When they turned on a desk light, the ammeter showed the two-layer silicon suddenly generating an electric current. The same thing happened with sunlight. Fuller and Pearson realized that the photons were penetrating the top layer with enough force to knock electrons into the bottom layer, creating a flow of electrons that moved into the wire: a current. The two men had accidentally created a new type of solar panel.
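A rough number makes the mechanism concrete (the band-gap value is a standard figure for silicon, not something reported here): kicking an electron loose in crystalline silicon takes a bit more than 1.1 electron-volts of photon energy, so any photon with a wavelength shorter than roughly

\[ \lambda = \frac{hc}{E_g} \approx \frac{1{,}240 \text{ eV·nm}}{1.1 \text{ eV}} \approx 1{,}100 \text{ nm} \]

can do the job. That includes all of visible sunlight and a good slice of the near infrared, which is part of why silicon turned out to be well matched to the solar spectrum.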
When Chapin tested these novel photovoltaics, they converted about five times more solar energy to electricity than the older selenium panels. But they were still terribly inefficient. Chapin estimated the cost of silicon panels that could supply electricity for a typical middle-class home at $1.43 million (about $13 million in today’s dollars). It would be cheaper to cover the entire roof in gold leaf.
Daunted by the economics, most researchers gave up on photovoltaics until the oil shocks of the 1970s revived peak-oil fears—and hopes that the sun might be the way to escape them. The numbers seemed so overwhelming, so alluring. The sun bathes Earth with 172,500 terawatts of sunlight. (A terawatt, a trillion watts, is the biggest unit of power in common use.) About a third of this prodigious flow is promptly reflected into space, mainly by clouds. The leftover—roughly 113,000 terawatts, depending on cloud cover—is available for capture. All human enterprises together now use a bit less than 18 terawatts. In other words, the sun furnishes more than six thousand times the power produced today by all of our power plants, engines, factories, furnaces, and fires combined. Its light won’t run out for billions of years. Who cares about oil in the Middle East?
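The “more than six thousand” comparison is simple division, using the figures above:

\[ \frac{113{,}000 \text{ TW available}}{18 \text{ TW used}} \approx 6{,}300 . \]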
Hubbert was a solar champion, but the noisiest advocates were in the new counterculture, which swarmed about solar installations like bumblebees around flowering thyme. Publications like The Whole Earth Catalog and CoEvolution Quarterly extolled sun power in Vogtian terms as “soft technology”—the route to a small-scale, decentralized, individual-empowering future with technology that was “alive, resilient, adaptive, maybe even lovable.” Solar hot-water heaters! Solar heat exchangers! Solar shutters! Solar dryers! Solar buildings!—all untouched by corporate greed. (By contrast, Borlaugian-style “hard tech,” exemplified by smoke-belching coal plants, was alienating, wasteful, environmentally ruinous, and above all old-fashioned.) Solar power, proclaimed the eco-activist Barry Commoner, is inherently liberating. “No giant monopoly can control its supply or dictate its use….Unlike oil or uranium, sunlight is not a commodity to be bought or sold; it cannot be possessed.”
Sun power’s image as the province of baling-wire hippies was at odds with reality. Today’s multibillion-dollar photovoltaic industry owes its existence mainly to the Pentagon and Big Oil. The first wide-scale use of solar panels had come in the 1960s: powering military satellites, which couldn’t use fossil fuels (too bulky to lift into space) or batteries (impossible to recharge in orbit). By the 1970s photovoltaics were cheaper, but the industry had acquired only one major new user: the petroleum industry. Some 70 percent of the solar modules sold in the United States were bought to power equipment, such as warning lights and horns, on offshore drilling platforms.
Realizing that solar had become useful to their operations, petroleum firms set up their own photovoltaic subsidiaries. Exxon became, in 1973, the first commercial manufacturer of solar panels; the second, founded a year later, was a joint venture involving the oil giant Mobil. (Exxon and Mobil merged in 1999.) The Atlantic Richfield Company (ARCO), another oil colossus, ran the world’s biggest solar company until it was acquired by Royal Dutch Shell, the oil and gas multinational. Later the title of world’s biggest solar company passed to British Petroleum (now known as BP). By 1980 petroleum firms owned six of the ten biggest U.S. solar firms, representing most of the world’s photovoltaic manufacturing capacity.
Why did Big Oil keep investing in a technology with such a slow potential payoff? One reason was a new wave of peak-oil anxieties. After falling in the 1980s, worries slowly built up in the 1990s until they were publicly detonated by the British geologist Colin Campbell and the French petroleum engineer Jean Laherrère, who predicted in a widely read Scientific American article in 1998 that the oil party was almost over. “Before 2010,” the two men proclaimed, global petroleum output would permanently decline. “Spending more money on oil exploration will not change this situation….There is only so much crude oil in the world, and the industry has found about 90 percent of it.” Humankind was not running out of oil per se, they stressed. What was vanishing was “the abundant and cheap oil on which all industrial nations depend.”
As in previous oil panics, the fears spread widely. “The supply of oil is limited,” President George W. Bush told world leaders in Switzerland. Saudi petroleum is in “irreversible decline,” proclaimed the peak-oil pundit Matt Simmons in 2005. Oil baron/corporate raider T. Boone Pickens agreed; the world is “halfway through the hydrocarbon era,” he said that year. “Slowly at first and [then] at an accelerating rate,” the best-selling peak enthusiast James Howard Kunstler predicted at about the same time, “world oil production will decline, world economies and markets will exhibit increased instability…and we will enter a new age of previously unimaginable austerity.” Warnings flooded from the presses: Hubbert’s Peak (2001), Powerdown (2004), Twilight in the Desert (2005), The Long Emergency (2005), Peak Oil Prep (2006), The Post-Petroleum Survival Guide and Cookbook (2006), Confronting Collapse: The Crisis of Energy and Money in a Post-Peak-Oil World (2009).
“The price of oil was an index to the Western world’s anxiety,” the novelist Don DeLillo had suggested. “It told us how bad we felt at a given time.” If so, people were feeling rather bad; petroleum panic had taken hold as never before. The University of Maryland’s Program on International Policy Attitudes surveyed fifteen thousand people in sixteen countries: 78 percent believed that we were running out of oil. Another poll: 83 percent of Britons thought that oil and gas could become unaffordable. Another: three-quarters of Americans believed that a petroleum drought was coming. “I don’t see why people are so worried about global warming destroying the planet,” Simmons said in 2008. “Peak oil will take care of that.” Seeming to echo his admonition, the price of oil soared that year to its all-time high: $147.27 per barrel.