Kamangar said that with the profitability push on track, it was time to refocus on user growth and work on more social features. But Schmidt had another suggestion. That previous weekend, he had seen a tennis match streamed over the Internet by CBS and had been impressed by the quality of the video. That was the direction YouTube should take. “I want you to create a new kind of broadcast,” he said. “It’s so obvious what the product should be. Your goal should be to have a million quality broadcasts of … who knows what?”
Not long after that, YouTube began streaming live events, including a U2 concert at the Rose Bowl and a Barack Obama press conference. It also streamed its version of Google Goes to the Movies: a full-length showing of Taxi Driver. These were apparently the first examples of the million broadcasts Schmidt envisioned.
Earlier in that same September 2009 GPS session where YouTube had presented, Google executives had seen a demo of another television-based product, dubbed Google TV. They had okayed the project back in 2007, when a French engineer named Vincent Dureau had explained that by 2010 many television devices would be connected to broadband Internet and “Google wants to be on those devices.” Dureau’s idea was to provide a Google operating system for televisions—a sort of Android for TVs. Instead of a program guide, users would get the equivalent of a video dial tone via the Internet, directing them to a wealth of content. Google TV would initially be built into devices such as Blu-ray players, and eventually into television sets themselves, which would presumably give users instant access to the millions of high-quality YouTube channels that Eric Schmidt envisioned—all paid for, certainly, by a video equivalent of AdWords.
The ambitions of the two video-based projects were so audacious that you would never know there was a recession on. Indeed, at an October 2009 press roundtable in New York City, Schmidt would declare that, for Google at least, the economic bad times—as mild as they had been for his company—were officially over. Google was hiring again. It would also step up its rate of acquiring companies, big and small. Expect one a month. “We are increasing our hiring rate and our investment rate in anticipation of a recovery,” he said. The company would prosecute with vigor its efforts to dominate the phone world and the television world, as well as the field of software—Microsoft, Apple, and cable companies be damned.
PART SIX
GUGE
Google’s Moral Dilemma in China
1
“I feel like I shouldn’t impose my beliefs on the world. It’s a bad technology practice.”
“DO KNOW EVIL!”
That was the legend on the back of the cool black T-shirts printed by the geeks, scientists, pager-bound technicians, and former break-in artists on the Google Security Team.
But the failure to know evil—or more accurately, the failure to navigate around it without falling into its dark orbit—would come to haunt the company in its most serious moral crisis. When the revelation came that a security breach had compromised the company’s intellectual property and additional attacks had exposed the Gmail accounts of dissidents critical of the Chinese government, Google’s “China problem” became front-page news. After weeks of struggling with the issue, Google’s Executive Committee, including Schmidt, Page, and Brin, finally agreed on the most significant and embarrassing retreat in the company’s history. On January 12, 2010, they changed course in the country with the world’s biggest Internet user base, announcing an effective pullout of their search engine from mainland China.
Though the underlying issue of Google’s China pullout was censorship, it was ironic that a cyberattack had triggered the retreat. Google had believed that its computer science skills and savvy made it a leader in protecting its corporate information. With its blend of Montessori naiveté and hubris that had served it so well in other areas, the company felt it could do security better. Until the China incursion, it appeared to be succeeding.
As with other aspects of the company, Google’s security team had evolved as the enterprise grew. In May 2002, Google hired its first person dedicated specifically to protecting its operations from intruders, vandals, and thieves. Heather Adkins had wound up in the field almost by accident. She’d been a marine biology major at Humboldt State University, where she’d stumbled on computers, then switched her major to CS. It was the mid-1990s, and the Internet craze had spawned hundreds of companies desperate for engineers. Even before Adkins could graduate, the Internet company Excite lured her to Silicon Valley, where she ran its huge email system. She obtained an education in computer security on the fly and left Excite to run security for a short-lived start-up. She survived Google’s interview process to become the top security enforcer of one of the world’s most visible cybertargets. She was twenty-five years old.
Google’s existing sysops (systems operations) teams, staffed by engineers familiar with best practices in the field, had been diligent in using security software to defeat what was already a constant series of probes and outright attacks, so Adkins wasn’t facing a crisis. Instead, it was apparent that a big part of her job would be making sure that security was baked into the products and services Google would introduce. Some cyberattacks would inevitably involve not just Google’s security but the personal information of Google’s users. An early challenge came when Adkins learned about the Gmail product in development. Google would be responsible for billions of emails, loaded with personal information and confidential business materials. Adkins called for a complete design review from the security perspective. She took the entire Gmail team to an off-site meeting, and for a couple of days they whiteboarded every possible vulnerability. That began Google’s practice of working on security with engineers while projects were in the design stage. The security team also ran training sessions, including a mandatory secure programming class that every Noogler had to take, and held regular office hours where engineers could work out knotty security problems with the team.
Google’s security team grew substantially from the day Adkins arrived as employee 451. It hired three different kinds of security workers: academic computer scientists; responders, who wore pagers and were prepared to address intrusions or denial-of-service attacks instantly (for instance, a giant attack of hacker bots during the 2003 Google ski trip); and “breakers,” people whose job it was to don the mental cloak of dark-side hackers and reverse-engineer and probe Google’s systems to see if there were holes that some malefactor might be exploiting. Sometimes Google paid outside consultants—their murky résumés notwithstanding—to search for vulnerabilities. Other times, it worked with skilled amateurs who were more than happy to be paid with a T-shirt for locating a bug in Google’s software. (There was some discussion about whether the garment should read I FOUND A FLAW IN GOOGLE AND ALL I GOT WAS THIS LOUSY T-SHIRT, but the security people worried that the message might encourage even more attacks, so the shirt bore only the standard company logo.)
In 2003, the company hired Alma Whitten. She’d earned the first doctorate in computer security and human factors at Carnegie Mellon. Her focus was internal security—in part to make sure that Google’s security protocols were sufficiently easy to manage that the company’s engineers wouldn’t bypass them with shortcuts. Her job was not only to encourage a security-conscious mind-set but, in the worst case, to catch any Googlers who proved to be disloyal crooks.
“Google’s been described as sort of the inmates running the asylum,” says Brandon Downey, who works with Whitten in Security Operations. “But it’s a little more than that—it’s more like the inmates all have real guns.” There was precious information to be protected, as well as hundreds of thousands of servers that could be turned into useless junk. And then there was the looming nightmare of espionage.
Early in Whitten’s tenure, Google had been rewriting its system for handling user logs. Those logs were the crown jewels, containing precious and sometimes pernicious information about what Google users searched for and yearned for. Google increasingly wanted to use that information to improve its search and ad systems, but the company had a strict rule that no one examine the logs to glean information about any individual user.
Whitten realized that Google needed what other big information technology companies already had: an explicit policy about security. But the policy had to be Googley. So Whitten and others formed a seven-person group to hammer out a commonsense internal security policy—something written in plain English that could be described in a couple of pages.
One issue proved to be a devil for the committee. “Specifically, it was about whether Google’s physical security people would have the right to ask people to submit to a search,” she says. “The set of people who were on this team were quite uncomfortable with the idea that this would be part of the employer-employee relationship.” The people charged with physical security wanted a license to check out anyone anytime the security team’s wrongdoing antennae twitched. “I was concerned that within the corporate environment the incentives would be perverse—it would always be janitors who got searched and never the research scientists.”
This discussion dragged on for months, a glaring anomaly in a company that measures things in milliseconds. (“The analogy to childbirth was certainly mentioned a number of times,” says Whitten.) Ultimately, the group reached an arrangement that all sides could live with. A Google security officer could search employees without probable cause in issues where physical well-being was threatened—such as looking for weapons—but not to safeguard information. “You can do it to keep people safe, not to keep property safe,” says Whitten.
That was the way Google security in Mountain View would work. As crucial as security was, Google could not bear the idea that its employees could not be trusted. In accordance with best practices, there would be “reasonable audit trails,” in the words of Alma Whitten. But Google would not submit itself to a lockdown mentality. Could you really be a Googler if the company eyed you like a shoplifter and rummaged through your bag as you left?
As 2009 approached, Heather Adkins was asked about her OKRs for the coming year. “Number one is, don’t get hacked,” she said. “That’s always my first one.” She was particularly concerned with attacks from overseas. Palestinian hackers were emerging. Iran was a rising threat. But one country presented the biggest worry for Google’s security team. “Of course,” she said. “China.”
Page and Brin always saw Google as a global corporation. In the company’s first few years, Omid Kordestani established beachheads in a number of countries. But those were sales operations. In 2004, Google began to get serious about starting engineering centers overseas.
To help set them up, Google turned to a recent hire from Hewlett-Packard. Kannan Pashupathy had been schooled in his native India before traveling to the United States for graduate work at Stanford. (This is so common a biographical fact at Google that there should be a keystroke shortcut to invoke it.) Pashupathy was a deft leader as well as an engineer, and HP moved him up its organizational ladder rung by rung—to lead engineer, architect, and ultimately senior manager.
Pashupathy was just about to return to the United States after a long stint abroad when Google’s head of engineering, Wayne Rosing, recruited him. The university-like atmosphere at the Googleplex charmed Pashupathy, but an expertly baited hook by Rosing clinched the deal. “Kannan, if you’re confident about your abilities and you know you’ll be successful at whatever you do, Google is the place for you. If not, then don’t come.”
“That got me,” says Pashupathy. “It appealed to my machismo.” He arrived at Google just as the company was beginning an era of international expansion of its engineering offices. At the time, the company had only three small overseas outposts—in Zurich, Bangalore, and Tokyo. Larry Page wanted to build a hundred engineering offices in the next five years.
That was the situation in which Pashupathy found himself as a brand-new Googler sitting in a conference room in Building 43 with Larry Page, Sergey Brin, Eric Schmidt, and Alan Eustace, who had replaced Wayne Rosing after the latter’s retirement. “It was almost like it was the first time they were talking amongst themselves about how Google was going to grow as a company internationally, from an engineering perspective,” says Pashupathy. Larry Page was standing by the whiteboard, and Eric turned to him and said, “Okay, Larry, what do you want to do? How fast do you want to grow?”
“How many engineers does Microsoft have?” asked Page.
About 25,000, Page was told.
“We should have a million,” said Page.
Eric, accustomed to Page’s hyperbolic responses by then, said, “Come on, Larry, let’s be real.” But Page had a real vision: just as Google’s hardware would be spread around the world in hundreds of thousands of server racks, Google’s brainpower would be similarly dispersed, revolutionizing the spread of information while speaking the local language.
Pashupathy and Eustace worked out a plan for expansion that they would take to a GPS session for approval. They ranked countries into tiers, organized by suitability for Google engineering offices. Before the meeting, Pashupathy was warned never to bring cost into the discussion—never to talk about return on investment. He was simply to look at the talent and the user value the project would bring. “That was brand-new to me, because all my years at HP, I’d be standing on budgets, trying to cut costs.” Even so, his ambitions proved too timid for the founders. Part of his strategy was to move deliberately into the new countries. In Google’s cathedral of speed, this was a cardinal sin, and Page ripped into him for the transgression. “You’re thinking like a big-company guy,” he said. Google had become a big company by thinking like a small company.
Google began to open engineering offices overseas. As soon as Google made a decision to pursue a country, Pashupathy would go in and do a lightning round of meetings, gatherings, and interviews. “It was a very streamlined process,” he says. “Talk to a bunch of people, including government, companies, students, professors, the whole bit. And then come back and make a call of whether we were going to invest. If we were, we’d immediately look for a director.”
Some countries were natural fits. Zurich was a central location for European operations. Israel’s entrepreneurial character led Google to establish a center in Haifa as well as the more expected Tel Aviv. The Haifa office was a move to accommodate Yoelle Maarek, a celebrated computer scientist who had headed IBM’s labs in Israel. Google hired another world-class computer scientist, Yossi Matias, to head the Tel Aviv office. (In 2009, during Google’s austerity push, the company would merge the engineering centers and Maarek would depart.)
Pashupathy’s native country, India, was an obvious choice for an engineering office. But finding a director proved difficult. Eventually, an early employee, Krishna Bharat, volunteered for the job. The India offices became among Google’s most productive. “When you’re outside, you’ve got the auto rickshaws, the poverty, the honking horns … India,” says Roy Gilbert, who helped set up the offices. “And then you walk into our office in Hyderabad and it’s like you’re in Mountain View. Like any Google office around the world.” (One difference: in India, the electricity was erratic.)
Different countries presented different challenges. In India, politicians demanded penalties and censorship when users of the Orkut social-networking service, very popular in that country, hurled epithets at officials. In Thailand, the king could not be insulted. In Germany, denying the Holocaust was illegal. Generally, in cases where officials ordered Google to filter its search results, the company would push back. It was a constant struggle.
But none of those struggles compared to China.
Before Pashupathy’s time, Google’s history in China had been brief but not without tumult. In 2000, as part of its general effort to make Google search available worldwide, Google began working on a version of its flagship service in Chinese. Google was way late to the game—a year before, Yahoo had offered Chinese search and had actually opened an office in Beijing. Google would notice the originating country of a user’s Internet address and deliver its home page in the native language. All the indexes were in the United States, and Google had no operations in China itself. Substantial numbers of Chinese users began Googling, and Google’s market share rose to an estimated 25 percent; the service became the favorite among well-educated people who wanted information from outside China. This ascent came to an abrupt halt on September 3, 2002. That day, Chinese visitors who typed “www.google.com” into their browsers got only error messages. Google had been blocked by what outsiders called the Great Firewall of China—the technology behind the Chinese government’s sweeping censorship. China had recognized the Internet as a commercial necessity, but it deemed the freedom of speech the Internet offered a threat, so the country built an elaborate censorship infrastructure to block disfavored sites and pages.
The outage caught Google by surprise. But by that time Google’s leaders were accustomed to extreme reactions to their products. Making all the world’s information accessible was a fairly disruptive goal, with particularly low appeal to authoritarian regimes. “Pretty much every possible contentious political issue comes up at Google,” Brin said in September 2002, ticking off other recent conflagrations involving gun ads and neo-Nazi websites. It was Brin who made the calls on those situations. “I’ve generally been the one to do that, because you can debate these things forever,” he said. “It’s between Larry and Eric and myself, and they sort of say, ‘Sergey will take care of it.’”
Or, as Eric Schmidt told a reporter when asked just how Google determines the application of its famous unofficial motto, “Evil is what Sergey says is evil.”
The problem that September was that Google didn’t know why China had blocked its search engine or what it could do to fix things. (Brin hinted to one media source that he suspected the government had acted at the instigation of the leading Chinese-based search engine, a company called Baidu, which had begun operating in 2000.) Brin ordered a stack of books about China from Amazon.com to educate himself and asked tech luminaries with international experience for advice. Google had never established a relationship with the Chinese government, and it pulled every string it could to try to connect. The diplomacy initiative had barely begun when, on September 12, Google was mysteriously unblocked.