  Though the FTC made its ruling based largely on whether the purchase would be anticompetitive, it did mention the issue of consumer privacy, observing that the issues in the merger “are not unique to Google and DoubleClick.” That conclusion demonstrated that the commission failed to perceive the admittedly complicated privacy implications that were unique in this case. For its part, Google helped foment misunderstanding by not being clear about the unprecedented benefits it would gain in tracking consumer behavior.

  In fact, the DoubleClick deal radically broadened the scope of the information Google collected about everyone’s browsing activity on the Internet. While Google’s original impetus in buying DoubleClick was to establish itself in display advertising, sometime after the process began, people at the company realized that they were going to wind up with the Internet-tracking equivalent of the Hope Diamond: an omniscient cookie that no other company could match. It was so powerful that even within Google, the handling of the gem became somewhat contentious.

  Some understanding of the way cookies work in advertising networks is required to appreciate this. When a user visits a site that carries an ad from a network like DoubleClick, the network’s ad server instructs the browser to store, or “drop,” a cookie on the user’s hard drive. That cookie enables the network to recognize a returning visitor and thus to determine what ads might be appealing, as well as which ads have already been shown to that user. Furthermore, every time the user subsequently visits a site carrying the network’s ads, the visit is logged into a unique file of all of that user’s peregrinations. Over time, the file develops into a rather lengthy log that provides a fully fleshed-out profile of the user’s interests. Thus, the DoubleClick cookie provided a potentially voluminous amount of information about its users and their interests, virtually all of it compiled by stealth. Though savvy and motivated consumers could block or delete cookies, very few knew about this possibility, and even fewer took advantage of it.
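
  To make the mechanics concrete, here is a minimal sketch in Python (not Google’s or DoubleClick’s actual code; every name in it is invented) of how a third-party ad cookie lets a single network log one user’s travels across many unrelated sites:

```python
# Minimal sketch of third-party cookie tracking. Hypothetical names only.
import uuid
from collections import defaultdict

class AdNetwork:
    """Stands in for an ad server that assigns each browser a tracking ID."""

    def __init__(self):
        self.visit_logs = defaultdict(list)  # cookie id -> sites visited

    def serve_ad(self, site, cookie_id=None):
        # First ad request from a browser: "drop" a fresh cookie.
        if cookie_id is None:
            cookie_id = str(uuid.uuid4())
        # Every later request carries the cookie back, so the network can
        # log the visit and quietly grow a behavioral profile.
        self.visit_logs[cookie_id].append(site)
        return cookie_id  # in real life, sent back via a Set-Cookie header

network = AdNetwork()
cookie = network.serve_ad("news-site.example")            # cookie is dropped
cookie = network.serve_ad("golf-gear.example", cookie)    # visit logged
cookie = network.serve_ad("travel-deals.example", cookie) # profile grows
print(network.visit_logs[cookie])  # the lengthy log of one user's travels
```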

  The information in the DoubleClick cookie was limited, however. It logged visits only to sites that ran DoubleClick’s display ads, typically large commercial websites. Many sites on the Internet were smaller ones that didn’t use big ad networks, and visits to those sites, along with the interests they revealed, weren’t reflected in the DoubleClick cookie. Millions of those smaller sites, however, did use an advertising network: Google’s AdSense. AdSense had its own cookie, but it was not as snoopy as DoubleClick’s: only when the user actually clicked on an ad would the AdSense cookie log the user’s presence on the site. This “cookie on click” process was lauded by privacy experts as far less invasive of people’s privacy than the DoubleClick variety.
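
  The difference is easy to see in sketch form (again, invented names, not production code): one policy writes a log entry on every ad impression, the other only on a click:

```python
# Contrast of the two logging policies described above. Illustration only.
def log_visit(logs, cookie_id, site, clicked, policy):
    if policy == "on_impression":            # DoubleClick-style
        logs.setdefault(cookie_id, []).append(site)
    elif policy == "on_click" and clicked:   # old AdSense-style
        logs.setdefault(cookie_id, []).append(site)

dc_logs, adsense_logs = {}, {}
visits = [("cat-care.example", False), ("politics-blog.example", True)]
for site, clicked in visits:
    log_visit(dc_logs, "user-1", site, clicked, "on_impression")
    log_visit(adsense_logs, "user-1", site, clicked, "on_click")

print(dc_logs)       # both visits recorded, click or no click
print(adsense_logs)  # only the clicked ad's site: far less snoopy
```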

  Google could have signed up as a DoubleClick customer and permitted DoubleClick to drop its cookies on sites where AdSense ads appeared. That would have made Google literally billions more dollars, since advertisers would have paid much more for the more relevant ads. But Larry and Sergey did not want Google to drop third-party cookies on its own sites. Implicit in their refusal: the practice seemed, well, evil.

  But after Google bought DoubleClick, the equation was different. Google now owned an ad network whose business hinged on a cookie that peered over users’ shoulders as they viewed its ads and logged their travels on much of the web. This was no longer a third-party cookie; DoubleClick was Google. Google became the only company with the ability to pull together user data on both the fat head and the long tail of the Internet. The question was, would Google aggregate that data to track the complete activity of Internet users? The answer was yes.

  On August 8, 2008, not long after FTC regulators approved the DoubleClick purchase, Google quietly made the change that created the most powerful cookie on the Internet. It did away with the AdSense cookie entirely and instead arranged to drop the DoubleClick cookie when someone visited a site with an AdSense ad. Before that change, when a user visited a political blog or a cat care site using AdSense, there was no record of the visit unless the user clicked on an ad. Now Google would record users’ presence when they visited those sites. And it would combine that information with all the other data in the DoubleClick cookie. That single cookie, unique to Google, could track a user to every corner of the Internet.
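
  In the same sketch terms as above (an illustration of the change the text describes, not Google’s code), the switch amounted to retiring the on-click rule and writing AdSense-site visits into the very same cookie file used on the big commercial sites:

```python
# Illustration of the change: one cookie file now receives every visit.
doubleclick_log = {"user-1": ["bigretailer.example", "newsmagazine.example"]}

def record_visit(logs, cookie_id, site):
    # After the change: any visit to a site with an AdSense ad is logged
    # under the DoubleClick cookie, whether or not an ad is clicked.
    logs.setdefault(cookie_id, []).append(site)

record_visit(doubleclick_log, "user-1", "cat-care.example")
record_visit(doubleclick_log, "user-1", "politics-blog.example")
print(doubleclick_log["user-1"])  # fat head and long tail, one cookie
```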

  The upbeat Google blog item that mentioned the change, entitled “New Enhancements on the Google Content Network,” was directed mainly to agencies, advertisers, and publishers and extolled the use of the new cookie. While the blog item did note that users could opt out of receiving the cookie and directed them to a revamped privacy policy, the posting did not explain the seismic nature of the change—that Google had unique access to what was now the web’s most powerful tracking tool.

  “Of course it was a very big deal,” says Susan Wojcicki, who as the head of the ads program was involved in the discussions. “What changed was that we were now the first person.” (As opposed to being a “third person” provider of user information to an outside party, DoubleClick.) But there was a bigger reason for Google’s change of heart. “We weren’t winning,” says Wojcicki. “Without the cookie, we weren’t making the impact on the world that you have to make to be successful.” In her view, Google had to make that step—one it had resisted earlier in part for moral reasons—so it could improve advertising and help its users.

  The powerful personal information in its enhanced DoubleClick cookie was, of course, only part of the data Google had about its users. The company also had even more intimate and comprehensive information about people from their search behavior. This information was included in the logs that were so valuable to Google in its relentless effort to improve search and run experiments. (The information did not identify users by name but by the Internet protocol, or IP, addresses they used to access Google. Those signed in to Google, though, were identifiable by name.) For privacy purposes, Google anonymized its search logs after nine months (dropping the IP address) and deleted the cookie data after eighteen months. (Originally, the anonymization occurred after eighteen months, but Google had changed it under pressure from critics and regulators.) Privacy activists believed that Google’s retention of identifiable search data for nine months was still too long. The European Union recommended six months, a standard that other search companies, including Microsoft, adopted or exceeded. But Google insisted on keeping the information for as long as a human gestation period. “We queried every engineering team to find how long they needed the data to do the things they needed, including security, ads quality, and search quality,” says Jane Horvath, Google’s chief privacy officer in North America. “The median we came out with was nine months. It’s completely central to our tools. It’s the key to our innovation.”
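
  The retention schedule itself is simple to sketch (the field names and day counts below are illustrative, not Google’s actual pipeline): anonymization and deletion are just two age thresholds applied to each log entry:

```python
# Sketch of a nine-month anonymize / eighteen-month delete policy.
from datetime import datetime, timedelta

NINE_MONTHS = timedelta(days=274)      # approximate month-to-day conversion
EIGHTEEN_MONTHS = timedelta(days=548)

def apply_retention(log_entries, now):
    kept = []
    for entry in log_entries:
        age = now - entry["timestamp"]
        if age > EIGHTEEN_MONTHS:
            continue                       # deleted outright
        if age > NINE_MONTHS:
            entry = {**entry, "ip": None}  # anonymized: IP address dropped
        kept.append(entry)
    return kept

now = datetime(2010, 1, 1)
logs = [
    {"query": "flights to paris", "ip": "203.0.113.7", "timestamp": datetime(2009, 11, 2)},
    {"query": "best camera",      "ip": "203.0.113.7", "timestamp": datetime(2009, 1, 5)},
    {"query": "forgotten query",  "ip": "203.0.113.7", "timestamp": datetime(2008, 3, 1)},
]
for entry in apply_retention(logs, now):
    print(entry)  # the recent entry keeps its IP; the middle one loses it
```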

  In any case, Google, in various places, now had the data on almost everywhere users went on the Internet and, via search, all their interests. No law prevented it from combining all that information into one file.

  Google would contend that limits do exist. It did not combine the data on its ad cookie with the personal information on its users’ search behavior, nor did it combine website visit data with the content of people’s mail and documents, or the posts they wrote on Blogger. Only the information derived from people’s browsing behavior was used to help deliver ads. When people expressed concerns about all that information residing with one company, Google would revert to its standard defense: if it betrayed consumers’ trust, its business would be irrevocably damaged. Nonetheless, a 2008 internal presentation written by a Googler who arrived through the DoubleClick acquisition proposed a road map for Google’s ad practices that indeed included ads chosen on the basis of people’s searches. “Google search,” it said, “is the BEST source of user interests found on the Internet and would represent an immediate market differentiator with which no other player could compete.” (That same presentation showed that the author was catching on to the Google way: under the rubric “wacky examples” of cookie use, he suggested a “Larry Page Ad” where the cofounder would “opt in” to a system that let users “create wacky ads that would appear on Larry’s laptop as he browses sites.” That was an idea worthy of Page himself!) When The Wall Street Journal reported on the presentation, Google dismissed it as a speculative vision statement from a junior employee.

  But while Google held off using people’s search history for ads, it did engage in an internal debate on how it might use the cookie-based information that tracked their visits to websites. The problem was how Google might implement the practice of “retargeting,” which meant showing ads suggested by a user’s browsing activities, as opposed to any purchases or other actions a user might have made on a site. According to press reports, Brin had previously been against the practice; Page had been in favor. After the DoubleClick purchase, though, it was clear that Google would indeed engage in retargeting, using the super-cookie it created in August 2008. But to distinguish its behavior from the many other companies that used similar techniques, it paired the new product with what it called a new privacy practice. As part of its interest-based advertising rollout in March 2009, Google introduced a feature that gave consumers the ability to see categories of ads they’d be shown—consumer electronics, golf equipment, etc.—and provided an opt-out escape hatch from such ads. (Presumably by seeing those categories, you’d know something about what Google knows about you, at least through your cookies.) There was even a way consumers could inform Google that they’d like to see certain kinds of ads regarding interests that an examination of their web peregrinations had yet to reveal. “We wanted to take a different twist on things, to marry relevant ads with our overall stance around privacy and transparency,” says Neal Mohan. “Everybody understands that the great content we have on the Internet is supported by advertising, so if there’s a way to make it so that the message is truly relevant, then we said let’s do that. The simplest way was literally asking individual viewers what they would like to see.”
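
  A minimal sketch of that interest-based model, with invented names throughout: the inferred categories are visible to the user, who can remove them, volunteer interests that browsing has not yet revealed, or opt out entirely:

```python
# Sketch of interest categories with user-facing controls. Illustration only.
class InterestProfile:
    def __init__(self):
        self.categories = set()  # inferred from browsing, shown to the user
        self.opted_out = False

    def infer(self, category):
        self.categories.add(category)

    def remove(self, category):   # user deletes an inferred category
        self.categories.discard(category)

    def add(self, category):      # user volunteers an interest
        self.categories.add(category)

    def pick_ad(self, available_ads):
        # available_ads: list of (ad, category) pairs
        if self.opted_out or not self.categories:
            return "generic ad"   # no interest targeting
        for ad, category in available_ads:
            if category in self.categories:
                return ad
        return "generic ad"

profile = InterestProfile()
profile.infer("golf equipment")        # from visits to golf sites
profile.add("consumer electronics")    # an interest the user asks for
print(profile.pick_ad([("TV sale", "consumer electronics")]))  # "TV sale"
profile.opted_out = True
print(profile.pick_ad([("TV sale", "consumer electronics")]))  # "generic ad"
```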

  Google took no chances before announcing its interest-based advertising initiative, seeking feedback from regulators and privacy advocates such as the Center for Democracy & Technology and the Electronic Frontier Foundation. “Five years ago, we would’ve just launched that and we would’ve said, ‘Oh, let’s see what happens,’” Schmidt said. As a result of the planning, the press treated Google’s announcement relatively benignly, and even the voices on the blogosphere were subdued. The lack of protests startled Sergey Brin. “I was pretty skeptical it would have such a positive reaction from the press,” he told Googlers at a TGIF. “These are the kinds of things the privacy nuts take advantage of to cause paranoia.” When it was noted that one privacy group, Adbusters, was suggesting that users protest by automatically clicking every AdSense ad they encountered (thus messing with the validity of the business model), Page jokingly asked, “Don’t we make money from clicks?”

  “I don’t think that’s a good long-term strategy,” Brin said drily.

  “I like the idea of protests making us money,” Page replied, a Cheshire-cat grin on his face.

  As it turned out, Google didn’t need the protests: its interest-based advertising did very well without them. In September 2010 Google executive Vic Gundotra said that the money Google was making from retargeting was “staggering.” A month later, Google for the first time announced its revenues for overall display advertising: $2.5 billion a year and growing rapidly.

  Google’s effort to present interest-based advertising without igniting a conflagration turned out to be an increasingly rare privacy victory for the company. As people began to perceive Google less as a scrappy gang of wizards behind an uncanny search engine and more as an Information Age behemoth, they became less tolerant of all the personal information the company held about them.

  Page and Brin continued to have mixed feelings about privacy. On the one hand, they were consumed with focusing Google’s services on its users. It was almost a religious premise. But on the other, their view of what users wanted in terms of privacy differed from the views of advocates in the field. They also thought that the press often blew minor privacy glitches out of proportion. Larry Page would claim that it was utterly random which Google products got labeled as privacy invaders. “There’s a 10 percent chance of any one of them becoming an issue, and it’s not possible to predict which ones,” he says. “Oftentimes the thing that people are upset about isn’t the actual thing they should be upset about. But somebody came up with clever language, like ‘It’s spooky,’ and then that got quoted everywhere, and then everybody was saying, ‘Oh, it’s spooky.’ Based on my experience with these kinds of things, it has much more to do with what the first headline says than something where you actually have a lot of control.”

  This was not to say that Google did not spend a massive amount of time and energy thinking about privacy and implementing safeguards. Under Nicole Wong’s guidance, Google created a small infrastructure of privacy monitors. In addition to Jane Horvath, Google hired Microsoft’s former privacy czar Peter Fleischer, posting him to Paris to deal with the exacting standards of the European Union. With many products, a Google lawyer would work with the engineering team to make privacy protection part of the design. The difficulties came because of Google’s very nature: it was an Internet-based company driven to put all of the world’s information into its data centers. In addition, Google’s engineers were most often young people who had grown up with the net and had a different philosophy about what was private than the professional privacy wonks did.

  The pressures often came to a head in the regular meetings of Google’s Privacy Council, a group including policy lawyers and a smattering of executives who met regularly to discuss the privacy implications of products under development at Google. In October 2009, for instance, the discussion centered on a set of features to be added to Google Latitude, a product based on Google Maps that let users share their physical location with friends. Latitude itself was controversial, not so much because of its nature—several companies offered similar products, most with fewer safeguards than Google offered—but because it was Google doing the tracking. Only Google faced the question “You have all this information about me, and now you want to know where I am?”

  The new features upped the ante. Google Latitude could now log a user’s entire location history; turning on the feature would provide a complete visual log of everywhere you went. When Steve Lee, the Latitude product manager, gave a demo, there was a collective sucking in of breath: overlaid on a Google Map were his peregrinations on October 5, just two days earlier. There was a thick red line from Mountain View to Berkeley, with balloon-shaped “bread crumbs” marking the points where his GPS-equipped phone had pinged Google’s servers every five minutes to report his location. Apparently, he had gone on a late-night trip: 11:50 P.M. Charles Street, Mountain View … 11:55 Huff Street MV … 12:00 Shoreline Boulevard MV …
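
  The mechanics behind the demo are straightforward to sketch (an illustration, not Google’s code; the coordinates are invented): an opted-in phone reports a timestamped point every five minutes, and the stored points can be replayed as a day’s trail or deleted:

```python
# Sketch of opt-in location history with five-minute pings. Names invented.
from datetime import datetime, timedelta

class LocationHistory:
    PING_INTERVAL = timedelta(minutes=5)

    def __init__(self):
        self.enabled = False  # strictly opt in
        self.points = []      # (timestamp, lat, lon) "bread crumbs"

    def record_ping(self, timestamp, lat, lon):
        if self.enabled:
            self.points.append((timestamp, lat, lon))

    def trail_for_day(self, day):
        return [p for p in self.points if p[0].date() == day]

    def delete_all(self):
        self.points.clear()   # a "real delete" must also purge the servers

history = LocationHistory()
history.enabled = True        # the user explicitly opts in
t = datetime(2009, 10, 5, 23, 45)
for lat, lon in [(37.39, -122.08), (37.40, -122.07), (37.42, -122.08)]:
    history.record_ping(t, lat, lon)
    t += LocationHistory.PING_INTERVAL
print(history.trail_for_day(datetime(2009, 10, 5).date()))  # the night's trail
```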

  The program had a handful of key privacy safeguards, some of which had been added after meetings with the Electronic Frontier Foundation, the Center for Democracy & Technology, and a group devoted to preventing domestic abuse. The product was strictly opt in: Latitude users had to sign up for the program. When they did, they would receive regular email warnings specifying exactly what would happen if they signed up. Even after that, their computer screens would regularly sprout dialog boxes warning that location information was being stored. Only a dead person could miss the opportunities to opt out after she’d opted in. And you could delete the location information at any time.

  “Is it a real delete?” Nicole Wong asked Lee, wanting to make sure that it was a case where the information would be gone not only from the user’s perspective but from Google’s data centers as well.

  “We have a full expectation it will be a delete,” Lee assured her, ideally within an hour after the request. If the data somehow lingered, a human being at Google would get a red flag to follow up and make sure that the information was gone. Nonetheless, Peter Fleischer was troubled. He considered a big part of his job to be pushing against the enthusiasm of engineers, who were commonly thrilled by new data-driven projects. As he listened to the description of the feature, he became worried less by what Lee was describing than by what regulators and the technically naïve population might think when the program was described to them. “What can we do to make this palatable for the much larger group of users who say, ‘Google, where are you going?’” he asked. “Even Google Latitude itself, which is impeccable in privacy policy, is a lightning rod. I just find it really weird that we would keep this stuff for a bunch of teenagers who don’t know what they’re doing.”

  Lee explained that people, particularly younger users, liked the ability to use metrics to track their location. The idea was to keep a virtual diary of where you had been, maybe retaining it for a lifetime. Young citizens of the digital age understood this. “People who are going to sign up for this are people who are comfortable to have their information shared and stored,” he said.

  Nicole Wong didn’t get it. “If I’m a normal user, what am I using my location for?”

  “It’s cool,” said Lee.

  “I’m not into cool,” she replied.

  Ultimately, a few more minor privacy safeguards were built in, and Google launched the new feature—with virtually no critical outcry. The sanguine reaction seemed to back up Page’s claim that you couldn’t predict which products would blow up in your face.

  One product in particular, however, had already emerged as Google’s most troublesome, almost a symbol for the disconnect between Google’s goals and the now-global concerns regarding Google’s intrusiveness. That was Google Street View, an outgrowth of Google Maps. Its purpose was to show users what a location looked like as if they were teleported into the physical realm and plopped on the ground in front of the address they were searching for. The feature was of a piece with less commercial Google Earth additions such as Google Moon, Google Mars, and Google Sky. Unlike their earthbound counterparts, those couldn’t be easily monetized—when virtually navigating the moon and the constellations, one is unlikely to be directed to the nearest dry-cleaning or fast-food establishment—but they did fit into Google’s bigger vision as the dominant repository of not just the world’s information but the universe’s.