“your and your friends’ names, profile pictures, gender, user IDs, connections, and any content shared using the Everyone privacy setting.”
How would this work in practice? Let’s imagine that CNN and Facebook team up. If you’re logged into Facebook and visit CNN, the website would be able to welcome you by your full name, display gender-relevant content, show you recommendations from the people in your network who also visit CNN, and so on. Going a little further, if you share your interest information, CNN might be able to dynamically display stories that match your interests.
The level of disclosure proposed in this new policy is similar (or even identical) to the information disclosure required for use of a Facebook app. The critical difference in the new policy is that while applications require an opt-in, it appears that this new process will require an opt-out. Facebook spokesperson Barry Schnitt:
“The opt-out hasn’t been built yet. We just want people to know they’ll be able to opt out. We’ve made that commitment. There will be an opt-out right when the user gets to the site, and there will be some opt-out functionality on Facebook. But as to where the button will be or how it will look, I don’t know, because they don’t exist right now.”
In theory, there will be two opt-outs. The first is the hypothetical button Schnitt describes. The second is to log out of Facebook and remove the Facebook cookie. In reality, though, if you’re a Facebook user, you can never really opt out, because any time a Facebook friend visits a third-party site, Facebook will share some of your information with that site.
Although it is a good sign that Facebook has gone on record regarding privacy control, the previous comment reveals Facebook’s cavalier attitude towards privacy. Quite literally, they’re talking about pushing identity information of 400 million people around, yet privacy is treated as an afterthought – something they’ll figure out later. When will companies like Facebook and Google start bringing privacy teams in at the beginning of the design process, rather than at the end?
Shifting topics a little bit, I see this move as notable because it marks Facebook’s first foray into large-scale warehoused behavioral targeting. Targeting companies like DoubleClick (owned by Google) routinely mine our travels around the web, allowing third-party consumers to generate targeted recommendations based on our habits. Because this happens behind the scenes, we’re less likely to notice it (which doesn’t make it any less troubling). Facebook’s move stands to confront us with behavioral targeting head-on, and the company should consider the boundary it is crossing. It may not seem like a big thing to have a third-party website welcome you by your first and last name, but it is a paradigm shift on the web.
TechCrunch argues that it is time to sharpen the pitchforks, in preparation for the major backlash against the service. Let me explain why this is frustrating. In my opinion, the role of the privacy team is to navigate the necessary tension between our freedoms to disclose and how companies can ethically and morally profit from our data. Facebook’s failure with Beacon and Google’s failure with Buzz are not “wins” for privacy; rather, they are losses for companies, consumers, and the market.
This brings me back to what is troubling about the “sharpening pitchforks” mentality. It doesn’t and shouldn’t have to be this way. Compared to DoubleClick, Facebook’s move really isn’t any more troubling – if the system is implemented properly. And if the system is implemented properly, it could be a win – for consumers, for Facebook, and for third parties. So how can Facebook navigate this challenge? Let’s start with research, sensible design, and a different style of rollout than the traditional ask-for-forgiveness-later approach Facebook seems to believe in.
At Facebook’s current size and scale, they can’t afford to get this wrong. Through research, testing, and a willingness to put the customer first, Facebook could navigate the challenges of this new feature. But make no mistake, more than anyone, they are in the bull’s-eye right now. And if Facebook does decide to play cavalier with privacy, the mobs TechCrunch describes will be waiting.
In the week since Google introduced Buzz, the most interesting thing about the fiasco has been watching the company. For an organization as risk-averse and PR-aware as Google, a public failure offers insight that can’t be gleaned from watching daily operations. As Google attempts to fix the problems and move the conversation onward, I thought I might reflect on some of the teachable elements of this event.
First, a little bit of back story. As part of my fellowship at the School of Information and Library Science, I teach a course about social network sites. Each week, I sit down with my students to discuss the social, legal, ethical and privacy implications of social network sites, among other things. Potentially noteworthy is that my course doesn’t spend a lot of time on social network science – graph theory, quantitative analysis of networks, etc. Rather, we concern ourselves with the interaction of people with social technology at large scale.
In our readings and discussions, we’re often challenged to think about how people present themselves in technology. When you create a profile in a social network site, or share a stream of Tweets, you’re essentially creating a representation of an identity. As we’ve seen time and time again in Facebook, we run into problems when identities collide during “context collapse” – when people from a different segment of your life view an identity you’ve constructed for your friends.
Taken one way, it could be argued that this problem of separate identities reveals some sort of fundamental character flaw: “Why aren’t you the same person to everyone?” As Google CEO Eric Schmidt pointed out, “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” It is the intersection of technology and philosophies like Schmidt’s that is causing companies like Google and Facebook to stumble again and again, creating “privacy nightmares.”
Many of the readings in my class are influenced by Erving Goffman’s theories of identity and interaction. Goffman, the legendary Chicago-school sociologist and former ASA president, elaborates in rich detail the process of social interaction in his books The Presentation of Self in Everyday Life, Behavior in Public Places, and Interaction Ritual. In essence, Goffman argues that identity and interaction are performative, a concept that maps very well onto social network sites. By “creating” identities, we’re not living dual lives, but rather engaging in a well-established performance of identity that lets us share the proper “front” in context. We act differently on LinkedIn and Facebook because these sites have contextual norms, not because we’re duplicitous.
At the beginning of each semester of my class, I tell my students that they’re going to leave with a skillset that helps them negotiate human interaction with social technology. I’ve sat up at night, pondering the value of such a skillset. More than anything, the Buzz fiasco has driven home the point that we need interdisciplinary information professionals who can work with teams in negotiating the social implications of their tools. These are the students I’m working with, and I wonder how differently Buzz would have rolled out had their voices been brought to the table.
The builders of social technologies are challenged to manage the relationship between technical affordance and what is, for lack of a better term, human inertia. That is, the tendency for people to act like people. As Google Buzz engineers attempted to reconfigure our notions of a social group (work/friends/romantic/etc. was collapsed to “most frequently contacted”), they ran smack into human inertia. Even though Google’s algorithms have likely figured out a more efficient way for us to group the people we know, it was simply too much to ask us to configure ourselves to the technology.
By fabricating new social groupings, Google ran head-on into Facebook’s biggest problem – that of context collapse. When we merge social groups together, we are challenged to manage our disclosures across these groups, which have different norms of propriety. How is it possible that Google didn’t see the potential problems of such context collapse at scale? I’d like to offer a potential answer.
If you read a history of Silicon Valley (such as Katie Hafner’s or Michael Hiltzik’s), you’ll notice a theme of interconnection. Silicon Valley’s tech economy is a dense series of highly entrepreneurial networks, where employment is characterized by acceptance of failure and short tenures. The work of AnnaLee Saxenian reveals this trait as being fundamental to the Valley’s success; ideas gestate frequently, and teams assemble rapidly through the uncharacteristically large networks of oft-moving tech employees. As good as this is for innovation, it is bad for the development of a social networking site.
Working in Silicon Valley presents a classic embeddedness problem. If you work in the Valley, it is likely that many of the people you know share similar traits. They work at the same company as you, think about similar problems, went to similar schools. Such homophily is beneficial for allowing entrepreneurial teams to assemble quickly, but it is bad for finding heterogeneous opinions. Consider the case-in-point of the Google Buzz test – it was rolled out initially to Google’s 20,000 employees. These employees – similar on many traits, richly compensated, cognizant of privacy – are different in key ways from the rest of the Buzz ecosystem. Perhaps the homophily of the test base explains why devastating edge cases weren’t designed for, or perhaps groupthink shouted such possibilities down. Either way, this is an important lesson about the pervasive problems of homophily when designing privacy systems.
While involving interdisciplinary information professionals like the ones I train in the design process would be a good step forward, it is easier said than done. Just as Silicon Valley engineers collide with human inertia, the Valley has its own inertia of bigger, better, and faster. Introducing the human perspective into such a culture is an ongoing and challenging problem (see the work on Values in Design). Right now, the market (and the opinion-sphere, to a lesser extent) regulates and acts as the proxy for human problems with systems. I’d like to think that by introducing informed, professional voices to the discussion, we can move beyond this reactionary approach to privacy. Perhaps Buzz is the case that moves this discussion forward.
Image used under CC-BY-ND, original source.
Fred Vogelstein has an interesting article in the new edition of Wired, previewing Facebook’s full-on assault on Google for targeted advertising territory. The article makes news, and includes some great (and painfully ironic) quotes from Mark Zuckerberg in which he accuses Google of contributing to the surveillance society (Pot, Kettle, Black). The article reads like a preview for the Super Bowl, with notoriously tight-lipped executives tossing bombs back and forth. Congrats to Vogelstein for successfully stoking the ire of these monoliths.
The fundamental conflict of the article lies in the comparison of the advertising products offered by the two companies. Google’s product, targeted text ads, is the single most successful product on the Internet. The tiny, unobtrusive ads have fueled Google’s dominance in multiple markets; today, 90% of Google’s revenue comes from AdSense. Facebook’s product is nascent – it is the concept that advertising works better when it is socially mediated. That is, we are more likely to click on ads, content, and links when the content is funneled through our friends. This theory is sensible, but to date, Facebook’s concept remains vaporware, with a majority of their revenue coming through traditional targeted text and banner campaigns.
Framed by Zuckerberg, the contrast between Facebook and Google is personal vs. impersonal. Of Google he states: “You have a bunch of machines and algorithms going out and crawling the Web and bringing information back. That only gets stuff that is publicly available to everyone. And it doesn’t give people the control that they need to be really comfortable.” Vogelstein writes:
Facebook CEO Mark Zuckerberg envisions a more personalized, humanized Web, where our network of friends, colleagues, peers, and family is our primary source of information, just as it is offline. In Zuckerberg’s vision, users will query this “social graph” to find a doctor, the best camera, or someone to hire—rather than tapping the cold mathematics of a Google search. It is a complete rethinking of how we navigate the online world, one that places Facebook right at the center. In other words, right where Google is now.
Personal vs. impersonal. Wouldn’t you rather get a doctor recommendation from ten of your friends than a text link? The value of peer recommendations has driven many communities, including countless bulletin boards and fora, sites like epinions and Yelp, and members-only specialist communities. The fundamental problem with monetization in Facebook’s case lies with the norms that govern the exchange of advice, particularly that the advice be truthful and unbiased. If we are to trust advice, we must know that external agents aren’t corrupting or influencing the transmission of advice. We can get advice from Facebook regarding doctors, but we won’t trust the advice if Facebook pays our friends to recommend certain doctors.
Facebook’s grand vision involves a wholly-contained world of social information that is brokered out through the web. With enough critical mass, it is argued, most of our common information needs can be answered by our social networks. As with most technological main-effect hypotheses, the formulation is suspect. Researchers of social support argue that support is more effectively derived from certain actors, that support is contextual, and so on. In a traditional model, where the people around you are the primary producers of information, your personal support network is crucial. With the advent of the Internet, however, most of us no longer exist in a traditional model where the people around us are our only support vector (1).
The reality is that Google, and other search engines, have restructured expectations regarding everyday information seeking. It is no longer good enough to simply get recommendations from a personal network when there is a vast quantity of electronic information available at one’s fingertips. You can certainly get doctor recommendations from your friends, but the online search for information about the doctor is now a natural part of the information seeking process. In this sense, Facebook is complementary, providing an important but not all-encompassing factor in our decision making process. The argument that individuals will move their information seeking to a social network, and away from the mechanistic Google, simply assumes too much. Google has already won by making itself an integral part of our everyday information seeking processes.
If Facebook (a proxy for “socially mediated search”) is a complementary and useful part of everyday information seeking, we must consider the relevance of information we get from the site. We generally assess relevance in information systems through “recall” and “precision.” In Facebook, recall is strictly bound to our known social world – the people who we have connected with. Therefore, precision is a function of how well the various others producing results match our needs. If you have 500 friends, spaced across a variety of age ranges, is it safe to assume that information you get from the network will actually be all that relevant? Our core social networks are generally homophilous, but our core social networks are very small. Expand past a certain network size and it becomes likely the interests and experience of your “friends” will vary significantly from yours.
Facebook could address this problem with friend lists, the privacy feature that compels individuals to place their friends in groups. Perhaps friend lists could be converted to interest groups (“People whose book recommendations I trust”), but the mechanics of such a process would require a good bit of intervention on the part of the user. The participation gap is also problematic – if the people who you really trust for book recommendations are not heavy users of Facebook, then it is unlikely you’ll have your information needs addressed.
Facebook could develop algorithms that look for similarity between question askers and answerers – if I ask for a book recommendation, perhaps Facebook could weight responses from people who share my stated book tastes. This compels participation and broadcast of information, one of Michael Zimmer’s new laws of social networking.
Although the debate framed by Vogelstein and Zuckerberg is Facebook vs. Google, there is actually very little opportunity for Facebook to significantly edge into Google’s core market – targeted text-link ads. Text link ads are served as a by-product of information search, which is an integral part of our everyday information seeking processes. Facebook is likely to emerge as a complement to search, and in some areas it may perform better than search, but search will remain relevant. The challenge to Facebook is to find a way to monetize their value areas without being in contravention of social norms. The challenge to Google is to get access to the wealth of personal data Facebook is collecting (and no, Google Friend Connect and all of their other terrifically lame social products will not solve this problem). For the consumer, the battle between Google and Facebook is a win-win, with the obvious exception of privacy matters.
(1) Those with “impoverished life-worlds” – limited access to information and resources – are unlikely to incorporate search engines or social networks into their everyday information seeking processes.
Saul Hansell reports on Joseph Turow’s proposal for awareness of behavioral targeting:
I’m coming to the conclusion that each advertisement on a page has to speak for itself. That’s implicit in the approach Google is taking for its new behavioral targeting system. It puts the phrase “Ads by Google” on all its advertisements. Click that link and you’ll get some limited information about Google’s targeting system and an ability to adjust some of the interests that Google is tracking.
But Google’s approach is presented in a way that glosses over what they are doing and discourages people from reading the disclosure and exercising control, says Joseph Turow, a marketing professor at the Annenberg School for Communication of the University of Pennsylvania.
Mr. Turow has developed a plan that is simpler and more comprehensive: Put an icon on each ad that signifies that the ad collects or uses information about users. If you click the icon, you will go to what he calls a “privacy dashboard” that will let you understand exactly what information was used to choose that ad for you. And you’ll have the opportunity to edit the information or opt out of having any targeting done at all.
Google Booksearch is becoming one of my go-to scholarly resources. All of the evilness aside, it is extremely useful to be able to look up a chapter or section from a book (even if that book is on the shelf in the other room). Since I manage my reading lists with Amazon, I wanted to make it very easy to look up books in Google Booksearch from Amazon. So I created the following bookmarklet:
When you’re on an Amazon product page, click this bookmarklet and you’ll be taken to the Google Booksearch results for the book. If previewing is allowed for the book, you’ll be able to leaf through it before you purchase/borrow/walk to your shelf. To install the bookmarklet, drag the booksearch lookup link to your bookmarks folder.
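The bookmarklet link itself doesn’t survive in this text, but the logic is simple enough to sketch. The following is a hypothetical reconstruction, assuming the lookup works by pulling the ten-character ASIN/ISBN out of the Amazon product URL and querying Google Booksearch by ISBN (the regex and URL format are my assumptions, not the original code):

```javascript
// Hypothetical reconstruction of the Amazon → Google Booksearch lookup.
// Amazon product pages carry the ASIN/ISBN after /dp/ or /gp/product/ in the path.
function booksearchUrl(amazonUrl) {
  const match = amazonUrl.match(/\/(?:dp|gp\/product)\/([0-9A-Z]{10})/);
  if (!match) return null; // not an Amazon product page
  return "http://books.google.com/books?q=isbn:" + match[1];
}

// The same logic, wrapped as a bookmarklet (a javascript: URL you drag to
// your bookmarks bar); it redirects the current Amazon page to Booksearch:
// javascript:(function(){var m=location.href.match(/\/(?:dp|gp\/product)\/([0-9A-Z]{10})/);if(m)location.href='http://books.google.com/books?q=isbn:'+m[1];})();
```

For a ten-digit ISBN the search lands directly on the book’s record; for a non-book ASIN the query simply comes up empty.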
Some quick notes on Booksearch: