In the week since Google introduced Buzz, the most interesting thing about the fiasco has been watching the company. For an organization as risk-averse and PR-aware as Google, a public failure offers insight that can’t be gleaned from watching daily operations. As Google attempts to fix the problems and move the conversation onward, I thought I might reflect on some of the teachable elements of this event.
First, a little bit of back story. As part of my fellowship at the School of Information and Library Science, I teach a course about social network sites. Each week, I sit down with my students to discuss the social, legal, ethical and privacy implications of social network sites, among other things. Potentially noteworthy is that my course doesn’t spend a lot of time on social network science – graph theory, quantitative analysis of networks, etc. Rather, we concern ourselves with the interaction of people with social technology at large scale.
In our readings and discussions, we’re often challenged to think about how people present themselves in technology. When you create a profile on a social network site, or share a stream of tweets, you’re essentially creating a representation of an identity. As we’ve seen time and time again on Facebook, we run into problems when identities collide during “context collapse” – when people from a different segment of your life view an identity you’ve constructed for your friends.
Taken one way, it could be argued that this problem of separate identities reveals some sort of fundamental character flaw: “Why aren’t you the same person to everyone?” As Google CEO Eric Schmidt put it, “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” It is the intersection of technology and philosophies like Schmidt’s that is causing companies like Google and Facebook to stumble again and again, creating “privacy nightmares.”
Many of the readings in my class are influenced by Erving Goffman’s theories of identity and interaction. Goffman, the legendary Chicago-school sociologist and former ASA president, elaborates in rich detail the process of social interaction in his books The Presentation of Self in Everyday Life, Behavior in Public Places, and Interaction Ritual. In essence, Goffman argues that identity and interaction are performative, a concept that maps very well onto social network sites. By “creating” identities, we’re not living dual lives, but rather engaging in a well-established performance of identity that lets us share the proper “front” in context. We act differently on LinkedIn and Facebook because these sites have contextual norms, not because we’re duplicitous.
At the beginning of each semester of my class, I tell my students that they’re going to leave with a skillset that helps them negotiate human interaction with social technology. I’ve sat up at night, pondering the value of such a skillset. More than anything, the Buzz fiasco has driven home the point that we need interdisciplinary information professionals who can work with teams to negotiate the social implications of their tools. These are the students I’m working with, and I wonder how differently Buzz would have rolled out if their voices had been brought to the table.
The builders of social technologies are challenged to manage the relationship between technical affordance and what is, for lack of a better term, human inertia: the tendency of people to act like people. As Google Buzz engineers attempted to reconfigure our notions of a social group (work/friends/romantic/etc. was collapsed to “most frequently contacted”), they ran smack into human inertia. Even if Google’s algorithms have figured out a more efficient way for us to group the people we know, it was simply too much to ask us to configure ourselves to the technology.
By fabricating new social groupings, Google ran head-on into Facebook’s biggest problem – that of context collapse. When we merge social groups together, we are challenged to manage our disclosures across these groups, which have different norms of propriety. How is it possible that Google didn’t see the potential problems of such context collapse at scale? I’d like to offer a potential answer.
If you read a history of Silicon Valley (such as Katie Hafner’s or Michael Hiltzik’s), you’ll notice a theme of interconnection. Silicon Valley’s tech economy is a dense series of highly entrepreneurial networks, where employment is characterized by acceptance of failure and short tenures. The work of AnnaLee Saxenian reveals this trait as fundamental to the Valley’s success: ideas are gestated frequently, and teams assemble rapidly through the uncharacteristically large networks of oft-moving tech employees. As good as this is for innovation, it is bad for the development of a social networking site.
Working in Silicon Valley is a classical embeddedness problem. If you work in the Valley, it is likely that many of the people you know share similar traits. They work at the same company as you, think about similar problems, and went to similar schools. Such homophily is beneficial for allowing entrepreneurial teams to assemble quickly, but it is bad for finding heterogeneous opinions. Consider the case-in-point of the Google Buzz test – it was rolled out initially to Google’s 20,000 employees. These employees – similar on many traits, richly compensated, cognizant of privacy – are different in key ways from the rest of the Buzz ecosystem. Perhaps the homophily of the test base explains why devastating edge cases weren’t designed for, or perhaps groupthink shouted such possibilities down. Either way, this is an important lesson about the pervasive problems of homophily when designing privacy systems.
While involving interdisciplinary information professionals like the ones I train in the design process would be a good step forward, it is easier said than done. Just as Silicon Valley engineers collide with human inertia, the Valley has its own inertia of bigger, better, and faster. Introducing the human perspective into such a culture is an ongoing and challenging problem (see the work on Values in Design). Right now, the market (and the opinion-sphere, to a lesser extent) regulates and acts as the proxy for human problems with systems. I’d like to think that by introducing informed, professional voices to the discussion, we can move beyond this reactionary approach to privacy. Perhaps Buzz is the case that moves this discussion forward.