A few days ago, I stumbled upon a post from the blog Business Insider that asked “Why Is Twitter More Popular With Black People Than White People?” Drawing on data from Edison Research, the writer proposed a number of explanations for why “black people represent 25% of Twitter users, roughly twice their share of the population in general.” This factoid has now been reported by the New York Times, the San Francisco Chronicle, and The Atlantic, as well as a number of prominent blogs. It’s also going viral in the Twittersphere.
I’m loath to trust bloggers to get survey data right, so I requested a copy of the report from Edison Research (available here). At first glance, the data look good – the research was conducted by Arbitron, it employs a landline/mobile random digit dialing (RDD) frame, and about 1,750 people age 12 and older were interviewed. “National probability” studies of this sort are generally considered valid for population estimates.
Without getting into too much detail, a study’s validity depends on the sampling method and sample size (among many other things). In terms of method, RDD is not a true equal-probability-of-selection method, but both industry and academia consider it “good enough” when the sample is weighted to known totals. As for size, a sample of 1,750 people allows us to make claims about a large population at an error rate of about plus or minus 3 percent.
Let’s cut to the chase: Where did the Edison Research interpretation go wrong? In the report, Tom Webster states:
The percentage of Twitter users who are African-American currently stands at roughly 25%, which is approximately double the percentage of African-Americans in the current U.S. population. Indeed, many of the “trending topics” on Twitter on a typical day are reflective of African-American culture, memes and topics.
From this, we are to believe that of all Twitter users, 25% are African-American. Not only is this surprising considering current population estimates, but also because Twitter is a global service. Let’s explore how Edison got to this 25 percent number (conveniently rounded up from 24 percent).
In the phone interview, Edison asked all respondents 12+ (n=1750) if they “currently ever use[d] Twitter.” 7% of respondents said yes, approximately 123 people. Of those 123, Edison then asked how often they used Twitter. 85% of those respondents (105 people) indicated they used Twitter at least once a month, and were thus recoded as “Monthly Twitter Users.” Herein lies the problem: It was from these 105 individuals (not the 1750 total respondents) that Edison based its estimates of Twitter use.
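To make the subsample arithmetic explicit, here is the chain from Edison’s reported percentages down to the 105 people doing all the work (a sketch; the respondent counts are approximations derived from Edison’s rounded figures):

```python
# Trace Edison's subsample arithmetic from the reported figures.
total_respondents = 1750  # full RDD sample, age 12+
twitter_users = 123       # ~7% of 1,750 said they "currently ever use" Twitter
monthly_users = round(twitter_users * 0.85)  # 85% use it at least monthly

# Every demographic estimate about "Twitter users" rests on this group.
print(monthly_users)
```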
Let’s return to sampling error. Because random samples are asymptotically efficient, a sample of 1750 can speak to a population of hundreds of millions almost as well as a sample of 2000, 3000, or even 5000. But a sample of 105 people speaking to the very large userbase (self-reported at 100 million) of Twitter? Not so efficient. The margins of error are approximately +/- 10% at an alpha of .05, and +/- 12.5% at an alpha of .01. And these margins assume true equal probability of selection and no nonresponse bias. With weighting for proportionality, it is almost certain these margins increase substantially (1).
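Those margin-of-error figures can be reproduced with the standard conservative formula z·√(0.25/n) – a sketch assuming simple random sampling, worst-case p = 0.5, and no design effect (the report’s own figures, adjusted for weighting, would be wider):

```python
import math

def margin_of_error(n, z):
    """Conservative margin of error for a proportion,
    using worst-case variance (p = 0.5), simple random sampling."""
    return z * math.sqrt(0.25 / n)

# z = 1.96 for alpha = .05, z = 2.576 for alpha = .01
moe_05 = margin_of_error(105, 1.96)    # ~0.096, roughly +/- 10%
moe_01 = margin_of_error(105, 2.576)   # ~0.126, roughly +/- 12.5%
moe_full = margin_of_error(1750, 1.96) # ~0.023 before weighting/design effects

print(round(moe_05, 3), round(moe_01, 3), round(moe_full, 3))
```

Note how the margin roughly quadruples going from the full sample of 1,750 to the subsample of 105.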
Let’s explore what this means practically. First, Edison Research can’t speak to all Twitter users, because all Twitter users weren’t potentially included in the sample. Edison can, however, speak to USA Twitter use, from its sample of 105 monthly users. If we assume that only 5 million Twitter users in the USA use the service every month, Edison is still using 105 people to speak about these 5 million people (the margins of error don’t change). Unfortunately, this is highly unreliable.
The American Community Survey finds that approximately 13.1% of the US population self-identifies as Black or African American. At an alpha of .05, the range of potentially true estimates of African-American Twitter use in the US is actually anywhere from 14% to 34%. At an alpha of .01, this estimate ranges anywhere from 11% to almost 38% – an interval that includes the 13.1% benchmark, meaning we can no longer rule out that the difference is attributable to sampling error or random effects. If we then include weights in our estimates of error (likely necessary, because Edison’s sample over-represents people under 24), the growth in error causes us to fail to reject the null hypothesis at the .05 level as well. We just can’t trust that the demographics of Twitter actually do vary from current population estimates.
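The logic of the two intervals boils down to a containment check: does the ACS benchmark of 13.1% fall inside the confidence interval around Edison’s 24% estimate? Here is a sketch using the same conservative p = 0.5 formula as above, with no weighting adjustment (Edison’s exact bounds may differ slightly):

```python
import math

def ci(p_hat, n, z):
    """Conservative confidence interval for a proportion,
    using worst-case variance (p = 0.5)."""
    moe = z * math.sqrt(0.25 / n)
    return (p_hat - moe, p_hat + moe)

benchmark = 0.131  # ACS share identifying as Black or African American

lo05, hi05 = ci(0.24, 105, 1.96)   # alpha = .05
lo01, hi01 = ci(0.24, 105, 2.576)  # alpha = .01

# At .05 the benchmark falls outside the interval (difference "significant");
# at .01 it falls inside, so sampling error alone could explain the gap.
print(lo05 <= benchmark <= hi05)
print(lo01 <= benchmark <= hi01)
```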
Is Twitter “disproportionately” African American, White, Hispanic, or Green? The simple fact is that from this data, we can’t say so with confidence. If Edison had been a little more forthcoming with its sample sizes, the bloggers and journalists who reported these data might have sensed something was wrong. But I wouldn’t bank on it, because it seems like Edison Research was pushing this spin from the get-go.
A final note: as I was researching/considering this piece, it was interesting to see the “spin” being placed on this “fact” around the blogosphere. Of course, you had your standard racist comments/tweets of the “there goes the neighborhood” variety, but there also appeared to be a large swath of users who were heralding this as a point of pride. Before you examine my subconscious racist motives for examining this question, please just know I like getting surveys right. And if Edison wanted to get this right, they could start by giving us a topline cross-tab of ethnicity, Twitter use, and the respective margins of error.
Ugh, footnotes on a blog!
1. Research consistently demonstrates a negative correlation between age and nonresponse; young people are more likely to under-respond, which increases the weight they receive when the sample is adjusted to population totals (and inflates their margins of error). Research is mixed on the relationship between ethnicity and nonresponse.
On Tuesday, the Pew Internet and American Life Project released a new, must-read report on Teens and Mobile Phones. The project was a collaboration between Pew and the University of Michigan’s Communication Studies department, and it involves some of the top researchers of teens and technology (Amanda Lenhart, Richard Ling, Scott Campbell and Kristen Purcell).
In addition to releasing the great report, Pew did something new by simultaneously releasing the data sets used in the report (if I’m not mistaken, they’re usually embargoed a few months). As someone who pays very close attention to Pew’s research, I was very pleased to see this – if I have questions or want to explore something further, I could go right to the data.
One of the questions in the Pew report was a modification of the General Social Survey’s (GSS) “discussion networks” question. Questions of this sort ask individuals to report how many people with whom they can discuss personal matters, which seems to be a good proxy for one’s close, supportive network. Using the GSS data, Peter Marsden found in 1987 that Americans, on average, have three discussants. Replicating the analysis in 2006, McPherson and colleagues found that discussion networks had shrunk to an average of two. There’s been plenty of criticism of the measure (my favorite being Peter Bearman’s “Headless Frogs” paper; see also Fischer, 2009). Most recently, Hampton and colleagues explored the effect of technology on discussion networks in a great Pew report entitled Social Isolation and New Technology.
One of the great promises of “social technologies” is that they connect us to important others. By participating in a social network site, for example, we’re able to keep in touch with a broader range of diverse contacts. Critics are quick to point out that all those ties may be meaningless; in research, we draw distinctions based on tie strength. Ellison and colleagues have demonstrated that use of Facebook among undergraduates increases a form of bridging (weak-tie) social capital. The “important matters” question, on the other hand, is more reflective of bonding (strong-tie) relations. Therefore, we can use Pew’s new data to explore the relationship between use (and intensity of use) of social technologies and a teenager’s strong-tie supportive network.
First, some important notes. From here on, I am going to be talking about novel data analysis. This is a blog post, so I am going to keep the reporting informal. If you wish to explore my analysis, or re-run it, I have included a zip file that contains the questionnaire, data, reasonably commented do-file, and output log. Sorry, R fans, Stata wins for survey analysis; these files are compatible with Stata 11. The analysis I’ll be talking about is weighted (individuals as PSU, using PSRAI’s omnibus weight). The dependent variable is an overdispersed (mean≈5, variance≈10) count, so the proper regression is negative binomial (confirmed with an LR test on alpha). Finally, the question explored in this analysis is not a direct match to the GSS question; it is actually quite different (the GSS uses a name generator). Therefore, the results are not directly comparable, but they are likely informative. See the Pew report methodology section for a full description of the sample.
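The do-file handles this diagnostic in Stata; as a language-neutral illustration of why a count with variance well above its mean calls for negative binomial rather than Poisson, here is the same check in Python on made-up counts (these are not Pew’s data):

```python
# Illustrative count data only -- NOT the Pew responses.
counts = [0, 0, 1, 2, 2, 3, 3, 4, 4, 5, 5, 5, 6, 6, 7, 7, 8, 9, 11, 12]

n = len(counts)
mean = sum(counts) / n
variance = sum((x - mean) ** 2 for x in counts) / (n - 1)  # sample variance

# Poisson regression assumes variance == mean; variance well above the
# mean signals overdispersion, motivating the negative binomial model.
overdispersed = variance > mean
print(round(mean, 2), round(variance, 2), overdispersed)  # mean 5, variance ~11
```

In Stata this corresponds to fitting nbreg and running the likelihood-ratio test of alpha = 0 against the Poisson model.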
Teenage Discussion Networks
For the Teens and Mobile Technology study, interviewers spoke to 800 teens age 12-17, asking a range of questions about technology use. Included in the questionnaire was the question about discussion networks. In this question, interviewers asked how many people the individual “feel[s] very close to and with whom you are frequently in contact to discuss various things, including your personal issues and feelings.” The mean response was a little over 5, with a standard deviation of three. The density plot is included at right.
First, I explored whether demographic and socio-economic factors were associated with the size of teenage discussion networks. Pew collected data on age, gender, family income, parents’ ethnicity, and total number of kids in the household. These variables could impact the teen’s ability to form discussion networks for a variety of reasons, so it is worthwhile to retain them as control variables. I found only one variable significant: being of “black, non-hispanic” parentage. Compared to teens of “white, non-hispanic” parentage, teens of “black, non-hispanic” parentage have a lower incidence rate of reported discussants (IRR=.8041, p=0.011, Model1.pdf).
Next, I wanted to explore the effects of internet use, social network site use, and mobile phone ownership on the size of teenage discussion network, controlling for demographic factors. I found that use of the internet, use of social network site, and ownership of a mobile phone were all positively and significantly (p<.05) associated with the size of the support network (Model2.pdf). Importantly, ethnicity remained negative and significant, indicating that teens of “black, non-hispanic” parentage do not make up the gap in the support network size due to technology use.
Of course, most teens do not use technology in isolation. In fact, Pew’s report indicates that most teens use the internet, SNS, and mobile phones in combination. Therefore, we should explore the effects of these technologies simultaneously to identify the robust contribution to the size of the discussion network. When we evaluate these simultaneously controlling for demographic factors, we find that internet use and mobile phone use no longer significantly contribute to the size of a teen’s discussion network. Use of social network sites, however, remains significant (IRR=1.142, p=.028, Model3.pdf). It appears that teens who use social network sites are more likely to report larger discussion networks. This is pretty impressive!
Before we get too excited about the promise of social network sites, let’s consider what we know about them. For most teens, the social network site represents an online space for interacting with offline friends. If use of the social network site really adds people to the core discussion network, where are they coming from? Couldn’t an alternate explanation be that individuals who are more social offline are also more social online? Pew also asked about frequency of offline socialization, and we can enter this measure as a control in our model. When we do, we see that none of the technologies remain significant, and offline interaction emerges as a significant predictor (IRR=1.074, p=.010, Model3.pdf). It turns out that teens who are more active with their friends have larger discussion networks, controlling for demographics and social technology use.
It should be noted that Pew’s report did contain a number of “technology intensity” or “differential technology use” variables (e.g., how often do you…). I included these in my exploratory analysis and none were significant, so I focused on use effects. In the study of the “social impact of technology,” there is a long history of attribution error regarding the “effects of technology.” My goals in this analysis were twofold: first, to explore a recurring question that is addressable with Pew’s data (is technology use robustly associated with larger discussion networks?); and second, to explore some alternate hypotheses for the findings (a common theme in “discussion networks” research).
What I see in this data is a manifestation of the ubiquity of technology in teenage life. If our technology is used to connect to those around us, the effects of the technology will be constrained within the social setting. What we may be seeing here is that teens who are already outgoing are more likely to use social technologies. That is, the use of the network is built into the everyday processes that would be associated with the growth of a discussion/support network. This finding is mundane, but it raises the question – how might we leverage technologies to enable less outgoing teenagers to expand their support networks?
Finally, please treat this post as a rough draft, a work in progress. The fact I feel it is acceptable to write a blog post like this is evidence I’ve been in grad school too long, so it is time to get back to my dissertation.
Ugh, citations on a blog!
Over the past two years, countless people have written to me, asking if there is a version of Freedom for Windows. I hated telling people that they couldn’t have Freedom. I’m happy to report that if you’ve got a Windows XP, Vista, or 7 computer, you too can now experience Freedom.
Want to know a little more about Freedom? Read about it in the New York Times Magazine, Salon.com, USA Today, Chronicle of Higher Education, LifeHacker, and others. I’m also quite partial to the recent article on Freedom in the Guardian that starts: “With the help of a lovely man called Fred, I’m no longer in thrall to SamCam’s cape and Guido Fawkes.”
Let me know what you think!
Last week, I wrote a number of essays critical of Twitter’s decision to provide a collection of public Tweets to the Library of Congress for permanent archiving. I argued that by taking user data and putting it into a public archive, Twitter had meaningfully restricted the privacy rights of users. Some of you agreed with my position, many didn’t; but all who commented or wrote to me helped shape my thinking. In this post, I want to provide a little more context on the nature of privacy in systems like Twitter.
Last week, I gave a talk on the dynamics of privacy in Facebook. In the research, we modeled a behavior that is increasingly pervasive in Facebook: having a friends-only profile. I want to draw attention to one slide from the talk:
In this slide, the two slopes you see are the growth of Facebook, and the proportion of UNC undergraduates with friends-only profiles. Now, the data are on different axes, and Excel is fitting the lines, but the trend is meaningful. With growth in the service we see a correlated turn towards privacy.
While the pattern I observe is only general to Facebook at UNC, other researchers have observed similar patterns of privacy behavior in other social software. For example, as Friendster scaled,
[S]o too did the diversity of the social networks represented. A growing portion of participants found themselves simultaneously negotiating multiple social groups—social and professional circles, side interests, and so on. (boyd, 2007)
With the increasing complexity of diverse audiences, individuals turned to a range of strategies to manage their privacy: multiple accounts, limiting disclosure, or simply dropping out of the service. Regarding Myspace, Caverlee and Webb (2008) reported (bold is mine):
Overall, the fraction of private profiles is increasing with time, indicating that new adopters of social networks may be more attuned to the inherent privacy risks of adopting a public Web presence. We find that women favor private profiles 2-to-1 over men, and that (perhaps, counter-intuitively) younger users are more likely to adopt a private profile than older users. We also find that the more connected a user is in the social network, the more likely she is to adopt a private profile.
And now in Facebook, our research finds a similar movement towards privacy as the service grows and networks diversify. One can only suspect that Facebook’s recent “privacy upgrades” and changes to the terms of service – which prohibit making certain information private – have something to do with this normative shift.
Looking at the data across systems, I’d like to speculate that there’s a general property at work. In a social software system, as the system grows and diversity of networks increases, so does utilization of privacy. Here’s a graph I’ve constructed illustrating the trend (larger version):
The slope is purposefully convex. In the early stages of adoption, network use is sparse, so individuals are incentivized to lower privacy, to increase the odds of finding others. As time passes and the service grows, individuals form dense, small-world clusters. At this stage, individuals are mainly connected to one another within one context, and there are minimal bridges between contexts. Therefore, individuals can afford to keep privacy low, due to minimal risk of inadvertent sharing across context. As the system expands, however, we see a turn back towards privacy as an increasing number of bridges across context are created. In this moment of context collapse, individuals erect barriers of privacy to facilitate continued disclosure. Here’s a closer look at the (simulated) networks:
By linking privacy to context collapse, I argue that mobilization towards privacy is largely a function of perceived audiences (and harms). This distinction is important because it holds privacy attitudes constant. Research, both mine and by others, has demonstrated that privacy attitudes do not necessarily predict privacy behaviors. Awareness of privacy-in-context is actually the key variable causing the dynamic shift towards privacy in social software systems.
Let’s return our attention to Twitter. What does your Twitter network look like? If you’re an average user, your network probably contains a few offline friends (many, many fewer than Facebook or Myspace) and some celebrities (your definition may vary). There may also be a few friends you’ve made on Twitter, who you don’t know offline. Chances are, the average Twitter user’s network looks like the sparse “Early Adopter” or “Small World” network.
We see evidence in cultural practice that users have sparse networks in Twitter. Going back to my notes on Alice Marwick’s AOIR ’09 talk, the culture of celebrity serves a very functional purpose for Twitterers with sparse networks, who wish to connect out of limiting contexts. “Talking” to celebrities (and finding others who talk to the celebs you talk to) is a way of escaping one’s sparse world, finding new people to follow in a known context. Hashtag culture provides further evidence that individuals are trying to talk “across” or “out” of limited contexts. If your network is sparse, turning to site-level anchors like hashtags and celebs provides a reliable stream of conversation in networks where conversation is lacking due to structural impediments.
I wonder how long these practices will need to continue. Just the other day, Twitter announced that 100 million people had created accounts. You can’t turn the news on without hearing about Twitter. A large group of people, primed on social software by Facebook, are waiting to join Twitter. And over the next year or two, they will, raising issues of context collapse, and prompting a turn toward increased privacy among early adopters.
My major problem with the Twitter/LoC agreement is that the people who will be confronted with context collapse and a growing need for privacy have lost meaningful recourse. As I argued in my last post, it becomes impossible to take back what you’ve shared, a real and useful privacy strategy. You’ll still be able to make your account private, but it seems there’s little you can do about the Tweets you sent that were archived permanently in the Library of Congress.
Why is this bad? Let’s consider a hypothetical. In 2007, Myspace had 100 million users. Myspace was growing fast, with many users signing on for the first time. Myspace users had two options for privacy: public or friends-only. And a lot more people had public profiles in 2007 than they do today. How would we feel, now, if Myspace had given all of its public profiles to the Library of Congress for permanent archive in 2007? I can only guess that a bunch of people who had public profiles in 2007 might feel a little uncomfortable about it (cue the “it’s their own damn fault” chorus).
I guess I should feel relief that if Twitter is going to do this to users, at least they are partnering with the LoC (an admirable entity). But, in reading what LoC staff is saying about this effort, I’m not comforted. Of the dataset, LoC blogger Matt Raymond writes “I’m certain we’ll learn things that none of us now can even possibly conceive.” National Archivist David Ferriero writes “What will historians be able to glean from our tweets? We can’t be sure, but it will probably be very interesting” (while also stating “Twitter is not for everyone. If you are anything like me, you don’t really care what someone had for breakfast.”) It strikes me that the Twitter archive is being treated like a novelty, promising to be an amazing treasure trove once new research methods are developed.
Maybe it’s all these years of running t-tests (developed 1908), but I’m skeptical that these Tweets are going to tell us something that we can’t quite imagine. Robust methods develop slowly, and are validated over time. We’ll probably still be doing text mining, linguistic and sentiment analysis, and content analysis 50 years from now. One area that is improving rapidly, however, is the identification of individuals in large data sets. The Netflix dataset was de-anonymized by Narayanan and Shmatikov. Acquisti and Gross demonstrated they were able to guess people’s social security numbers from public data. And old-fashioned detective work by Michael Zimmer identified the T3 Facebook dataset. Of the future, we know this: it will be easier to connect you to your archived Twitter identity.
So here’s the thing. Why won’t Twitter make the archiving a simple, opt-in process? Or at least allow people to opt out? Twitter obviously knows that giving user data to a permanent archive is different from sharing an API or allowing a Google spider – they wouldn’t have approached the LoC if this wasn’t the case. I may be the only voice shouting about this, but this is a big, watershed moment regarding user privacy. EFF, EPIC, Facebook watchdogs – where are you? Let’s work with Twitter and make this right.
“your and your friends’ names, profile pictures, gender, user IDs, connections, and any content shared using the Everyone privacy setting.”
How would this work in practice? Let’s imagine that CNN and Facebook team up. If you’re logged into Facebook and visit CNN, the website would be able to welcome you by your full name, display gender-relevant content, show you recommendations from the people in your network who also visit CNN, and so on. Going a little further, if you share your interest information, CNN might be able to dynamically display stories that match your interests.
The level of disclosure proposed in this new policy is similar (or even identical) to the information disclosure required for use of a Facebook app. The critical difference in the new policy is that while applications require an opt-in, it appears that this new process will require an opt-out. Facebook spokesperson Barry Schnitt:
“The opt-out hasn’t been built yet. We just want people to know they’ll be able to opt out. We’ve made that commitment. There will be an opt-out right when the user gets to the site, and there will be some opt-out functionality on Facebook. But as to where the button will be or how it will look, I don’t know, because they don’t exist right now.”
In theory, there will be two opt-outs. The first will be the hypothetical button that Schnitt talks about. The second will be to log out of Facebook and remove the Facebook cookie. In reality, though, if you’re a Facebook user, you can never really opt-out, because any time a Facebook friend visits a third party site Facebook will share some of your information with that site.
Although it is a good sign that Facebook has gone on record regarding privacy control, the previous comment reveals Facebook’s cavalier attitude towards privacy. Quite literally, they’re talking about pushing identity information of 400 million people around, yet privacy is treated as an afterthought – something they’ll figure out later. When will companies like Facebook and Google start bringing privacy teams in at the beginning of the design process, rather than at the end?
Shifting topics a little bit, I see this move as notable because it marks Facebook’s first foray into large-scale warehoused behavioral targeting. Targeting companies like Doubleclick (owned by Google) routinely mine our travels around the web, allowing third-party consumers to generate targeted recommendations based on our habits. Because this happens behind the scenes, we’re less likely to notice it (which doesn’t make it any less troubling). Facebook’s move stands to confront us with behavioral targeting, and they should consider the boundary they’re confronting. It may not seem to be a big thing to have a third party website welcome you by your first and last name, but it is a paradigm shift on the web.
TechCrunch argues that it is time to sharpen the pitchforks, in preparation for the major backlash against the service. Let me explain why this is frustrating. In my opinion, the role of the privacy team is to navigate the necessary tension between our freedoms to disclose and how companies can ethically and morally profit from our data. Facebook’s failures with Beacon or Google’s failure with Buzz are not “wins” for privacy; rather, they are losses for companies, consumers, and the market.
This brings me back to what is troubling about the “sharpening pitchforks” mentality. It doesn’t and shouldn’t have to be this way. Compared to Doubleclick, Facebook’s move really isn’t any more troubling – if the system is implemented properly. And if the system is implemented properly, it could be a win – for consumers, for Facebook, and for third parties. So how can Facebook navigate this challenge? Let’s start with research, sensible design, and a different style of rollout than the traditional ask-for-forgiveness-later approach Facebook seems to believe in.
At Facebook’s current size and scale, they can’t afford to get this wrong. Through research, testing, and a willingness to put the customer first, Facebook could navigate the challenges of this new feature. But make no mistake: more than anyone, they are in the bullseye right now. And if Facebook does decide to play cavalier with privacy, the mobs TechCrunch describes will be waiting.