Ad verba per numeros
First "event": a tweet by @zephoria linking a post about Big Data and the social sciences. She hits the nail on the head with that post, and I especially liked this part (mainly because of my paper):
[...] Big Data presents new opportunities for understanding social practice. Of course the next statement must begin with a but. And that but is simple: Just because you see traces of data doesn't mean you always know the intention or cultural logic behind them. And just because you have a big N doesn't mean that it's representative or generalizable.

Amen to that!

Second "event": a tweet by @munmun10 about the bias towards publishing positive results. She links an interesting Ars Technica article describing a study on the infamous "file-drawer effect". Such a fancy name refers to researchers' tendency to report only positive results while not discussing negative ones --which, of course, can be equally important.

OK, enough, I'll talk, I'll talk.

Why do these two unrelated tweets push me to urgently describe my own paper? Mainly because it deals with negative results and lessons learned from strong assumptions about exploiting Big Data, and it gives some warnings about the different pitfalls one can find when doing Social Media research.

First of all, the abstract:
A warning against converting Twitter into the next Literary Digest. Daniel Gayo-Avello (2010). User generated content has experienced vertiginous growth both in the diversity of applications and the volume of topics covered by the users. Content published in micro-blogging systems such as Twitter is thought to be feasibly data-mined in order to "take the pulse" of society. At this moment, plenty of positive experiences have been published, praising the goodness of relatively simple approaches to sampling, opinion mining, and sentiment analysis. In this paper I'd like to play devil's advocate by describing a careful study in which such simple approaches largely overestimate Obama's victory in the U.S. 2008 Presidential Elections. A thorough post-mortem of that study is conducted and several important lessons are extracted.

The study described in the paper had been sitting in my drawer since mid-2009 because I thought its outcome made it unpublishable: my data predicted an Obama victory (good), but the margin was far too big. And by too big I mean that, according to Twitter data, Obama won Texas (bad). All of this reminded me of the (infamous) Literary Digest poll, which failed spectacularly at predicting the outcome of the U.S. 1936 Presidential Elections. Thus, I simply assumed (in 2009) that using Twitter to predict elections in 2008 was like polling car owners in 1936 to predict who would be the next POTUS. Without further ado I simply moved on.

Then, this year, three different papers appeared in a short time span:
- "From Tweets to Polls: Linking Text Sentiment to Public Opinion Time Series" by O'Connor et al.
- "Predicting Elections with Twitter: What 140 Characters Reveal about Political Sentiment" by Tumasjan et al.
- "Predicting the Future with Social Media" by Asur and Huberman.
- The one by O'Connor et al. links Twitter sentiment to public opinion (e.g. consumer confidence and presidential job approval); interestingly they did not find any strong correlation between Twitter sentiment and surveys conducted during the 2008 presidential campaign.
- The study by Tumasjan et al., on the contrary, asserts that "the mere number of tweets reflects voter preferences and comes close to traditional election polls". In fact, they were able to predict the outcome of the last German federal elections from Twitter data.
- Lastly, the paper by Asur and Huberman describes the correlation between the volume of conversation about a movie on Twitter and its earnings in the opening weekend. In fact, in a recent interview, predicting elections is described as a possible application of the same methods.
These are the main lessons extracted in the paper:

- The Big Data fallacy. Social Media are extremely appealing because researchers can easily obtain large data collections to mine. However, being large does not make such collections statistically representative of the global population.
- Beware of naïve sentiment analysis. It is certainly possible for some applications to achieve reasonable results by merely counting topic frequencies or using simple approaches to sentiment detection. However, noisy instruments should be avoided, and one should carefully check whether one is --perhaps unknowingly-- using a random classifier.
- Be careful with demographic bias. Social Media users tend to be relatively young and, depending on the population of interest, this can introduce an important bias. To improve results it is imperative to know users' ages and try to correct for the age bias in the data.
- What is essential is invisible to the eye. Non-responses can play an even more important role than the collected data. If the lack of information mostly affects just one group, the results can depart greatly from reality. Needless to say, estimating the degree of non-response and its nature is extremely difficult --if not outright impossible. Thus, we must be very conscious of this issue.
- (A few) past positive results do not guarantee generalization. As researchers we must be aware of the file-drawer effect and, hence, carefully evaluate positive reports before assuming the reported methods can be straightforwardly applied to any similar scenario with identical (positive) results.
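To make the Big Data fallacy concrete, here is a minimal simulation (all numbers and group names are made up for illustration, not taken from my study): a huge sample drawn only from the "online" subpopulation stays biased no matter its size, while a far smaller random sample lands near the true figure.

```python
import random

random.seed(42)

# Hypothetical electorate: 30% of voters are "online" and lean toward
# candidate A (70% support); the remaining 70% are "offline" (45% support).
def voter():
    online = random.random() < 0.30
    supports_a = random.random() < (0.70 if online else 0.45)
    return online, supports_a

electorate = [voter() for _ in range(200_000)]
true_support = sum(s for _, s in electorate) / len(electorate)

# Big but biased: 50,000 voters, all of them from the online group.
online_only = [s for o, s in electorate if o][:50_000]
big_biased = sum(online_only) / len(online_only)

# Small but random: 1,000 voters drawn uniformly from everyone.
small_random = [s for _, s in random.sample(electorate, 1_000)]
small_est = sum(small_random) / len(small_random)

print(f"true: {true_support:.3f}  "
      f"biased N=50,000: {big_biased:.3f}  "
      f"random N=1,000: {small_est:.3f}")
```

The biased estimate hovers near 0.70 regardless of how many online voters we add, while the small random sample stays close to the true 0.525 or so: a big N bought from the wrong population buys nothing.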
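The "random classifier" warning is also easy to sketch. Below, a toy lexicon-based sentiment classifier (the lexicon and the labeled tweets are invented for illustration) is compared against a random baseline; any tweet containing no lexicon words forces an arbitrary default guess, which is exactly where such instruments quietly degrade toward chance.

```python
import random

# Hypothetical toy lexicon; real lexicons are larger but share the weakness.
POSITIVE = {"great", "win", "hope", "good"}
NEGATIVE = {"bad", "fail", "lose", "awful"}

def lexicon_sentiment(text):
    """Naive classifier: positive word count minus negative word count."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "pos" if score > 0 else "neg"  # ties default arbitrarily to "neg"

def accuracy(predict, labeled):
    return sum(predict(text) == label for text, label in labeled) / len(labeled)

labeled_tweets = [
    ("great win for our candidate", "pos"),
    ("what an awful debate", "neg"),
    ("so much hope today", "pos"),
    ("they will lose badly", "neg"),
    ("taxes and healthcare", "pos"),  # no lexicon words: forced to guess
]

random.seed(0)
lex_acc = accuracy(lexicon_sentiment, labeled_tweets)
rand_acc = accuracy(lambda text: random.choice(["pos", "neg"]), labeled_tweets)
print(f"lexicon: {lex_acc:.2f}  random baseline: {rand_acc:.2f}")
```

The point is not the toy numbers but the habit: always report how far your instrument sits above the random (or majority-class) baseline, especially on the tweets the lexicon cannot score at all.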
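Finally, correcting the age bias amounts to post-stratification: reweight each age group's observed opinion by that group's share of the real population instead of its share of the sample. A minimal sketch, with entirely made-up shares and support figures:

```python
# Hypothetical shares of each age group in a young-skewed Twitter sample
# versus in the voting population (all numbers invented for illustration).
sample_share = {"18-29": 0.60, "30-49": 0.30, "50+": 0.10}
population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}

# Observed support for a candidate within each age group of the sample.
support = {"18-29": 0.75, "30-49": 0.55, "50+": 0.40}

# Unweighted estimate: dominated by the over-represented young users.
raw = sum(sample_share[g] * support[g] for g in support)

# Post-stratified estimate: weight each group by its population share.
adjusted = sum(population_share[g] * support[g] for g in support)

print(f"raw estimate: {raw:.4f}  age-adjusted estimate: {adjusted:.4f}")
```

Here the raw estimate (0.655) overstates the adjusted one (0.5225) by more than 13 points purely because young users are over-sampled; of course, the hard part in practice is knowing users' ages at all.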