Quantifying controversy on social media

Controversies are everywhere on social media. Studying and understanding the structure and evolution of these controversies is an important area of research. Previous studies of controversy on social media, however, are either too domain-specific (e.g., politics) or require prior labeled data.

To address these shortcomings, in our recent WSDM 2016 paper, we designed a fully automatic method to detect ad-hoc controversial issues in the wild, with no prior information or domain knowledge. We represent a topic of discussion with a conversation graph, in which vertices represent people and edges represent conversation activity, such as posts, comments, mentions, or endorsements. Our goal is to examine whether there are distinguishable patterns in the way conversations are shaped during a controversial event.

Our initial analysis of the structure of various networks suggests that this is a promising direction. The figure below shows eight networks built from four hashtags. The first row shows retweet graphs, the second follow graphs. (a,e) #beefban and (b,f) #russia_march (in Russian) are controversial hashtags; (c,g) #sxsw and (d,h) #germanwings are non-controversial.

[Figure: retweet (top row) and follow (bottom row) graphs for the four hashtags]

We designed a three-stage pipeline to quantify controversy in social media. The pipeline generalizes to any social network, though in this work we limit our experiments to Twitter.

[Figure: the three-stage pipeline: graph building, graph partitioning, controversy measures]

Graph building can use various types of interaction networks, including retweets, follows, mentions, content, or a hybrid (content + network). Our experiments show that retweet networks work best for detecting controversy.
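As a sketch of the graph-building step (illustrative only; the field names such as `retweeted_user` are assumptions, not the paper's actual schema), a retweet graph can be built from a stream of tweets like this:

```python
from collections import defaultdict

def build_retweet_graph(tweets):
    """Build an undirected retweet graph: vertices are users and an
    edge connects a retweeter to the author of the original tweet."""
    graph = defaultdict(set)
    for tweet in tweets:
        user, source = tweet["user"], tweet.get("retweeted_user")
        if source and source != user:
            graph[user].add(source)
            graph[source].add(user)
    return graph

tweets = [
    {"user": "alice", "retweeted_user": "bob"},
    {"user": "carol", "retweeted_user": "bob"},
    {"user": "dave"},  # original tweet: contributes no retweet edge
]
graph = build_retweet_graph(tweets)
# graph["bob"] == {"alice", "carol"}
```

The same skeleton works for the other interaction types (mentions, follows): only the rule for extracting an edge from an interaction changes.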

For partitioning the graph, we use off-the-shelf tools such as METIS. We also experimented with alternatives, such as label propagation and spectral clustering, which produce similar results.
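To illustrate the partitioning step, here is a minimal spectral-bisection sketch (not the paper's code; it assumes NumPy, a small symmetric adjacency matrix, and a connected graph, and splits nodes on the sign of the Fiedler vector):

```python
import numpy as np

def spectral_bisection(adj):
    """Split a connected graph into two sides using the Fiedler vector:
    the eigenvector of the second-smallest eigenvalue of the graph
    Laplacian L = D - A. Each node is assigned a side by the sign of
    its Fiedler-vector entry."""
    degrees = adj.sum(axis=1)
    laplacian = np.diag(degrees) - adj
    _, eigvecs = np.linalg.eigh(laplacian)  # eigh sorts eigenvalues ascending
    fiedler = eigvecs[:, 1]
    return fiedler >= 0  # boolean side assignment per node
```

On a toy graph made of two triangles joined by a single bridge edge, this recovers the two triangles as the two sides, since the bridge is the sparsest cut.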

Our main contribution is the controversy measures. We propose three graph-based measures, plus a sentiment-based wild card, described briefly below:

  • Random walks: This measure captures how likely a random user on either side is to be exposed to authoritative content from the opposing side.
  • Edge-betweenness: This measure captures the difference between the distribution of betweenness on the cut (edges between the two sides) and that of other random edges in the graph.
  • Embedding: This measures how separated the two sides are in a low-dimensional embedding. It was inspired by the figure above, which was used for laying out the networks.
  • Sentiment variance: A wild-card measure, which we did not expect to work (but works great), is plain sentiment analysis. This measure captures how varied the sentiment is among tweets belonging to a particular topic. Even though it shows little difference between various types of controversial topics, it works great for differentiating controversial from non-controversial topics.
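The random-walk measure above can be sketched with a small Monte Carlo simulation. This is a simplified illustration, not the paper's implementation: it treats the k highest-degree nodes of each side as the "authoritative" users, runs walks from random non-hub nodes until they hit a hub, and combines the absorption probabilities as P_xx * P_yy - P_xy * P_yx, where P_ab is the probability that a walk started on side a ends at a hub of side b.

```python
import random

def rwc_score(graph, side_x, side_y, k=2, walks=2000, seed=0):
    """Monte Carlo sketch of a random-walk controversy score.
    graph: dict node -> set of neighbours; side_x, side_y: node lists
    for the two partitions (assumed connected, larger than k)."""
    rng = random.Random(seed)
    hub = lambda side: set(sorted(side, key=lambda n: len(graph[n]))[-k:])
    hubs = {"x": hub(side_x), "y": hub(side_y)}
    all_hubs = hubs["x"] | hubs["y"]
    prob = {a: {"x": 0.0, "y": 0.0} for a in ("x", "y")}
    for a, side in (("x", side_x), ("y", side_y)):
        starts = [n for n in side if n not in all_hubs]
        for _ in range(walks):
            node = rng.choice(starts)
            # walk to a uniformly random neighbour until a hub is hit
            while node not in all_hubs:
                node = rng.choice(sorted(graph[node]))
            prob[a]["x" if node in hubs["x"] else "y"] += 1.0 / walks
    # Close to 1 when walks stay on their own side (segregated sides),
    # close to 0 when the two sides mix freely.
    return prob["x"]["x"] * prob["y"]["y"] - prob["x"]["y"] * prob["y"]["x"]
```

On two triangles joined by a single bridge edge this yields a score near 1, while an evenly split clique yields a score near 0.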

We also tried many other methods for this paper, most of which do not work. Our conclusion is that detecting controversy in the wild is not easy, but combining retweet graphs with the random-walk score works well in most cases. For finer details, please refer to our paper (arXiv). We also have a demo of our system, applied to hundreds of trending topics on Twitter over three months; more details can be found here (CSCW 2016 demo, arXiv).
