When social bots attack: Modeling susceptibility of users in online social networks

15 04 2012

Next week, my PhD student Claudia Wagner will present results from one of our recent studies on the susceptibility of users in online social networks at the #MSM2012 workshop at the WWW’2012 conference in Lyon, France.

In our paper (download socialbots.pdf), we analyze data from the Socialbot Challenge 2011, organized by T. Hwang and the WebEcologyProject, in which a set of Twitter users was targeted by three teams who implemented socialbots and released them “into the wild” on Twitter. The objective for each team was to elicit certain responses from target users, such as @replies or follows. Our work on this dataset aimed to understand and model the factors that make users susceptible to such attacks.

Our results indicate that even very active Twitter users, who might be expected to have developed certain skills and competencies for using social media, are prone to such attacks. The work presented in this paper increases our understanding of the vulnerabilities of online social networks, and represents a stepping stone towards more sophisticated measures for protecting users from socialbot attacks in online social network environments.

The figure below depicts the network of users and socialbots in our dataset (the set of users who were targeted by socialbots during the Socialbot Challenge), shows how they link to each other, and highlights those users who were susceptible to the attacks (green and orange nodes).

Susceptibility of users in a target population of 500 Twitter accounts

Susceptibility of users on Twitter who were targeted by socialbots during the Socialbot Challenge 2011 (organized by T. Hwang and the WebEcologyProject). Each node represents a Twitter user: red nodes represent socialbots (3 in total), blue nodes represent users who did not interact with socialbots, green nodes represent users who interacted with at least one socialbot, orange nodes represent users who interacted with all socialbots. Dashed edges represent social links between users which existed prior to the challenge, solid edges represent social links that were created during the challenge. Large nodes have a high follower/followee ratio (more popular users), small nodes have a low follower/followee ratio (less popular users). Network visualization generated by my student Simon Kendler.

Here’s the abstract of our paper:

Abstract: Social bots are automatic or semi-automatic computer programs that mimic humans and/or human behavior in online social networks. Social bots can attack users (targets) in online social networks to pursue a variety of latent goals, such as to spread information or to influence targets. Without a deep understanding of the nature of such attacks or the susceptibility of users, the potential of social media as an instrument for facilitating discourse or democratic processes is in jeopardy. In this paper, we study data from the Social Bot Challenge 2011 – an experiment conducted by the WebEcologyProject during 2011 – in which three teams implemented a number of social bots that aimed to influence user behavior on Twitter. Using this data, we aim to develop models to (i) identify susceptible users among a set of targets and (ii) predict users’ level of susceptibility. We explore the predictiveness of three different groups of features (network, behavioral and linguistic features) for these tasks. Our results suggest that susceptible users tend to use Twitter for a conversational purpose and tend to be more open and social since they communicate with many different users, use more social words and show more affection than non-susceptible users.
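As a toy illustration of what a linguistic feature of this kind might look like (the word list and the example tweets below are invented for illustration; they are not the paper's actual features or data):

```python
# Illustrative linguistic feature: the fraction of a user's tweet words
# that belong to a small "social word" lexicon. The lexicon below is an
# invented stand-in for established lexical categories.
SOCIAL_WORDS = {"we", "you", "us", "friend", "friends", "thanks", "talk", "love"}

def social_word_ratio(tweets):
    """Share of words across all tweets that are 'social' words."""
    words = [w.lower().strip(".,!?") for tweet in tweets for w in tweet.split()]
    return sum(w in SOCIAL_WORDS for w in words) / len(words)

conversational_user = ["thanks friend, love to talk with you!",
                       "you and us, we should talk"]
informational_user = ["new blog post about graph algorithms",
                      "reading a paper on network theory"]

print(social_word_ratio(conversational_user))  # clearly above zero
print(social_word_ratio(informational_user))   # 0.0
```

A real model would combine such linguistic features with the network and behavioral feature groups mentioned in the abstract before training a classifier.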

Here’s the full reference, including a link to the article (socialbots.pdf):

Reference and PDF Download: C. Wagner, S. Mitter, C. Körner and M. Strohmaier. When social bots attack: Modeling susceptibility of users in online social networks. In Proceedings of the 2nd Workshop on Making Sense of Microposts (MSM’2012), held in conjunction with the 21st World Wide Web Conference (WWW’2012), Lyon, France, 2012. (download socialbots.pdf)





CfP ACM Hypertext and Social Media 2012

15 11 2011

I’d like to point you to the following Call for Papers – make sure to consider submitting your research!

23rd International Conference ACM Hypertext and Social Media (HT’2012) http://www.ht2012.org June 25-28, 2012 Milwaukee, WI, USA

 

The ACM Hypertext and Social Media conference is a premium venue for high quality peer-reviewed research on hypertext theory, systems and applications. It is concerned with all aspects of modern hypertext research including social media, semantic web, dynamic and computed hypertext and hypermedia as well as narrative systems and applications. The ACM Hypertext and Social Media 2012 conference will focus on exploring, studying and shaping relationships between four important dimensions of links in hypertextual systems and the World Wide Web: people, data, resources and stories.

Conference tracks and track co-chairs:

Track 1: Social Media (Linking people)
Claudia Müller-Birn, Freie Universität Berlin, Germany
Munmun De Choudhury, Microsoft Research, USA

Track 2: Semantic Data (Linking data)
Harith Alani, Open University, UK
Alexandre Passant, DERI, Ireland

Track 3: Adaptive Hypertext and Hypermedia (Linking resources)
Jill Freyne, CSIRO, Australia
Shlomo Berkovsky, CSIRO, Australia

Track 4: Hypertext and Narrative Connections (Linking stories)
Andrew S. Gordon, University of Southern California, USA
Frank Nack, University of Amsterdam, The Netherlands

Important Dates and Submission:

Full and Short Paper Submission: Monday Feb 6 2012

Notification: Wednesday March 21 2012

Final Version: Monday April 23 2012

Submission details are available at http://www.ht2012.org

Submissions will be accepted via https://www.easychair.org/conferences/?conf=ht2012

Organization Committee:

General Chair: Ethan Munson, University of Wisconsin – Milwaukee, USA

PC Chair: Markus Strohmaier, Graz University of Technology, Austria

Publicity Chair: Alvin Chin, Nokia Research Beijing, China





What is the size of the Library of Twitter?

22 02 2011

The Library of Babel is a theoretical library that holds the sum of all books that can be written with (i) a given set of symbols and (ii) a given page limit. According to Wikipedia, the Library of Babel is based on a short story by the author and librarian Jorge Luis Borges (1899–1986). The idea is simple: the library holds all books that can be produced by every combinatorially possible sequence of symbols up to a certain book length. In Borges’ case, the library is immensely large, since it contains all possible books up to 410 pages. The American Scientist calculates:

… each book has 410 pages, with 40 lines of 80 characters on each page. Thus a book consists of 410 [pages] × 40 [lines] × 80 [characters] = 1,312,000 symbols. There are 25 choices for each of these symbols, and so the library’s collection consists of 25^1,312,000 books.

But what is the size of a Library of Twitter, i.e. the size of the set of all theoretically possible tweets? It should be (i) much smaller and (ii) much easier to calculate due to the particular structure of tweets. Here’s a brief back-of-the-envelope calculation:

Given the 140-character limit of tweets, and assuming an English alphabet of 26 symbols expanded by basic syntactical elements such as periods (.), commas (,), spaces ( ), at signs (@), hashes (#) and a few others, we end up with all combinatorially possible sequences of 140 characters over a vocabulary of maybe 50 symbols. Based on these (conservative) assumptions, the Library of Twitter holds at least 50^140 tweets.

In other words, the size of the Library of Twitter is at least 7.17 × 10^237 [1] or:

7174648137343063403129495466444370592154941142407760751396189613515730343351606279611587524414062500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

While this number seems impressive, it pales in comparison to the size of the Library of Babel (which is 1.956 × 10^1,834,097). As with the Library of Babel, most of the Library of Twitter’s contents would be nonsensical. But on the upside, the library would also contain all tweets ever written in the past and all theoretically possible tweets to be written in the future. Thereby, 50^140 is an upper bound on the information that can be conveyed in 140 characters given a vocabulary of 50 symbols [2]. This first approximate upper bound should be informative for future studies of Twitter answering questions such as: How many of the theoretically possible tweets have already been written – or in other words – how much is there left to write before we run out of (sensical) combinatorial options?
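These figures are easy to check with arbitrary-precision integer arithmetic; the following sketch recomputes both library sizes under the assumptions stated above (50 symbols and 140 characters for Twitter, 25 symbols and 1,312,000 characters for Babel):

```python
import math

tweet_vocabulary = 50        # assumed symbol set: 26 letters plus punctuation etc.
tweet_length = 140           # Twitter's character limit

library_of_twitter = tweet_vocabulary ** tweet_length  # 50^140 exactly

babel_vocabulary = 25        # symbols in Borges' library
book_length = 410 * 40 * 80  # pages x lines x characters = 1,312,000

# 25^1,312,000 is too large to print; count its decimal digits instead.
babel_digits = int(book_length * math.log10(babel_vocabulary)) + 1

print(len(str(library_of_twitter)))  # 238 digits, i.e. ~7.17 x 10^237
print(babel_digits)                  # 1834098 digits, i.e. ~1.956 x 10^1,834,097
```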

I’ll leave it to somebody else to calculate the number of bits and hard drives necessary to store, mine and search the Library of Twitter.

[1] all numbers calculated with WolframAlpha
[2] It is obvious that larger assumed vocabularies would significantly increase the size of the library.





A game-with-a-purpose based on Twitter

11 10 2010

I am happy to announce that my research group at TU Graz has launched Bulltweetbingo!, a game-with-a-purpose based on Twitter, today. The game is already live and available at http://bingo.tugraz.at. For an introduction to the idea of Buzzword Bingo, please see the following IBM commercial (YouTube video).

 

IBM Innovation Buzzword Bingo (Youtube)

 

Rather than playing buzzword bingo while listening to a talk, the idea of Bulltweetbingo! is to play Buzzword Bingo with the people you follow on Twitter. Everyone you follow on Twitter automatically participates in the game by tweeting. A Bulltweetbingo game terminates (i.e. hits “Bingo!”) when the people you follow use a particular combination of the defined buzzwords in their tweets. We intend to use the data provided by each game in our research on analyzing the semantics of short messages on systems such as Twitter or Facebook. Each game provides information about the relevance and topics of tweets for a particular person, as well as some information on the topics of tweets that a person expects to receive in the future.

I’m copying and pasting some more information about the game that we have made available on the game website (about the project).

Bulltweetbingo!
Playing a game of bingo with people you follow on Twitter.

A team of researchers from Graz University of Technology, Austria has developed one of the first games-with-a-purpose that is exclusively based on Twitter.

The goal of this project is to annotate and to better understand the short messages posted to so-called social awareness streams such as Twitter or Facebook. Using this data, the researchers aim to improve the ability of computers to effectively organize and make sense out of the sea of short messages available today.

Dr. Markus Strohmaier, Assistant Professor at the Knowledge Management Institute at Graz University of Technology, Austria explains: “While social awareness streams such as Twitter or Facebook have experienced significant popularity over the last few years, we know little about how to best understand, search and organize the information that is contained in them.”

To tackle this problem, the researchers have developed a game of Buzzword Bingo that users can play with people they follow on Twitter.

“With each game users play on our website, we will collect data that helps us develop more effective algorithms for better understanding this new kind of data” Dr. Markus Strohmaier says, “and in addition to that, we simply hope users would enjoy playing a game of Bingo on Twitter. Each game is unique and exciting in a sense that users generally don’t know what tweets people will publish during the course of a bingo game”.

The researchers have launched the site bulltweetbingo! and ask users to sign up and to play a game of Bingo with the people they follow on Twitter. Twitter users can sign up at http://bingo.tugraz.at.

The game was implemented by one of my talented students, Simon Walk – make sure to hire him if you need a complex web project realized quickly and effectively!





Are Tag Clouds Useful for Navigating Social Media?

16 08 2010

This week, my colleague Denis Helic will present results from a recent collaboration investigating the usefulness of tag clouds at the IEEE SocialCom 2010 conference in Minneapolis, Minnesota, USA. In this paper (download pdf), we investigated if and to what extent tag clouds – a popular mechanism for interacting with social media – are useful for navigation.

An Exemplary Tag Cloud from Flickr.com

While tag clouds can potentially serve different purposes, there seems to be an implicit assumption among engineers of social tagging systems that tag clouds are specifically useful to support navigation. This is evident in the large-scale adoption of tag clouds for interlinking resources in numerous systems such as Flickr, Delicious, and BibSonomy. However, this Navigability Assumption has hardly been critically reflected upon (with some notable exceptions, for example [1]), and has largely remained untested. In this paper, we demonstrate that the prevalent approach to tag cloud-based navigation in social tagging systems is highly problematic with regard to network-theoretic measures of navigability. In a series of experiments, we show that the Navigability Assumption only holds in very specific settings, and for the most common scenarios, we can assert that it is wrong.

While recent research has studied navigation in social tagging systems from user-interface [2], [3], [4] and network-theoretic [5] perspectives, the unique focus of this paper is the intersection of these issues. This paper provides answers to questions such as: How do user interface constraints of tag clouds affect the navigability of tagging systems? And how efficient is navigation via tag clouds from a network-theoretic perspective? In particular, we 1) investigate the intrinsic navigability of tagging datasets without considering user interface effects, then 2) take pragmatic user interface constraints into account, 3) demonstrate that for many social tagging systems the so-called Navigability Assumption does not hold, and finally 4) use our findings to illuminate a path towards improving the navigability of tag clouds.
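As a toy illustration of this kind of analysis (the mini-folksonomy, the random-walk strategy and the tag-cloud cutoff below are invented for illustration; the paper works with real large-scale datasets): navigation hops between resources via shared tags, and a user-interface restriction that caps how many tags a cloud displays can cut off paths entirely.

```python
import random

# Invented mini-folksonomy: resource -> set of tags (a bipartite
# tag-resource graph, as in the paper's navigation model).
annotations = {
    "r1": {"python", "code"},
    "r2": {"python", "web"},
    "r3": {"web", "design"},
    "r4": {"design", "art"},
}

def neighbours(resource, cloud_size):
    """Resources reachable from `resource` via a tag cloud that displays
    at most `cloud_size` of its tags (a crude stand-in for UI limits)."""
    shown = set(sorted(annotations[resource])[:cloud_size])
    return {r for r, tags in annotations.items() if r != resource and tags & shown}

def random_walk(start, target, cloud_size, max_hops=50, seed=0):
    """Hop between resources via shared tags; return the number of hops
    needed to reach `target`, or None if the walker gets stuck or gives up."""
    rng, current = random.Random(seed), start
    for hop in range(1, max_hops + 1):
        options = neighbours(current, cloud_size)
        if not options:
            return None  # navigation dead end
        current = rng.choice(sorted(options))
        if current == target:
            return hop
    return None

print(random_walk("r1", "r2", cloud_size=2))  # 1: r2 shares "python" with r1
print(random_walk("r1", "r4", cloud_size=1))  # None: the truncated cloud cuts the path
```

With full tag clouds the walker can always move on, but displaying only one tag per cloud disconnects r1 entirely in this toy ranking, mirroring the finding that interface restrictions can destroy navigability that exists in theory.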

Here’s the abstract:

Abstract: It is a widely held belief among designers of social tagging systems that tag clouds represent a useful tool for navigation. This is evident in, for example, the increasing number of tagging systems offering tag clouds for navigational purposes, which hints towards an implicit assumption that tag clouds support efficient navigation. In this paper, we examine and test this assumption from a network-theoretic perspective, and show that in many cases it does not hold. We first model navigation in tagging systems as a bipartite graph of tags and resources and then simulate the navigation process in such a graph. We use network-theoretic properties to analyse the navigability of three tagging datasets with regard to different user interface restrictions imposed by tag clouds. Our results confirm that tag-resource networks have efficient navigation properties in theory, but they also show that popular user interface decisions (such as “pagination” combined with reverse-chronological listing of resources) significantly impair the potential of tag clouds as a useful tool for navigation. Based on our findings, we identify a number of avenues for further research and the design of novel tag cloud construction algorithms. Our work is relevant for researchers interested in navigability of emergent hypertext structures, and for engineers seeking to improve the navigability of social tagging systems.

The results presented in this paper make a theoretical and an empirical argument against existing approaches to tag cloud construction. Our work thereby both confirms and refutes the assumption that current tag cloud incarnations are a useful tool for navigating social tagging systems. While we confirm that tag-resource networks have efficient navigational properties in theory, we show that popular user interface decisions (such as “pagination” combined with reverse-chronological listing of resources) significantly impair navigability. Our experimental results demonstrate that popular approaches to using tag clouds for navigational purposes suffer from significant problems. We conclude that in order to make full use of the potential of tag clouds for navigating social tagging systems, new and more sophisticated ways of thinking about designing tag cloud algorithms are needed.

Here’s the full reference for the paper, and a link to the pdf as well as to preliminary slides:

Reference and PDF Download: D. Helic, C. Trattner, M. Strohmaier and K. Andrews, On the Navigability of Social Tagging Systems, The 2nd IEEE International Conference on Social Computing (SocialCom 2010), Minneapolis, Minnesota, USA, 2010. (download pdf) (related slides)

Further references:

[1] M. A. Hearst and D. Rosner, “Tag clouds: Data analysis tool or social signaller?” in HICSS ’08: Proceedings of the 41st Annual Hawaii International Conference on System Sciences. Washington, DC, USA: IEEE Computer Society, 2008.
[2] C. S. Mesnage and M. J. Carman, “Tag navigation,” in SoSEA ’09: Proceedings of the 2nd international workshop on Social software engineering and applications. New York, NY, USA: ACM, 2009, pp. 29–32.
[3] A. W. Rivadeneira, D. M. Gruen, M. J. Muller, and D. R. Millen, “Getting our head in the clouds: toward evaluation studies of tagclouds,” in CHI ’07: Proceedings of the SIGCHI conference on Human factors in computing systems. New York, NY, USA: ACM, 2007, pp. 995–998.
[4] J. Sinclair and M. Cardew-Hall, “The folksonomy tag cloud: when is it useful?” Journal of Information Science, vol. 34, p. 15, 2008. [Online]. Available: http://jis.sagepub.com/cgi/content/abstract/34/1/15
[5] N. Neubauer and K. Obermayer, “Hyperincident connected components of tagging networks,” in HT ’09: Proceedings of the 20th ACM conference on Hypertext and hypermedia. New York, NY, USA: ACM, 2009, pp. 229–238.





WWW’2010 – Stop Thinking, Start Tagging: Tag Semantics Emerge From Collaborative Verbosity

12 02 2010

I want to share the abstract of our upcoming paper at WWW’2010 (here is a link to the full paper). In case you are interested in our research and going to WWW in Raleigh this year as well, I’d be happy if you’d get in touch.

C. Körner, D. Benz, A. Hotho, M. Strohmaier, G. Stumme, Stop Thinking, Start Tagging: Tag Semantics Emerge From Collaborative Verbosity, 19th International World Wide Web Conference (WWW2010), Raleigh, NC, USA, April 26-30, ACM, 2010.

Abstract: Recent research provides evidence for the presence of emergent semantics in collaborative tagging systems. While several methods have been proposed, little is known about the factors that influence the evolution of semantic structures in these systems. A natural hypothesis is that the quality of the emergent semantics depends on the pragmatics of tagging: Users with certain usage patterns might contribute more to the resulting semantics than others. In this work, we propose several measures which enable a pragmatic differentiation of taggers by their degree of contribution to emerging semantic structures. We distinguish between categorizers, who typically use a small set of tags as a replacement for hierarchical classification schemes, and describers, who are annotating resources with a wealth of freely associated, descriptive keywords. To study our hypothesis, we apply semantic similarity measures to 64 different partitions of a real-world and large-scale folksonomy containing different ratios of categorizers and describers. Our results not only show that ‘verbose’ taggers are most useful for the emergence of tag semantics, but also that a subset containing only 40% of the most ‘verbose’ taggers can produce results that match and even outperform the semantic precision obtained from the whole dataset. Moreover, the results suggest that there exists a causal link between the pragmatics of tagging and resulting emergent semantics. This work is relevant for designers and analysts of tagging systems interested (i) in fostering the semantic development of their platforms, (ii) in identifying users introducing “semantic noise”, and (iii) in learning ontologies.
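One simple way to make the categorizer/describer distinction concrete (purely illustrative, not one of the paper's actual measures) is the ratio of distinct tags to total tag assignments in a user's tagging history:

```python
def verbosity(tag_assignments):
    """tag_assignments: one list of tags per bookmarked resource.
    Returns distinct tags / total tag assignments, in (0, 1]:
    low values suggest a categorizer, high values a describer."""
    all_tags = [tag for post in tag_assignments for tag in post]
    return len(set(all_tags)) / len(all_tags)

# Invented example users:
categorizer = [["toread"], ["toread"], ["work"], ["toread"], ["work"]]
describer = [["python", "tutorial"], ["jazz", "album", "review"],
             ["recipe", "vegan"], ["travel", "iceland"]]

print(verbosity(categorizer))  # 0.4 -- small, reused vocabulary
print(verbosity(describer))    # 1.0 -- every tag assignment is a new tag
```

Partitioning users by a score of this kind is the spirit of the 64-partition experiment described in the abstract.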

More details can be found in the full paper.

This work is funded in part by the Know-Center and the FWF Research Grant TransAgere. It is the result of a collaboration with the KDE group at University of Kassel and the University of Würzburg. You might also want to have a look at a related blog post on the bibsonomy blog.

Some background about the distinction between categorizers and describers can be found in a related paper:

M. Strohmaier, C. Koerner, R. Kern, Why do Users Tag? Detecting Users’ Motivation for Tagging in Social Tagging Systems, 4th International AAAI Conference on Weblogs and Social Media (ICWSM2010), Washington, DC, USA, May 23-26, 2010. (Download pdf)





Measuring Earthquakes on Twitter: The Twicalli Scale

15 01 2010

I got interested in the signal that Twitter received from the last two earthquakes in California and Haiti. It has recently been suggested that Twitter can play a role in assessing the magnitude of an earthquake by studying the stream of tweets that contain a reference to the event, such as the stream of messages related to #earthquake, including messages like this. The term “Twichter Scale” has been used in this context to discuss the relation between Twitter and external events such as earthquakes.

Different people have expressed different ideas about a Twichter Scale, for example:

Twichter Scale (n): the fraction of Twitter traffic caused by an earthquake. Unused on the east coast. (@ian_soboroff)

While this definition does not necessarily imply that the Twichter scale indicates the magnitude of earthquakes, it is interesting to ask whether Twitter data can be used for that purpose.

Impact of two earthquakes on different Twitter hashtag streams: #earthquake, #earthquakes and #quake between Jan 9 and Jan 15

When we look at the data, we can clearly identify both earthquakes as spikes. Both earthquakes were comparable in terms of magnitude (6.5 vs. 7.0 on the Richter scale). And in fact, both events produced a comparable amplitude in the #earthquake hashtag stream. On the surface, this might be a confirmation of the idea of a Twichter Scale based on the Richter scale, which measures the magnitude of an earthquake. The Richter scale produces the same value for a given earthquake, no matter where you are.

However, there is another, less scientific measure to characterize earthquakes – the so-called Mercalli scale – which is a measure of an earthquake’s effect on people and structures.

This yields the interesting question of whether Twitter streams can better serve as an indicator of the strength (Richter) or the impact (Mercalli) of an earthquake.

As we can see in the figure, the amplitude produced on Twitter is approximately equal for both events (almost 400 messages per hour). My suspicion, however, is that this is not because Twitter accurately captures the strength of earthquakes, but because the Jan 9 earthquake was closer to California, where more people (more Twitter users) are willing to share their experiences. So it seems that this produced an amplitude of similar extent, although the impact of the Jan 9 earthquake in California on structures and people was much weaker than the impact of the Jan 12 earthquake in Haiti.

So how can we identify the difference between earthquakes in terms of their impact on people and structures?

When we look at the diagram above, we can see a clear difference after the initial spike: While the Californian earthquake did not cause many follow-up tweets, the aftermath of the Haiti earthquake is clearly visible.

What does that say about Twitter as a signal for earthquakes?

  1. The amplitude of the signal on Twitter is very likely biased by the density of Twitter users in a given region, and can therefore give reliable information about neither the magnitude nor the impact of an earthquake. This suggests that Twitter cannot act as a reliable sensor to detect the magnitude of an earthquake in a “Richter Scale” sense.
  2. However, the “aftermath” of a spike on Twitter (the integral) seems to be a good indication of an earthquake’s impact on people and structures – in a “Mercalli Scale” sense. Long after the initial spike, the Haiti earthquake is still a topic of conversation on Twitter (those conversations are likely related to fundraising efforts and other aid activities). Independent of the density of Twitter users in Haiti (which is probably low), the aftermath can clearly be identified.
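A back-of-the-envelope sketch of the two indicators (the hourly counts below are invented to mimic the figure, not the actual dataset): the amplitude is the peak of the hourly message count, while the aftermath is the discrete integral of the traffic after the peak.

```python
# Invented hourly #earthquake message counts mimicking the figure:
# similar peaks, very different tails.
california = [2, 3, 390, 80, 10, 4, 3, 2, 2, 2]
haiti      = [2, 3, 395, 300, 250, 200, 180, 160, 150, 140]

def amplitude(counts):
    """Peak hourly traffic -- the (unreliable) 'Richter-style' signal."""
    return max(counts)

def aftermath(counts):
    """Total traffic after the peak (a discrete integral) -- the
    'Twicalli-style' indicator of sustained impact."""
    peak = counts.index(max(counts))
    return sum(counts[peak + 1:])

print(amplitude(california), amplitude(haiti))  # 390 395: nearly identical peaks
print(aftermath(california), aftermath(haiti))  # 103 1380: an order of magnitude apart
```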

The Twicalli Scale:

This suggests that Twitter is not very useful as a sensor for the magnitude of earthquakes (in a Richter Scale sense). Twitter is more indicative of earthquakes in a “Twicalli scale” sense:

Using the aftermath (not the amplitude) of twitter stream data, the impact (not the magnitude) of earthquakes becomes visible on Twitter.

Update: Here are links to further resources and the datasets this analysis is based on:

Update II (Aug 27 2010): The Twicalli scale was mentioned in a recent paper on the importance of trust in social awareness streams such as Twitter (page 8, left column)

Marcelo Mendoza, Barbara Poblete and Carlos Castillo, Twitter Under Crisis: Can we trust what we RT?, Workshop on Social Media Analytics, In conjunction with the International Conference on Knowledge Discovery & Data Mining (KDD 2010), PDF download (see page 8, left column)

Update III (Oct 11 2011): Now there’s also a WWW2011 paper mentioning the Twicalli Scale (page 2, top of right column)

Carlos Castillo, Marcelo Mendoza, and Barbara Poblete. 2011. Information credibility on twitter. In Proceedings of the 20th international conference on World wide web (WWW ’11). ACM, New York, NY, USA, 675-684.