Mode 3 knowledge production: or the differences between a blog post and a scientific article

17 02 2014

With the proliferation of data, the increasing availability of rather simple tools to analyze it, a growing number of people who can use these tools, and the availability of low-cost publication platforms (e.g. blogs), the potential to democratize certain aspects of scientific processes – such as empirical data analysis – seems tremendous. This might give rise to the idea that everyone who can use these tools (such as Python) and publish the results of their analysis (e.g. via blog posts) can now participate in knowledge production.

An opportunity for data analysis by the masses: If true, the potential of such a development would be enormous: by increasing the number of people who participate in scientific processes, we could increase the coverage of interesting phenomena to explore, research activity would not be constrained to areas funded by large institutional bodies, and in general more research could get done.

At the same time, this would represent a fundamental shift in the way science has been operating up until now, as people formerly not part of traditional scientific processes (and not trained in scientific knowledge production) move into new territory and participate in new processes. In order to understand this shift, we need to understand the modi operandi of scientific knowledge production in the past.

Different modes of knowledge production: There are many ways to look at scientific knowledge production. A very influential distinction has been made by Gibbons et al. [GLN97], who argue that we have to differentiate between “Mode 1” and “Mode 2” knowledge production.


Mode 1 refers to traditional knowledge production processes, focusing on hierarchical mechanisms and processes executed by a set of homogeneous actors from a common disciplinary background. An example would be the ivory-tower view of a university, where a scientist or group of scientists with homogeneous backgrounds works on disciplinary problems. This mode is increasingly being replaced by Mode 2 knowledge production, which is socially distributed, organizationally diverse, application-oriented, and trans-disciplinary [GLN97, NSG03]. An example would be a network of university partners with different disciplinary backgrounds collaborating on an application-oriented problem with other stakeholders from e.g. industry or other public institutions.

Mode 3 knowledge production: The proliferation of data, tools and people able to make use of them might give rise to what I might call Mode 3 knowledge production, which could be self-organized, context-focused, and driven by individuals not primarily trained in scientific processes. An example would be an interested user (or group of users) of a social network platform looking at data that might explain some online social network phenomenon that they feel is worth exploring. Another might be a group of patients performing self-experiments or experiments with n=1 in order to explore the cause of personal symptoms or health concerns. These groups might embed the discussion of their findings into community conversations and social sensemaking processes.

While this idea looks appealing on the surface, there are a number of issues. For example: Mode 1 and Mode 2 knowledge production differ in terms of organization, but both follow the scientific method in terms of basic mechanisms and values. It is as yet unclear whether an emerging Mode 3 would adhere to the scientific method as well. Being able to use analysis tools to look at data does not necessarily mean that whatever kind of analysis follows from that contributes to scientific processes in meaningful ways.

The scientific method: So what is the scientific method, i.e. what are some of the standards, ethics and practices that Mode 1 and Mode 2 knowledge production follow, which a potential Mode 3 knowledge production would have to adopt as well? Answers can be found in the philosophy of science, which has long been concerned with the nature of science and scientific processes. This is an entire field that cannot be adequately described here – the Hempel–Oppenheim model would be just one of many examples.

However, typical qualities of scientific processes would include, but are not limited to: the ability to reproduce results, including a proper description of methods and means of data collection; the sharing of data; the quality of hypotheses (w.r.t. falsifiability, explanatory power, understandability, etc.); the relation to state-of-the-art research, including proper citation of existing literature; critical reflection about the validity of findings; as well as the quality of interpretations and whether they follow from the data.

Do blog posts follow the scientific method? While there is nothing that prevents research published via blog posts from following the scientific method, more often than not blog posts – even data-oriented ones – fail to meet these most basic requirements. For example, from a data visualization published via a blog post it does not necessarily become clear where the data is from, how it was collected, which methods have been applied, whether the results are reproducible, whether the data used will be shared, how the analysis relates to the state of the art of scientific knowledge, or whether the conclusions presented follow from the data.
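For illustration, here is a minimal sketch of what such documentation might look like in a Python-based analysis; the data is synthesized and all names are placeholders invented for illustration, not a prescription:

```python
"""Analysis script published alongside a blog post, annotated so that
readers can reproduce it. All names below are placeholders.

Data provenance: a real post would link the raw data, its source,
and the date and method of collection here."""
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)   # fixed seed -> reproducible synthetic data
df = pd.DataFrame({"followers": rng.integers(1, 10_000, 500)})
df["posts"] = 0.01 * df["followers"] + rng.normal(0, 5, 500)

# State the model explicitly instead of reporting a bare chart.
X = sm.add_constant(df["followers"])
model = sm.OLS(df["posts"], X).fit()
print(model.summary())   # effect sizes and confidence intervals, not just a picture
```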

This is not surprising. In scientific articles, peer review is the most common (but certainly not infallible) instrument for checking whether submitted research follows the scientific method. In blog posts and similar user-generated media, there are currently no established social or other mechanisms enforcing the scientific method, which often makes their results – while potentially interesting – less useful from a scientific perspective. In addition, while it is typically impossible for a researcher to ignore a reviewer’s comment (as an editor will decide, based on reviewers’ comments, whether to publish an article or not), it is usually easy for a blogger to delete an unwanted comment.

Conclusion: Whether a third mode of knowledge production will ultimately emerge is unclear. While the democratization of data analysis will without a doubt expand, its scientific value will depend on the masses of amateurs and bloggers adopting principles based on the scientific method, on the masses of scientists participating in and enforcing the scientific method in blog conversations, or both. It will probably not depend on the technicalities of the publishing medium – blog posts or otherwise.

References:

[GLN97] M. Gibbons, C. Limoges, and H. Nowotny. The new production of knowledge: The dynamics of science and research in contemporary societies. Sage, 1997.

[NSG03] H. Nowotny, P. Scott, and M. Gibbons. Introduction: ‘Mode 2’ revisited: The new production of knowledge. Minerva, 41(3):179–194, 2003.





When is a student ready to finish his/her PhD?

29 05 2013

I’ve made it a hobby to ask professors I meet at conferences in my field this question. The answers I have collected in these conversations are manifestations of an astonishing variety of underlying research philosophies and ideologies. Here’s a list of answers I have received so far; the labels in brackets are mine, and they might be misleading, deceptive, or misrepresent the original intent of the answer given.

  • When he is offered a position in industry or academia that assumes a PhD (the American view)
  • When he has convinced his corresponding research (sub-)community that the work he has been doing is worthy of a PhD (the sociologist’s / psychologist’s view)
  • When he has expanded the state of knowledge by a significant amount / When he added new knowledge to the existing body of knowledge about the world (the epistemological view)
  • When he has built something truly new, interesting, elegant and/or complex (the engineer’s view)
  • When he has reached his personal intellectual maximum, i.e. the maximum intellectual capacity that he is capable of acquiring (the subjective view)
  • When he is able to explain the results of his work in one sentence (the communication view)
  • When he has published n papers (the bureaucrat’s view)

I am amazed that there is little repetition in the answers that I get. What is your answer? Add it to the comments.





When social bots attack: Modeling susceptibility of users in online social networks

15 04 2012

Next week, my PhD student Claudia Wagner will present results from one of our recent studies on the susceptibility of users in online social networks at the #MSM2012 workshop at WWW’2012 conference in Lyon, France.

In our paper (download socialbots.pdf), we analyze data from the Socialbot Challenge 2011 organized by T. Hwang and the WebEcologyProject, where a set of Twitter users were targeted by three teams who implemented socialbots that were released “into the wild” (i.e. implemented and implanted on Twitter). The objective for each team was to elicit certain responses from target users, such as @replies or follows. Our work on this dataset aimed to understand and model the factors that make users susceptible to such attacks.

Our results indicate that even very active Twitter users, who might be expected to develop certain skills and competencies for using social media, are prone to attacks. The work presented in this paper increases our understanding about vulnerabilities of online social networks, and represents a stepping stone towards more sophisticated measures for protecting users from socialbot attacks in online social network environments.

The figure below depicts the network of users and socialbots in our dataset (a set of users who were targeted by socialbots during the Socialbot Challenge), how they link to each other, and highlights those users who were susceptible to the attacks (green and orange nodes).

Susceptibility of users in a target population of 500 Twitter accounts

Susceptibility of users on Twitter who were targeted by socialbots during the Socialbot challenge 2011 (organized by T. Hwang and the WebEcologyProject). Each node represents a Twitter user: red nodes represent socialbots (total of 3), blue nodes represent users who did not interact with social bots, green nodes represent users who have interacted with at least one social bot, orange nodes represent users who have interacted with all social bots. Dashed edges represent social links between users which existed prior to the challenge, solid edges represent social links that were created during the challenge. Large nodes have a high follower/followee ratio (more popular users), small nodes have a low follower/followee ratio (less popular users). Network visualization generated by my student Simon Kendler.

Here’s the abstract of our paper:

Abstract: Social bots are automatic or semi-automatic computer programs that mimic humans and/or human behavior in online social networks. Social bots can attack users (targets) in online social networks to pursue a variety of latent goals, such as to spread information or to influence targets. Without a deep understanding of the nature of such attacks or the susceptibility of users, the potential of social media as an instrument for facilitating discourse or democratic processes is in jeopardy. In this paper, we study data from the Social Bot Challenge 2011 – an experiment conducted by the WebEcologyProject during 2011 – in which three teams implemented a number of social bots that aimed to influence user behavior on Twitter. Using this data, we aim to develop models to (i) identify susceptible users among a set of targets and (ii) predict users’ level of susceptibility. We explore the predictiveness of three different groups of features (network, behavioral and linguistic features) for these tasks. Our results suggest that susceptible users tend to use Twitter for a conversational purpose and tend to be more open and social since they communicate with many different users, use more social words and show more affection than non-susceptible users.
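For illustration, here is a minimal sketch of the kind of prediction task described in the abstract – a toy classifier over stand-in features, not the paper’s actual feature sets, models or data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy feature matrix: one row per targeted user. The three columns stand in
# for the feature groups named in the abstract (network, behavioral,
# linguistic); the concrete features and labels are invented for this sketch.
rng = np.random.default_rng(0)
X = rng.random((200, 3))   # e.g. [follower ratio, tweets/day, social-word rate]
y = (X[:, 1] + X[:, 2] > 1.0).astype(int)   # 1 = interacted with a bot

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)   # task (i): identify susceptible users
print("mean accuracy:", scores.mean())
```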

Here’s the full reference, including a link to the article (socialbots.pdf):

Reference and PDF Download: C. Wagner, S. Mitter, C. Körner and M. Strohmaier. When social bots attack: Modeling susceptibility of users in online social networks. In Proceedings of the 2nd Workshop on Making Sense of Microposts (MSM’2012), held in conjunction with the 21st World Wide Web Conference (WWW’2012), Lyon, France, 2012. (download socialbots.pdf)





CfP ACM Hypertext and Social Media 2012

15 11 2011

I’d like to point you to the following Call for Papers – make sure to consider submitting your research!

23rd International Conference ACM Hypertext and Social Media (HT’2012)
http://www.ht2012.org
June 25-28, 2012, Milwaukee, WI, USA

 

The ACM Hypertext and Social Media conference is a premier venue for high-quality peer-reviewed research on hypertext theory, systems and applications. It is concerned with all aspects of modern hypertext research, including social media, semantic web, dynamic and computed hypertext and hypermedia, as well as narrative systems and applications. The ACM Hypertext and Social Media 2012 conference will focus on exploring, studying and shaping relationships between four important dimensions of links in hypertextual systems and the World Wide Web: people, data, resources and stories.

Conference tracks and track co-chairs:

Track 1: Social Media (Linking people)
Claudia Müller-Birn, Freie Universität Berlin, Germany
Munmun De Choudhury, Microsoft Research, USA

Track 2: Semantic Data (Linking data)
Harith Alani, Open University, UK
Alexandre Passant, DERI, Ireland

Track 3: Adaptive Hypertext and Hypermedia (Linking resources)
Jill Freyne, CSIRO, Australia
Shlomo Berkovsky, CSIRO, Australia

Track 4: Hypertext and Narrative Connections (Linking stories)
Andrew S. Gordon, University of Southern California, USA
Frank Nack, University of Amsterdam, The Netherlands

Important Dates and Submission:

Full and Short Paper Submission: Monday Feb 6 2012

Notification: Wednesday March 21 2012

Final Version: Monday April 23 2012

Submission details are available at http://www.ht2012.org

Submissions will be accepted via https://www.easychair.org/conferences/?conf=ht2012

Organization Committee:

General Chair: Ethan Munson, University of Wisconsin – Milwaukee, USA

PC Chair: Markus Strohmaier, Graz University of Technology, Austria

Publicity Chair: Alvin Chin, Nokia Research Beijing, China





What is the size of the Library of Twitter?

22 02 2011

The Library of Babel is a theoretical library that holds the sum of all books that can be written with (i) a given set of symbols and (ii) a given page limit. According to Wikipedia, the Library of Babel is based on a short story by the author and librarian Jorge Luis Borges (1899–1986). Its idea is simple: the library holds all books that can be produced by every combinatorially possible sequence of symbols up to a certain book length. In Jorge Luis Borges’ case, the Library is immensely large since it contains all possible books up to 410 pages. The American Scientist calculates:

… each book has 410 pages, with 40 lines of 80 characters on each page. Thus a book consists of 410 [pages] × 40 [lines] × 80 [characters] = 1,312,000 symbols. There are 25 choices for each of these symbols, and so the library’s collection consists of 25^1,312,000 books.

But what is the size of a Library of Twitter, i.e. the size of the set of all theoretically possible tweets? It should be (i) much smaller and (ii) much easier to calculate due to the particular structure of tweets. Here’s a brief back-of-the-envelope calculation:

Given the 140-character limit of tweets, and assuming an English vocabulary of 26 symbols expanded by basic syntactical elements such as periods (.), commas (,), spaces ( ), at signs (@), hashes (#) and a few others, we end up with all combinatorially possible 140-character sequences over a vocabulary of maybe 50 symbols. Based on these (conservative) assumptions, the Library of Twitter holds at least 50^140 tweets.

In other words, the size of the Library of Twitter is at least 7.17 × 10^237 [1] or:

7174648137343063403129495466444370592154941142407760751396189613515730343351606279611587524414062500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
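This back-of-the-envelope arithmetic is easy to check with Python’s arbitrary-precision integers:

```python
from math import log10

# The Library of Twitter: 140 characters over a 50-symbol vocabulary.
library_of_twitter = 50 ** 140
print(len(str(library_of_twitter)))   # 238 digits, i.e. ~7.17e237

# The Library of Babel for comparison: 25 symbols, 410*40*80 characters.
babel_digits = 1_312_000 * log10(25)
print(round(babel_digits))            # base-10 exponent, ~1,834,097 -> 1.956e1834097
```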

While this number seems impressive, it pales in comparison to the size of the Library of Babel (which is 1.956 × 10^1,834,097). As with the Library of Babel, most of the Library of Twitter’s contents would be nonsensical. But on the upside, the library would also contain all tweets ever written in the past and all theoretically possible tweets to be written in the future. Thereby, 50^140 is an upper bound on the information that can be conveyed in 140 characters given a vocabulary of 50 symbols [2]. This first approximate upper bound should be informative for future studies of Twitter to answer questions such as: How many of the theoretically possible tweets have already been written – or in other words – how much is there left to write before we run out of (sensical) combinatorial options?

I’ll leave it to somebody else to calculate the number of bits and hard drives necessary to store, mine and search the Library of Twitter.

[1] all numbers calculated with WolframAlpha
[2] It is obvious that larger assumed vocabularies would significantly increase the size of the library.





Programming Poems with Mechanical Turk

29 12 2010

Mechanical Turk has received some bad press recently (this is one example). It has been pointed out that Mechanical Turk can be used to do evil, which got me interested in seeing if and how it can do any good (or at least something creative). This has led to the post here, and resulted in the following poem – collaboratively produced by independent workers on Mechanical Turk.

In the daily life of a Mechanical Turk

In the daily life of a Mechanical Turk,
Never have I quite finished my work,

For I return and refresh and come back for more
In quest of a yet higher score

Now and then my eyes may tire
If I said they didn’t, I’d be a liar

Though I am spent, It’s hard to stop
Even when I’m ready to drop

My available HITs are waiting for me
Occasionally I’d rather go and watch TV

Nevertheless, I need the cash
Keen to throw a birthday bash!

Ever so slowly my earnings increase
Yet my passion for Mechanical Turk would never cease …

The structure of the poem is fully algorithmically determined. It has been written collaboratively by a crowd of Mechanical Turkers interacting with each other only through HITs. Before designing the poem algorithm, I’ve done some research on the structure and different types of poems, which led me to Acrostics.

“An acrostic (Greek: ákros “top”; stíchos “verse”) is a poem or other form of writing in which the first letter, syllable or word of each line, paragraph or other recurring feature in the text spells out a word or a message.” (wikipedia)

In my poem algorithm, I’ve constrained the first letter of each sentence in the poem, thereby forming an acrostic. As an additional constraint, I required the poem to consist of pairs of sentences that rhyme (rhyming couplets, loosely similar to a limerick).

While I determined (i.e. programmed) the structure of the poem, the content was completely produced by Mechanical Turkers. The only input provided was the title, which also acts as the first sentence of the poem. Each rhyming pair of sentences was written by two different Turkers, i.e. the output of one Turker was used as input for another. The total price of the poem was 1.804 USD. The poem was built incrementally; each subsequent Turker had access to the output of all previous Turkers. All tasks were requested at least 3 times, and the selection among alternatives was done by me, although it could easily have been done by Turkers themselves. In total, the contributions of 7 different Turkers were used in the poem above (while many more worked on the HITs). A sketch of the algorithm is shown below.
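Here is a sketch of that algorithm; collect_line is a hypothetical stand-in for posting a HIT (with the poem so far as context) and selecting among the candidate lines, not a real Mechanical Turk API call:

```python
def collect_line(poem_so_far, first_letter, rhyme_with=None):
    """Hypothetical stand-in: post a HIT showing the poem so far, ask for one
    line starting with first_letter (rhyming with rhyme_with, if given),
    request it from >= 3 workers, and pick one candidate."""
    raise NotImplementedError   # a real version would create and poll HITs

def acrostic_poem(acrostic, title):
    poem = [title]   # the title doubles as the first line of the poem
    letters = [c for c in acrostic.upper() if c.isalpha()][1:]  # title covers letter 1
    for i, letter in enumerate(letters):
        # Lines pair up into rhyming couplets: line 1 rhymes with the title,
        # line 3 with line 2, and so on.
        rhyme = poem[-1] if i % 2 == 0 else None
        poem.append(collect_line(poem, letter, rhyme_with=rhyme))
    return "\n".join(poem)

# acrostic_poem("Infinite Monkey", "In the daily life of a Mechanical Turk")
```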

With that, I’ve initialized the poem algorithm with the acrostic “Infinite Monkey” and the title “In the daily life of a Mechanical Turk” and ran it on Mechanical Turk. The result can be seen above.

The Infinite Monkey acrostic refers to the Infinite Monkey Theorem:

“The Infinite Monkey Theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare.” (Wikipedia)

That’s what we are trying to test here, though in a less statistical and more informed manner. Instead of producing all possible poems, we are interested in producing constrained yet plausible poems efficiently (i.e. in very few iterations).

Which leads to a variation of the Infinite Monkey Theorem that I’d like to propose here:

The Finite Turker Theorem states that a finite (yet potentially large) number of independent writers (here: Mechanical Turkers) will almost surely produce a poem that is creative, enjoyable and mostly indistinguishable from a single author poem.

With the Finite Turker Theorem, and marketplaces such as Mechanical Turk, it might be possible to outsource creative work – such as poem writing – to a large set of workers without much penalty in terms of beauty or enjoyability. Algorithms such as the one above can constrain and influence the resulting poems, giving greater control over the outcome of creative processes (which sounds like an oxymoron).

Because HITs were requested multiple times, there were several rejects that did not make it into the final poem, but which show some of the difficulties as well as the creative potential of programmed poems, including:


For I return and refresh and come back for more
Info, my pimp: [I’m] a Dolores Labs penny whore

Conclusion: It has been suggested that the primary use of Mechanical Turk is the execution of simple, easily replaceable and often spam-related work. This little experiment suggests that Mechanical Turk can serve richer purposes, by tapping into the creative energy of an underestimated, underutilized but also (currently) underpaid work force.





A game-with-a-purpose based on Twitter

11 10 2010

I am happy to announce that my research group at TU Graz has launched Bulltweetbingo!, a game-with-a-purpose based on Twitter, today. The game is already live and available at http://bingo.tugraz.at. For an introduction to the idea of Buzzword Bingo, please see the following IBM commercial (Youtube video).

 

IBM Innovation Buzzword Bingo (Youtube)

 

Rather than playing buzzword bingo while listening to a talk, the idea of Bulltweetbingo! is to play buzzword bingo with the people you follow on Twitter. Everyone you follow automatically participates in the game by tweeting. A Bulltweetbingo game terminates (i.e. hits “Bingo!”) when the people you follow use a particular combination of the defined buzzwords in their tweets (a minimal sketch of this mechanic is shown below). We intend to use the data provided by each game in our research on analyzing the semantics of short messages on systems such as Twitter or Facebook. Each game provides information about the relevance and topics of tweets for a particular person, as well as some information on the topics of tweets that a person expects to receive in the future.
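For illustration, here is a minimal sketch of the termination logic described above; it is a hypothetical simplification of the actual game, which may differ in board size and winning rules:

```python
# A player defines a 5x5 board of buzzwords; words are marked off as the
# people the player follows tweet them, and the game ends on a complete
# row or column (diagonals could be added the same way).

def check_bingo(board, marked):
    """board: 5x5 list of buzzwords; marked: set of buzzwords seen in tweets."""
    rows = any(all(w in marked for w in row) for row in board)
    cols = any(all(row[c] in marked for row in board) for c in range(5))
    return rows or cols

def on_tweet(board, marked, tweet_text):
    """Mark any board buzzword occurring in an incoming tweet."""
    for row in board:
        for word in row:
            if word.lower() in tweet_text.lower():
                marked.add(word)
    return check_bingo(board, marked)
```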

I’m copy’n pasting some more information about the game that we have made available on the game website (about the project).

Bulltweetbingo!
Playing a game of bingo with people you follow on Twitter.

A team of researchers from Graz University of Technology, Austria has developed one of the first games-with-a-purpose that is exclusively based on Twitter.

The goal of this project is to annotate and to better understand the short messages posted to so-called social awareness streams such as Twitter or Facebook. Using this data, the researchers aim to improve the ability of computers to effectively organize and make sense out of the sea of short messages available today.

Dr. Markus Strohmaier, Assistant Professor at the Knowledge Management Institute at Graz University of Technology, Austria explains: “While social awareness streams such as Twitter or Facebook have experienced significant popularity over the last few years, we know little about how to best understand, search and organize the information that is contained in them.”

To tackle this problem, the researchers have developed a game of Buzzword Bingo that users can play with people they follow on Twitter.

“With each game users play on our website, we will collect data that helps us develop more effective algorithms for better understanding this new kind of data” Dr. Markus Strohmaier says, “and in addition to that, we simply hope users would enjoy playing a game of Bingo on Twitter. Each game is unique and exciting in a sense that users generally don’t know what tweets people will publish during the course of a bingo game”.

The researchers have launched the site bulltweetbingo! and ask users to sign up and to play a game of Bingo with the people they follow on Twitter. Twitter users can sign up at http://bingo.tugraz.at.

The game was implemented by one of my talented students, Simon Walk – Make sure to hire him if you need a complex web project to be realized quickly and effectively!





Are Tag Clouds Useful for Navigating Social Media?

16 08 2010

This week, my colleague Denis Helic will present results from a recent collaboration investigating the usefulness of tag clouds at the IEEE SocialCom 2010 conference in Minneapolis, Minnesota, USA. In this paper (download pdf), we investigated if and to what extent tag clouds – a popular mechanism for interacting with social media – are useful for navigation.

An Exemplary Tag Cloud from Flickr.com

While tag clouds can potentially serve different purposes, there seems to be an implicit assumption among engineers of social tagging systems that tag clouds are specifically useful for supporting navigation. This is evident in the large-scale adoption of tag clouds for interlinking resources in numerous systems such as Flickr, Delicious, and BibSonomy. However, this Navigability Assumption has hardly been critically reflected upon (with some notable exceptions, for example [1]), and has largely remained untested. In this paper, we demonstrate that the prevalent approach to tag cloud-based navigation in social tagging systems is highly problematic with regard to network-theoretic measures of navigability. In a series of experiments, we show that the Navigability Assumption only holds in very specific settings, and that for the most common scenarios it is wrong.

While recent research has studied navigation in social tagging systems from user-interface [2], [3], [4] and network-theoretic [5] perspectives, the unique focus of this paper is the intersection of these issues. This paper provides answers to questions such as: How do user-interface constraints of tag clouds affect the navigability of tagging systems? And how efficient is navigation via tag clouds from a network-theoretic perspective? In particular, we 1) investigate the intrinsic navigability of tagging datasets without considering user-interface effects, 2) take pragmatic user-interface constraints into account, 3) demonstrate that for many social tagging systems the so-called Navigability Assumption does not hold, and 4) use our findings to illuminate a path towards improving the navigability of tag clouds.

Here’s the abstract:

Abstract: It is a widely held belief among designers of social tagging systems that tag clouds represent a useful tool for navigation. This is evident in, for example, the increasing number of tagging systems offering tag clouds for navigational purposes, which hints towards an implicit assumption that tag clouds support efficient navigation. In this paper, we examine and test this assumption from a network-theoretic perspective, and show that in many cases it does not hold. We first model navigation in tagging systems as a bipartite graph of tags and resources and then simulate the navigation process in such a graph. We use network-theoretic properties to analyse the navigability of three tagging datasets with regard to different user interface restrictions imposed by tag clouds. Our results confirm that tag-resource networks have efficient navigation properties in theory, but they also show that popular user interface decisions (such as “pagination” combined with reverse-chronological listing of resources) significantly impair the potential of tag clouds as a useful tool for navigation. Based on our findings, we identify a number of avenues for further research and the design of novel tag cloud construction algorithms. Our work is relevant for researchers interested in navigability of emergent hypertext structures, and for engineers seeking to improve the navigability of social tagging systems.

The results presented in this paper make a theoretical and an empirical argument against existing approaches to tag cloud construction. Our work thereby both confirms and refutes the assumption that current tag cloud incarnations are a useful tool for navigating social tagging systems. While we confirm that tag-resource networks have efficient navigational properties in theory, we show that popular user interface decisions (such as “pagination” combined with reverse-chronological listing of resources) significantly impair navigability. Our experimental results demonstrate that popular approaches to using tag clouds for navigational purposes suffer from significant problems. We conclude that in order to make full use of the potential of tag clouds for navigating social tagging systems, new and more sophisticated ways of thinking about designing tag cloud algorithms are needed.
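To give a feel for the kind of navigation process such experiments simulate – a toy random walk on a tiny tag-resource graph, not the paper’s actual datasets or algorithms – consider this sketch, where the parameter k mimics the user-interface cap on how many tags a tag cloud displays:

```python
import random

# Toy bipartite tag-resource graph: each resource carries a set of tags.
resources = {"r1": {"web", "tags"}, "r2": {"web", "nav"}, "r3": {"nav", "ux"}}
tag_index = {}                                   # tag -> resources carrying it
for r, tags in resources.items():
    for t in tags:
        tag_index.setdefault(t, set()).add(r)

def navigate(start, target, k=1, max_steps=10):
    """Random tag-cloud navigation: from a resource, pick one of at most k
    displayed tags, then jump to a random resource carrying that tag."""
    current = start
    for step in range(max_steps):
        if current == target:
            return step                          # number of clicks needed
        shown = random.sample(sorted(resources[current]), k)   # UI cap on tags
        tag = random.choice(shown)
        current = random.choice(sorted(tag_index[tag]))
    return None                                  # gave up: target not reached

print(navigate("r1", "r3", k=1))
```

Measuring how often and how quickly such walks reach their targets as k and the resource-listing rules vary is the spirit of the navigability experiments described above.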

Here’s the full reference for the paper, and a link to the pdf as well as to preliminary slides:

Reference and PDF Download: D. Helic, C. Trattner, M. Strohmaier and K. Andrews, On the Navigability of Social Tagging Systems, The 2nd IEEE International Conference on Social Computing (SocialCom 2010), Minneapolis, Minnesota, USA, 2010. (download pdf) (related slides)

Further references:

[1] M. A. Hearst and D. Rosner, “Tag clouds: Data analysis tool or social signaller?” in HICSS ’08: Proceedings of the Proceedings of the 41st Annual Hawaii International Conference on System Sciences. Washington, DC, USA: IEEE Computer Society, 2008.
[2] C. S. Mesnage and M. J. Carman, “Tag navigation,” in SoSEA ’09: Proceedings of the 2nd international workshop on Social software engineering and applications. New York, NY, USA: ACM, 2009, pp. 29–32.
[3] A. W. Rivadeneira, D. M. Gruen, M. J. Muller, and D. R. Millen, “Getting our head in the clouds: toward evaluation studies of tagclouds,” in CHI ’07: Proceedings of the SIGCHI conference on Human factors in computing systems. New York, NY, USA: ACM, 2007, pp. 995–998.
[4] J. Sinclair and M. Cardew-Hall, “The folksonomy tag cloud: when is it useful?” Journal of Information Science, vol. 34, p. 15, 2008. [Online]. Available: http://jis.sagepub.com/cgi/content/abstract/34/1/15
[5] N. Neubauer and K. Obermayer, “Hyperincident connected components of tagging networks,” in HT ’09: Proceedings of the 20th ACM conference on Hypertext and hypermedia. New York, NY, USA: ACM, 2009, pp. 229–238.





On taxonomies, folksonomies, and tweetonomies

17 04 2010

Towards a Taxonomy of Meta-Desserts (by several_bees @flickr)

For centuries, taxonomies have been a tool for mankind to bring structure to the world. Taxonomies (wikipedia: “the practice and science of classification”) were developed in different fields of science, including – but not limited to – biology (e.g. taxonomies of animals) or library sciences (e.g. taxonomies of literature). Regardless of the particular domain of application, in most cases those taxonomies were developed by a selected few (e.g. librarians), and were used by many.

With the emergence of personal computers and file directories, the task of taxonomy development was brought to the masses. Suddenly everyone (i.e. every computer user) was in charge of developing, maintaining and transforming personal taxonomic structures in order to organize and (re-)find resources. While this development led to a vast increase in personal taxonomies, it was not until del.icio.us popularized tagging as a new form of resource organization that users’ personal taxonomies were exposed publicly. This has made it possible to aggregate a large number of personal taxonomies into collective taxonomic structures. The result of such aggregation has since been referred to as folksonomies, i.e. emergent structures collectively produced by a large number of users in a bottom-up manner.

In social awareness streams (pdf) such as Twitter or Facebook, users typically do not aim to classify or organize resources; instead they engage in casual chatter and dialogue, occasionally using syntax to coordinate communication (such as #hashtags or @replies). Taxonomic structures can be assumed to play a subordinate role for users of social awareness streams.

In a recent paper to be presented at the SemSearch Workshop at WWW2010 [1], however, we show that latent conceptual structures – similar to taxonomies or folksonomies – exist in social awareness streams, and that we can acquire these structures through simple aggregation mechanisms.

Abstract: Although one might argue that little wisdom can be conveyed in messages of 140 characters or less, this paper sets out to explore whether the aggregation of messages in social awareness streams, such as Twitter, conveys meaningful information about a given domain. As a research community, we know little about the structural and semantic properties of such streams, and how they can be analyzed, characterized and used. This paper introduces a network-theoretic model of social awareness streams, a so-called “tweetonomy”, together with a set of stream-based measures that allow researchers to systematically define and compare different stream aggregations. We apply the model and measures to a dataset acquired from Twitter to study emerging semantics in selected streams. The network-theoretic model and the corresponding measures introduced in this paper are relevant for researchers interested in information retrieval and ontology learning from social awareness streams. Our empirical findings demonstrate that different social awareness stream aggregations exhibit interesting differences, making them amenable for different applications [1].

In the paper, we introduce the notion of tweetonomies, and a corresponding tri-partite model of social awareness streams that extends the existing model of folksonomies by accommodating user-generated syntax (such as slashtags and other emerging syntax), thereby integrating the communicative nature of such streams.
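To give a flavor of what a simple stream aggregation can look like – an illustrative simplification, not the paper’s formal model – here is a sketch that builds one possible aggregation, a hashtag co-occurrence network, from a handful of toy tweets:

```python
import re
from collections import Counter
from itertools import combinations

# Toy stream: (user, message) pairs with user-generated syntax (#hashtags).
tweets = [
    ("alice", "semantic search at #www2010 #semsearch"),
    ("bob",   "reading about #folksonomies and #www2010"),
]

# Aggregate: link two hashtags whenever they co-occur in the same message.
edges = Counter()
for user, text in tweets:
    hashtags = sorted(set(re.findall(r"#\w+", text.lower())))
    for a, b in combinations(hashtags, 2):
        edges[(a, b)] += 1        # edge weight = number of co-occurrences

print(edges)
# Counter({('#semsearch', '#www2010'): 1, ('#folksonomies', '#www2010'): 1})
```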

In the figure below, we have applied the network-theoretic model of tweetonomies to acquire a semantic network of hashtags that could be used for a range of different purposes, such as for navigating social awareness streams or for recommendation problems.

A tweetonomy of hashtags, acquired from Twitter (with the help of Jan Poeschko, click for full image, 2.6 MB)

Our work shows that tweetonomies are far more complex structures than – for example – taxonomies or folksonomies. One reason for this observation lies in the dynamic and user-generated nature of their syntax, but also in the fact that tweetonomies accommodate a much richer language than the language used in social tagging systems (tweets vs. tags).

The results of our work suggest that tweetonomies are a novel and promising concept, different from taxonomies and folksonomies where people engage in conscious acts of classification. Whether tweetonomies have the potential to bring order and structure to social awareness streams similar to the way folksonomies brought order to social tagging systems remains a question to be answered.

Update (May 5 2010): An interesting question that was raised during the presentation of the paper at the WWW’2010 workshop was whether it would be justified to introduce tweetonomies as a new concept. In other words, are the structures that we observe on Twitter not just a different form of folksonomies? I’d argue for the necessity of a new concept for the following reasons: While taxonomies and folksonomies emerge when users structure resources, tweetonomies emerge when users structure conversation. Because conversations are inherently different from resources (e.g. they are dynamic and involve multiple users), the structures that emerge from social awareness streams (tweetonomies) can be expected to differ from the structures that emerge from social bookmarking systems (folksonomies). Whether this is really the case, however, needs to be investigated in future work.

References:

[1] C. Wagner, M. Strohmaier, The Wisdom in Tweetonomies: Acquiring Latent Conceptual Structures from Social Awareness Streams, Semantic Search 2010 Workshop (SemSearch2010), in conjunction with the 19th International World Wide Web Conference (WWW2010), Raleigh, NC, USA, April 26-30, ACM, 2010. (pdf)





Call for Papers: International Workshop on Modeling Social Media 2010 (MSM’10)

15 03 2010

I’d like to point you to a Call for Papers for a workshop I’m involved in organizing at Hypertext 2010 in Toronto this June. I’m really excited about the focus of this event, and I’m looking forward to lots of exciting discussions and presentations (check out the invited talks and panelists!).

International Workshop on
Modeling Social Media 2010 (MSM’10)

Website: http://kmi.tugraz.at/workshop/MSM10/

June 13, 2010, co-located with Hypertext 2010,
Toronto, Canada

Important Dates:

* Submission Deadline: April 9, 2010
* Notification of Acceptance: May 13, 2010
* Final Papers Due: May 20, 2010
* Workshop date: June 13, 2010, Toronto, Canada

Workshop Organizers:

  • Alvin Chin, Nokia Research Center, Beijing, China, alvin.chin (at) nokia.com
  • Andreas Hotho, University of Wuerzburg, Germany, hotho (at) informatik.uni-wuerzburg.de
  • Markus Strohmaier, Graz University of Technology, Austria, markus.strohmaier (at) tugraz.at

Format:

The workshop will be opened by an invited talk given by Ed Chi (Palo Alto Research Center). The talk will be followed by a number of peer-reviewed research and position paper presentations and a discussion panel including Barry Wellman (University of Toronto), Marti Hearst (University of California, Berkeley) and Ed Chi (Palo Alto Research Center).

Workshop’s Objectives and Goals:

The goal of this workshop is to focus the attention of researchers on the increasingly important role of modeling social media. The workshop aims to attract and discuss a wide range of modeling perspectives (such as justificative, explanative, descriptive, formative, or predictive models) and approaches (statistical modeling, conceptual modeling, temporal modeling, etc.). We want to bring together researchers and practitioners with diverse backgrounds interested in 1) exploring different perspectives and approaches to modeling complex social media phenomena and systems, 2) the different purposes and applications that models of social media can serve, 3) issues of integrating and validating social media models, and 4) new modeling techniques for social media. The workshop aims to start a dialogue to reflect upon and discuss these issues.

Topics:

Topics may include, but are not limited to:

+ new modeling techniques and approaches for social media
+ models of propagation and influence in twitter, blogs and social tagging systems
+ models of expertise and trust in twitter, wikis, newsgroups, question and answering systems
+ modeling of social phenomena and emergent social behavior
+ agent-based models of social media
+ models of emergent social media properties
+ models of user motivation, intent and goals in social media
+ cooperation and collaboration models
+ software-engineering and requirements models for social media
+ adapting and adaptive hypertext models for social media
+ modeling social media users and their motivations and goals
+ architectural and framework models
+ user modeling and behavioural models
+ modeling the evolution and dynamics of social media

Preliminary Program Committee (confirmed):
  • Ansgar Scherp, Koblenz University, Germany
  • Roelof van Zwol, Yahoo! Research Barcelona, Spain
  • Marti Hearst, UC Berkeley, USA
  • Ed Chi, PARC, USA
  • Peter Pirolli, PARC, USA
  • Steffen Staab, Koblenz University, Germany
  • Barry Wellman, University of Toronto, Canada
  • Daniel Gayo-Avello, University of Oviedo, Spain
  • Jordi Cabot, INRIA, France
  • Pranam Kolari, Yahoo! Research, USA
  • Tad Hogg, Institute for Molecular Manufacturing, USA
  • Wai-Tat Fu, University of Illinois at Urbana-Champaign, USA
  • Thomas Kannampallil, University of Texas, USA
  • Justin Zhan, Carnegie Mellon University, USA
  • Marc Smith, ConnectedAction, USA
  • Mark Chignell, University of Toronto, Canada

Website: http://kmi.tugraz.at/workshop/MSM10/





WWW’2010 – Stop Thinking, Start Tagging: Tag Semantics Emerge From Collaborative Verbosity

12 02 2010

I want to share the abstract of our upcoming paper at WWW’2010 (here is a link to the full paper). In case you are interested in our research and going to WWW in Raleigh this year as well, I’d be happy if you’d get in touch.

C. Körner, D. Benz, A. Hotho, M. Strohmaier, G. Stumme, Stop Thinking, Start Tagging: Tag Semantics Emerge From Collaborative Verbosity, 19th International World Wide Web Conference (WWW2010), Raleigh, NC, USA, April 26-30, ACM, 2010.

Abstract: Recent research provides evidence for the presence of emergent semantics in collaborative tagging systems. While several methods have been proposed, little is known about the factors that influence the evolution of semantic structures in these systems. A natural hypothesis is that the quality of the emergent semantics depends on the pragmatics of tagging: Users with certain usage patterns might contribute more to the resulting semantics than others. In this work, we propose several measures which enable a pragmatic differentiation of taggers by their degree of contribution to emerging semantic structures. We distinguish between categorizers, who typically use a small set of tags as a replacement for hierarchical classification schemes, and describers, who are annotating resources with a wealth of freely associated, descriptive keywords. To study our hypothesis, we apply semantic similarity measures to 64 different partitions of a real-world and large-scale folksonomy containing different ratios of categorizers and describers. Our results not only show that ‘verbose’ taggers are most useful for the emergence of tag semantics, but also that a subset containing only 40% of the most ‘verbose’ taggers can produce results that match and even outperform the semantic precision obtained from the whole dataset. Moreover, the results suggest that there exists a causal link between the pragmatics of tagging and resulting emergent semantics. This work is relevant for designers and analysts of tagging systems interested (i) in fostering the semantic development of their platforms, (ii) in identifying users introducing “semantic noise”, and (iii) in learning ontologies.

More details can be found in the full paper.
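To give a rough intuition for the categorizer/describer distinction, here is a toy sketch of one conceivable pragmatic measure – the ratio of distinct tags to total tag assignments – which is a simplification for illustration, not necessarily one of the measures proposed in the paper:

```python
def vocabulary_growth(tag_assignments):
    """tag_assignments: list of tag sets, one per bookmarked resource.
    Returns distinct tags used divided by total tag assignments:
    close to 0 -> categorizer-like reuse, close to 1 -> describer-like."""
    vocab, total = set(), 0
    for tags in tag_assignments:
        vocab |= tags
        total += len(tags)
    return len(vocab) / total if total else 0.0

# Invented toy users: a categorizer reuses few tags, a describer free-associates.
categorizer = [{"work"}, {"work"}, {"fun"}, {"work"}]
describer = [{"python", "tutorial"}, {"recipe", "vegan", "quick"}]
print(vocabulary_growth(categorizer), vocabulary_growth(describer))  # 0.5 1.0
```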

This work is funded in part by the Know-Center and the FWF research grant TransAgere. It is the result of a collaboration with the KDE group at the University of Kassel and the University of Würzburg. You might also want to have a look at a related blog post on the BibSonomy blog.

Some background about the distinction between categorizers and describers can be found in a related paper:

M. Strohmaier, C. Koerner, R. Kern, Why do Users Tag? Detecting Users’ Motivation for Tagging in Social Tagging Systems, 4th International AAAI Conference on Weblogs and Social Media (ICWSM2010), Washington, DC, USA, May 23-26, 2010. (Download pdf)





Measuring Earthquakes on Twitter: The Twicalli Scale

15 01 2010

I got interested in the signal that Twitter received from the last two earthquakes, in California and Haiti. It has recently been suggested that Twitter can play a role in assessing the magnitude of an earthquake, by studying the stream of tweets that contain a reference to the event, such as the stream of messages related to #earthquake, including messages like this. The term “Twichter Scale” has been used in this context to discuss the relation between Twitter and external events such as earthquakes.

Different people have expressed different ideas about a Twichter Scale, for example:

Twichter Scale (n): the fraction of Twitter traffic caused by an earthquake. Unused on the east coast. (@ian_soboroff)

While this definition does not necessarily imply that the Twichter scale indicates the magnitude of earthquakes, it is interesting to ask whether Twitter data can be used for that purpose.

Impact of two earthquakes on different Twitter hashtag streams: #earthquake, #earthquakes and #quake between Jan 9 and Jan 15

When we look at the data, we can clearly identify both earthquakes, represented as spikes in the data. Both earthquakes were comparable in terms of magnitude (6.5 vs. 7.0 on the Richter Scale). And in fact, both events produced a comparable amplitude in the #earthquake hashtag stream. On the surface, this might be a confirmation of the idea of a Twichter Scale, based on the Richter Scale, which measures the magnitude of an earthquake. The Richter Scale produces the same value for a given earthquake, no matter where you are.

However, there is another, less scientific measure to characterize earthquakes – the so-called Mercalli scale – which is a measure of an earthquake’s effect on people and structures.

This yields the interesting question of whether Twitter streams can better serve as an indicator of the strength (Richter) or the impact (Mercalli) of an earthquake.

As we can see in the figure, the amplitude produced on Twitter is approximately equal for both events (almost 400 messages per hour). My suspicion, however, is that this is not because Twitter accurately captures the strength of earthquakes, but because the Jan 9 earthquake was closer to California, where more people (more Twitter users) are willing to share their experiences. So it seems that this produced an amplitude of similar extent, although the impact of the Jan 9 earthquake in California on structures and people was much weaker than the impact of the Jan 12 earthquake in Haiti.

So how can we identify differences between earthquakes in terms of their impact on people and structures?

When we look at the diagram above, we can see a clear difference after the initial spike: While the Californian earthquake did not cause many follow-up tweets, the aftermath of the Haiti earthquake is clearly visible.

What does that say about Twitter as a signal for earthquakes?

  1. The amplitude of the signal on Twitter is very likely biased by the density of Twitter users in a given region, and can thereby give reliable information about neither the magnitude nor the impact of an earthquake. This suggests that Twitter cannot act as a reliable sensor for detecting the magnitude of an earthquake in a “Richter Scale” sense.
  2. However, the “aftermath” of a spike on Twitter (the integral after the peak) seems to be a good indication of an earthquake’s impact on people and structures – in a “Mercalli Scale” sense (see the sketch below). Long after the initial spike, the Haiti earthquake is still the topic of conversations on Twitter (likely related to fundraising efforts and other aid activities). Independent of the density of Twitter users in Haiti (which is probably low), the aftermath can clearly be identified.
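To make the amplitude/aftermath distinction concrete, here is a toy sketch with invented hourly tweet counts shaped like the two events described above (the numbers are illustrative, not the actual data):

```python
# Toy comparison of amplitude vs. aftermath for two hourly tweet-count
# series. The numbers are invented for illustration only.
california = [5, 380, 60, 12, 6, 5, 4, 4]            # sharp spike, quick decay
haiti      = [5, 390, 250, 200, 180, 170, 160, 150]  # spike plus a long tail

def amplitude(series):
    return max(series)                     # "Richter-like" peak reading

def aftermath(series):
    peak = series.index(max(series))
    return sum(series[peak + 1:])          # "Twicalli": area after the spike

for name, s in [("California", california), ("Haiti", haiti)]:
    print(name, "amplitude:", amplitude(s), "aftermath:", aftermath(s))
```

Despite nearly identical amplitudes, the aftermath values differ by an order of magnitude, mirroring the difference in impact.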

The Twicalli Scale:

This suggests that Twitter as a sensor for the magnitude of earthquakes (in a Richter Scale sense) does not seem very useful. Twitter is more indicative of earthquakes in a “Twicalli scale” sense:

Using the aftermath (not the amplitude) of twitter stream data, the impact (not the magnitude) of earthquakes becomes visible on Twitter.

Update: Here are links to further resources and the datasets this analysis is based on:

Update II (Aug 27 2010): The Twicalli scale was mentioned in a recent paper on the importance of trust in social awareness streams such as Twitter (page 8, left column)

Marcelo Mendoza, Barbara Poblete and Carlos Castillo, Twitter Under Crisis: Can we trust what we RT?, Workshop on Social Media Analytics, In conjunction with the International Conference on Knowledge Discovery & Data Mining (KDD 2010), PDF download (see page 8, left column)

Update III (Oct 11 2011): Now there’s also a WWW2011 paper mentioning the Twicalli Scale (page 2, top of right column)

Carlos Castillo, Marcelo Mendoza, and Barbara Poblete. 2011. Information credibility on twitter. In Proceedings of the 20th international conference on World wide web (WWW ’11). ACM, New York, NY, USA, 675-684.

Update IV (2017): The Twicalli Scale has been implemented in a decision support tool by the National Seismology Office in Chile; see the corresponding SIGIR poster below.

B. Poblete. Twicalli: An earthquake detection system based on citizen sensors used for emergency response in Chile. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, 2017.





WSDM 2010 List of Accepted Papers

26 12 2009

The list of accepted papers for WSDM 2010 is available now. Lots of exciting papers; I’m particularly interested in the ones related to tagging, microblogging, search intent and user goals. Here’s an excerpt of my reading list (including links to pdf versions whenever they were available):

  • Query Reformulation Using Anchor Text (pdf)
    Van Dang and Bruce Croft
  • Tagging Human Knowledge (technical report)
    Paul Heymann, Andreas Paepcke and Hector Garcia-Molina
  • Ranking Mechanisms in Twitter-Like Forums (pdf)
    Anish Das Sarma, Atish Das Sarma, Sreenivas Gollapudi and Rina Panigrahy
  • Large Scale Query Log Analysis of Re-Finding (pdf)
    Sarah Tyler and Jaime Teevan
  • TwitterRank: Finding Topic-sensitive Influential Twitterers (pdf)
    Jianshu Weng, Ee-peng Lim, Jing Jiang and Qi He
  • I tag, You tag: Translating tags for advanced user models (pdf)
    Robert Wetzker, Carsten Zimmermann and Christian Bauckhage
  • Folks in folksonomies: Social link prediction from shared metadata (pdf)
    Rossano Schifanella, Alain Barrat, Ciro Cattuto, Benjamin Markines and Filippo Menczer





Open Data for Cities: Enabling Citizens to Have the Apps They Want/Need

8 10 2009

During my recent research visit to PARC / the SF Bay Area, I came across a quite impressive initiative by the San Francisco Municipal Government aimed at opening up city data.

While I was aware of Obama’s data.gov initiative on the federal level, opening up municipal data seems interesting because in many cases it is closer to people’s everyday concerns, such as finding a parking lot or avoiding areas with high levels of crime.

http://datasf.org is a website related to the San Francisco initiative, aiming to create transparency about the datasets made available by the city so far, such as the Disabled Parking Blue Zones dataset (.zip download). The general idea is to expose municipal data to the public, in order to enable the public to come up with innovations they feel are useful and/or important. Examples of such innovations can be found in a showcase, including an app for public health scores of SF restaurants or an iPhone application for finding kid-friendly locations in the city.

Brilliant! What is also remarkable about these applications is that these innovations came to the city of San Francisco at no cost other than the costs related to publishing the data. Application development was done by developers who cared about a problem or companies who spotted a business opportunity.

In addition, publishing this data shifts – to some extent – responsibility from cities to citizens. If an application does not exist, people can certainly demand that it be provided – but more importantly – they can decide to develop it themselves, or organize in a way to get the applications they want developed independent of municipal approval.

After some further research, I was excited to see that the city of Toronto has a similar initiative, http://www.toronto.ca/open. Toronto mayor David Miller announced it at Mesh09 (watch the video here; the interesting stuff starts at ~12:40).

In excerpts from a transcript of his speech, David Miller sums up the vision of such initiatives nicely:

I am very pleased to announce today at Mesh09 the development of http://toronto.ca/open, which will be a catalogue of city generated data.  The data will be provided in standardized formats, will be machine readable, and will be updated regularly.  This will be launched in the fall of 2009 with an initial series of data sets, including static data like schedules, and some feeds updated in real time.

The benefits to the city of Toronto are extremely significant.  Individuals will find new ways to apply this data, improve city services, and expand their reach.  By sharing our information, the public can help us to improve services and create a more liveable city.  And as an open government, sharing data increases our transparency and accountability.

In his speech, Mayor Miller also challenged the audience to develop apps that would help the government spot deficiencies and improvement potential based on the published data (e.g. which contractor fixes reported road damage fastest / most sustainably / etc.?). Citizens (or better: “developers”) can come up with new ways of tapping into the data to develop new and innovative applications that provide unique services to municipal communities.

In Graz (Wikipedia), I am currently teaching – among other courses – a course on Web Science at Graz University of Technology, with more than 100 students per semester. I see a huge opportunity to combine the latest web algorithms and hands-on experience on the web with the creative potential of students in order to come up with a vast number of new and innovative applications that could have an exciting impact on the city.

A quick review of related efforts in Graz, however, was somewhat disappointing. The only resource I found was the GeoDataServer Graz (if you are aware of other resources please post them as a comment!), which provides web interfaces to mostly static, geographic information, such as “rivers in Graz” or a “3D model of Graz” – which are fine and exciting examples. But for open data, these initiatives would need to be expanded significantly, to include up-to-date data feeds, APIs, common data representation formats and – most importantly – a grand strategy that provides a common vision of how the city wants to go about governing its data. I think this will eventually take place. In any case, I’m looking forward to getting students excited to participate and contribute to such initiatives, as they can probably serve as an excellent vehicle to let students have an impact, and at the same time teach them about the importance of service and responsibility in societies.

This development also ties in nicely with some of my research interests on people’s motivations on the web: enabling people to develop and have access to applications they want seems to be a tremendous shortcut to a more goal-oriented, useful, and ultimately more effective web. And with the advent of end-user programming and tools such as Yahoo Pipes, users no longer need advanced programming skills to come up with useful applications or mashups.





Notes on the Relationship between Search and Tagging

13 09 2009

I had a number of exciting and very inspiring conversations this week with Marc, Rakesh, Fabian, Cathy, and Pranam, as well as with Ed and Rowan. It was great talking to everybody, and I wanted to share some of the issues that were discussed. Most conversations focused on the role of tagging, and how it relates to searching the web. I do not claim that any of these interesting thoughts are mine or that my notes offer answers. They merely aim to serve as pointers to what I consider important issues.

A minority of resources on the web is tagged:

A number of current research projects study the question of how tagged resources can inform/improve search. However, only a minority of resources on the web is tagged, and the gap between tagged and non-tagged resources is likely increasing (although this seems difficult to predict, cf. Paul Heymann’s work). This would mean that a decreasing ratio of resources on the web has tagging information associated with it. The question then becomes: why bother analyzing tagging systems in the first place when their (relative) importance is likely to decrease over time?

Tagged resources exhibit topical bias (that’s a bad thing!):

Tagging is often a geek activity. I am not aware of any studies of Delicious’ user population, but it is likely that Delicious users are more geeky than the rest of the population. This is a bad thing because it would bias any broad attempt at leveraging tagging for search. The bias might depend on the particular tagging system, though: Flickr seems to have a much broader, and thereby more representative, user base.

Bookmarks exhibit temporal bias (that’s a good thing!):

Bookmarking typically represents an event in time triggered by some user. Most tagging systems therefore provide timestamp information, allowing us to infer more about the context in which a given resource is being tagged. This allows us to use tagging systems for studying how information on the web is organized, filtered, diffused and consumed.

Search supersedes any other form of information access/organization:

I found this issue to be the most fundamental and controversial one. How do increasingly sophisticated search engines change the way we interact with information? What role do directories (such as Yahoo!) and personal resource collections (such as “Favorites” folders) play in a world where search engines can (re)find much of the information we require with increasing precision? To give an example: would an electronic record of all resources that a user has ever visited – and a corresponding search interface to them – replace the need for information organization à la Delicious or browser favorites? (all privacy concerns set aside for a moment). How would such a development relate to the desire of users to share information with friends?

Search intent is poorly understood:

While there has been some work on search queries and query log analysis, the intent behind queries remains largely elusive. Existing distinctions (such as the one by Broder) need further elaboration and refinement. An example would be what Rakesh called pseudo-navigational queries – where the user has a certain expectation about the information, but this information can be found on several sites (e.g. Wikipedia, an encyclopedia, or other sites).

Conflict in tagging systems:

Tagging systems are largely tolerant of conflicts, for example with regard to tagging semantics. This is different from systems such as Wikipedia, where conflict is regarded as an important aspect of the collaboration process. Twitter seems to lie between those extremes: conflict can emerge easily (e.g. around hashtags), with some rudimentary support for resolution.

I truly enjoyed these conversations, and hope that they will continue at some point in the future.