Why do users tag? Detecting user motivation in tagging systems

5 04 2009

On the “social web” or “web 2.0”, where user participation is entirely voluntary, user motivation has been identified as a key factor contributing to the success of tagging systems. Web researchers have been trying to identify the reasons why tagging systems work for several years now, as evidenced by, for example, a panel at CHI 2006 and a number of conferences and workshops on this topic.

Recent research on tagging motivation suggests that it is a rather complex construct. However, there seems to be an emerging consensus that a distinction between at least two categories of tagging motivation is useful: Categorization vs. Description. (Update May 30 2009: I was able to trace back the earliest mention of this distinction to a blog post by Tom Coates from 2005).

UPDATE March 15 2010 – More results can be found in: M. Strohmaier, C. Koerner, R. Kern, Why do Users Tag? Detecting Users’ Motivation for Tagging in Social Tagging Systems, 4th International AAAI Conference on Weblogs and Social Media (ICWSM2010), Washington, DC, USA, May 23-26, 2010. (Download pdf)

UPDATE April 23 2010 – Even more results in: C. Körner, R. Kern, H.P. Grahsl, M. Strohmaier, Of Categorizers and Describers: An Evaluation of Quantitative Measures for Tagging Motivation, 21st ACM SIGWEB Conference on Hypertext and Hypermedia (HT2010), Toronto, Canada, June 13-16, ACM, 2010. (download pdf)

Categorization vs. Description

Categorization: Users who are motivated by Categorization engage in tagging because they want to construct and maintain a navigational aid to the resources (URLs, photos, etc) being tagged. This typically implies a limited set of tags (or categories) that is rather stable. Resources are assigned to tags whenever they share some common characteristic important to the mental model of the user (e.g. ‘family photos’, ‘trip to Vienna’ or ‘favorite list of URLs’). Because the tags assigned are very close to the mental models of users, they can act as suitable facilitators for navigation and browsing.

Description: On the other hand, users who are motivated by Description engage in tagging because they want to accurately and precisely describe the resources being tagged. This typically implies an open set of tags, with a rather dynamic and unlimited tag vocabulary. The goal of tagging is to identify those tags that match the resource best. Because the tags assigned are very close to the content of the resources, they can act as suitable facilitators for description and searching.

Related Research: This basic distinction can be identified in the work of a number of researchers who have made similar distinctions: Xu et al 2006 (“Context-based” vs. “Content-based”), Golder and Huberman 2006 (“Refining Categories” vs. “Identifying what it is/is about”), Marlow et al 2006 (“Future retrieval” vs. “Contribution and Sharing”), Ames and Naaman 2007 (“Organization” vs. “Communication”) and Heckner et al 2008 (“Personal Information Management” vs. “Sharing”), to give a few examples. All of these represent recent research aiming to demystify and conceptualize the reasons why users participate in tagging systems.

Why should we care?

“In the wild”, user behavior on social tagging systems is often a combination of both. So why is this distinction interesting? I believe it is interesting because it has a number of important implications, including but not limited to:

  1. Tag Recommender Systems: Assuming that a user is a “Categorizer”, he will more likely reject tags recommended from a larger user population, because he is primarily interested in constructing and maintaining his own taxonomy, using his individual tag vocabulary.
  2. Search: Tags produced by “Describers” are more likely to be helpful for search and retrieval because they focus on the content of resources, whereas tags produced by “Categorizers” focus on the users’ mental models. Tags by Categorizers are thus more subjective, whereas tags by Describers are more objective.
  3. Knowledge Acquisition: Folksonomies, i.e. the conceptual structures that can be inferred from the tripartite graph of tagging systems, are likely to be influenced by the mixture or dominance of Categorizers and Describers in the system. A tagging system primarily populated by Categorizers is likely to give rise to a completely different set of possible folksonomies than one primarily populated by Describers. More importantly, it is plausible to assume that even within a given tagging system, tagging motivation varies among users.

This brings me to a small research project I am currently working on: Assuming that a) this distinction in user motivation exists in real-world tagging systems and b) it has important implications, it would be interesting to measure and detect the degree to which users are Categorizers or Describers. Due to the latent nature of tagging motivation, past research has mostly focused on questionnaire- or sample-based studies, asking users how they interpret their own tagging behavior. While this early work has provided fundamental insights into tagging motivation and contributed significantly to theory building, as a research community we currently lack robust metrics and automatic methods to detect tagging motivation in tagging systems without direct user interaction.

Detecting Tagging Motivation

I think there are several approaches to detecting whether users are Categorizers or Describers without asking them directly. One approach would focus on analyzing the semantics of tags, using WordNet and other knowledge bases to determine the meaning of tags and infer user motivation. This would require parsing text and performing linguistic analysis, which I believe is difficult in the presence of typos, named entities, combined tags (“toread”) and other issues. Another approach would focus on comparing the tag vocabulary of individual users to the tag vocabulary of “the crowd”: users who share a large portion of the common tag vocabulary might be Describers, whereas users with a highly individual vocabulary might be Categorizers (a small sketch of this idea follows below). Again there are problems: tagging systems that accommodate users with different language backgrounds might be prone to detecting user motivation based on false premises.
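
To make the second idea a bit more concrete, here is a minimal sketch of how such a vocabulary comparison could look. This is purely illustrative and not part of any existing system; the data format and the names crowd_vocabulary and crowd_overlap are my own assumptions.

```python
def crowd_vocabulary(all_taggings, min_users=2):
    """Tags used by at least `min_users` distinct users ('the crowd').

    `all_taggings` is assumed to be an iterable of (user, tag) pairs.
    """
    users_per_tag = {}
    for user, tag in all_taggings:
        users_per_tag.setdefault(tag, set()).add(user)
    return {tag for tag, users in users_per_tag.items() if len(users) >= min_users}

def crowd_overlap(user_tags, crowd_tags):
    """Fraction of a user's tag vocabulary that also appears in the crowd's."""
    user_vocab = set(user_tags)
    if not user_vocab:
        return 0.0
    return len(user_vocab & crowd_tags) / len(user_vocab)

# A high overlap would hint at a Describer, a low overlap at a Categorizer;
# subject to the multilingual caveat mentioned in the text above.
```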

So what would be a more robust way of detecting user motivation? I am currently interested in developing a model that is agnostic to language, semantics or social context, focusing solely on statistical properties of individual tagging histories. This way, a determination of user motivation could be made without linguistic analysis and without acquiring complete folksonomies from tagging systems, based on a single user’s tagging log alone. Let me explain what I mean. I hypothesize that the following statistical properties of a user’s tagging history allow us to conduct interesting analyses (a sketch of how they could be computed follows the list):

  1. Tag Vocabulary size over time: Over time, an ideal Categorizer’s tag vocabulary would reach a plateau, because there is only a limited number of categories that are of interest to him. An ideal Describer is not limited in terms of her tagging vocabulary. This should be easy to observe.
  2. Tag Entropy over time: A Categorizer has an incentive to maintain high entropy (or “information value”) in his tag cloud. Tags need to be as discriminative as possible for him to use them as a navigational aid; otherwise they would be of little use in browsing. A Describer would not have a comparable interest in maintaining high entropy.
  3. Percentage of Tag Orphans over time: Categorizers have an interest in a low ratio of Tag Orphans (tags that are only used once) in their set of tags, because lots of orphans would inhibit the usage of their set of tags for browsing. Describers naturally produce lots of orphans when trying to find the most descriptive and complete set of tags for resources.
  4. Tag Overlap: While a Describer would be perfectly fine assigning two or more synonymous tags to the same resource (she might not know which term to use when searching for this resource at a later point), a Categorizer would not have an interest in creating two categories that contain the exact same set of resources. This would again inhibit the usage of tags for browsing, a Categorizer’s main motivation for tagging.
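
To illustrate how these measures could be computed from nothing but a single user’s tagging log, here is a minimal, hypothetical sketch. The data format and all function names are my own assumptions, not the implementation behind the papers linked above.

```python
import math
from collections import Counter
from itertools import combinations

# A tagging log is assumed to be a chronologically ordered list of
# (resource, [tag, tag, ...]) pairs for a single user.

def vocabulary_size_over_time(log):
    """Size of the user's tag vocabulary after each tagged resource."""
    vocab, sizes = set(), []
    for _, tags in log:
        vocab.update(tags)
        sizes.append(len(vocab))
    return sizes

def tag_entropy(log):
    """Shannon entropy (in bits) of the user's overall tag distribution."""
    counts = Counter(tag for _, tags in log for tag in tags)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def orphan_ratio(log):
    """Share of distinct tags that were used exactly once ('tag orphans')."""
    counts = Counter(tag for _, tags in log for tag in tags)
    if not counts:
        return 0.0
    return sum(1 for c in counts.values() if c == 1) / len(counts)

def tag_overlap(log):
    """Average Jaccard overlap between the resource sets of all tag pairs.

    A high value means several tags label (almost) the same resources,
    something a Categorizer would presumably avoid.
    """
    resources_per_tag = {}
    for resource, tags in log:
        for tag in tags:
            resources_per_tag.setdefault(tag, set()).add(resource)
    pairs = list(combinations(resources_per_tag.values(), 2))
    if not pairs:
        return 0.0
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)
```

The “over time” variants from the list can then be obtained by evaluating these functions on growing prefixes of the log, e.g. tag_entropy(log[:n]) for increasing n.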

Preliminary Investigations

I have done some preliminary investigations to explore whether these statistical properties of users’ tagging histories can actually serve as indicators of tagging motivation. Here are my first results:

[Figure: Growth of tag vocabulary in different tagging systems]

The diagram above shows the growth of the tag vocabulary of different taggers. The uppermost red line represents the tagging behavior of an almost “ideal” Describer, in this case tags produced by the ESP game, which represent valid descriptions of the resources they are assigned to. The lowermost green line represents the tagging behavior of an almost “ideal” Categorizer, in this case tags (derived from a number of photo sets) produced by a Flickr user who categorized photos into a limited set of categories (> 100 sets). All other lines represent the tagging behavior of real users on different tagging platforms (BibSonomy, Delicious, Flickr tags). It is worth noting that all other data lies between these two extremes.

In the following, I will discuss the suitability of the tag entropy of single users (as opposed to the work by Chi and Mytkowicz 2008, which focuses on large sets of users) as an indicator for detecting tagging motivation:

[Figure: Change of tag entropy over time]

In this diagram, we can see that while our “ideal” Categorizer and our “ideal” Describer almost mark the extremes, there are some users “outdoing” them (e.g. “u5 bibsonomy bookmarks” has even lower entropy than the tags acquired from the “ideal” Describer, the ESP game). Entropy thus seems to be, to some extent, a useful indicator of tagging motivation.

Next, I’ll discuss data comparing the rate of tag orphans in different datasets:

[Figure: Rate of Tag Orphans over time]

As in the previous diagram, the extreme behaviors represent a good (but not perfect) upper and lower bound for real tagging behavior. While the “ideal” Categorizer (Flickr sets, green line at the bottom) has a very small number of tag orphans, the “ideal” Describer (ESP game data, red line at the top) has a much higher tag orphan rate.

If we can identify the functions of extreme user motivation (“ideal” Categorizers and Describers) and position real user motivation between those extremes, we might be able to come up with scores indicative of user motivation in tagging systems – e.g. a user might be 80% Categorizer and 20% Describer (a toy example of such a score is sketched below). Having such a model could help in exploring the implications of different user motivations outlined above. Together with students (in particular Christian Körner, Hans-Peter Grahsl and Roman Kern), I am working on constructing and validating such a model, which we are aiming to submit to a conference this year.
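
As a toy example of what such a score could look like (this is my own simplification, not the model from the papers linked above): given one of the measures from the list, say the orphan ratio, and its values for the two “ideal” extremes, a user’s position between them can be expressed as a simple normalized score.

```python
def categorizer_score(user_value, categorizer_value, describer_value):
    """Position a user's measure between the two extreme baselines.

    Returns 1.0 if the user matches the 'ideal' Categorizer and 0.0 if she
    matches the 'ideal' Describer, clamped to [0, 1] for users that 'outdo'
    either extreme.
    """
    span = describer_value - categorizer_value
    if span == 0:
        return 0.5  # degenerate case: the extremes coincide
    score = (describer_value - user_value) / span
    return max(0.0, min(1.0, score))

# Hypothetical numbers: the 'ideal' Categorizer has an orphan ratio of 0.05,
# the 'ideal' Describer one of 0.60, and a real user one of 0.16.
print(categorizer_score(0.16, 0.05, 0.60))  # ~0.8, i.e. "80% Categorizer"
```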


References:

Xu et al. (2006). Towards the Semantic Web: Collaborative Tag Suggestions. Proceedings of the Collaborative Web Tagging Workshop at WWW 2006, Edinburgh, Scotland, 2006.

Can all tags be used for search? CIKM ’08: Proceedings of the 17th ACM Conference on Information and Knowledge Management, 193–202, ACM, New York, NY, USA, 2008.

Marlow et al. (2006). HT06, tagging paper, taxonomy, Flickr, academic article, to read. HYPERTEXT ’06: Proceedings of the Seventeenth Conference on Hypertext and Hypermedia, 31–40, ACM, New York, NY, USA, 2006.

Golder and Huberman (2006). Usage patterns of collaborative tagging systems. Journal of Information Science, 32(2):198–208, 2006.

Heckner et al. (2009). Personal Information Management vs. Resource Sharing: Towards a Model of Information Behaviour in Social Tagging Systems. International AAAI Conference on Weblogs and Social Media (ICWSM), San Jose, CA, USA, 2009.

Ames and Naaman (2007). Why we tag: motivations for annotation in mobile and online media. CHI ’07: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 971–980, ACM, New York, NY, USA, 2007.

Chi and Mytkowicz (2008). Understanding the efficiency of social tagging systems using information theory. HT ’08: Proceedings of the Nineteenth ACM Conference on Hypertext and Hypermedia, 81–88, 2008.
