Dynamic presentation adaptation based on user intent classification on Flickr

10 06 2009

Mathias just pointed me to a recent demonstration of their current research on dynamically adapting the user interface of an image-sharing system, in this case Flickr.com, based on a classification of user intent.

The problem Mathias and his student, Christoph Kofler, are addressing is interesting and can be described as follows.

The basic underlying assumption is that in addition to learning more about the content of image-sharing systems, we also need to know more about the users’ intent in order to improve search.

A majority of research on image-sharing systems such as Flickr has focused on leveraging and improving the use of content-specific (e.g. MPEG-7) as well as user-generated (e.g. tags) metadata to better describe the content of photos and images. This allows systems to better reflect what a given image is about. However, when searching for content, the intent of the user comes into play: depending on the user's search intent, only a subset of resources might be relevant. In other words, a successful search result is one that matches the user's intent with the content available in the image-sharing system.

I’d like to give an example of a particular search-intent category in image-sharing systems where recognizing user intent would be useful:

A user who wants to download an image for later commercial use (e.g. to include it in marketing material) might only want to retrieve items that explicitly allow this. While this copyright information is in principle available in image-sharing systems as metadata (e.g. Creative Commons licenses), these systems need the ability to capture and approximate users’ intent in order to map it onto the relevant resources. This is where existing search in image-sharing systems has enormous potential for improvement.
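To make this concrete, here is a minimal Python sketch of how such license-aware filtering could look against the Flickr API. The search endpoint and its license parameter do exist, but the API key is a placeholder and the specific license IDs are assumptions that would need to be checked against Flickr's API documentation:

```python
# Hypothetical sketch: narrowing a Flickr search to commercially usable images
# once the system has inferred a "commercial use" intent. The API key and the
# license IDs below are placeholders/assumptions, not verified values.
import requests

FLICKR_REST = "https://api.flickr.com/services/rest/"
API_KEY = "YOUR_API_KEY"  # placeholder

# Flickr encodes Creative Commons licenses as numeric IDs; the IDs that permit
# commercial reuse (e.g. CC BY, CC BY-SA) are assumed here to be 4, 5 and 6.
COMMERCIAL_USE_LICENSES = "4,5,6"

def search_for_commercial_use(query):
    """Search Flickr, restricted to licenses that allow commercial reuse."""
    params = {
        "method": "flickr.photos.search",
        "api_key": API_KEY,
        "text": query,
        "license": COMMERCIAL_USE_LICENSES,  # filter on license metadata
        "format": "json",
        "nojsoncallback": 1,
    }
    response = requests.get(FLICKR_REST, params=params)
    response.raise_for_status()
    return response.json()["photos"]["photo"]

if __name__ == "__main__":
    for photo in search_for_commercial_use("vienna skyline")[:5]:
        print(photo["id"], photo["title"])
```

The point is not the API call itself but the mapping: once the system has classified the user's intent as "commercial use", it can translate that intent into a constraint on existing license metadata.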

Mathias and his student are interested in the different possible categories of search intent in image-sharing systems and in how these can inform search. They are currently developing a taxonomy of search intent in image-sharing systems, and they have already built an early prototype that aims to demonstrate the potential of learning about user intent and of using this knowledge to adapt the presentation of search results. While the prototype appears to be at an early stage, relying on simple rule-based mechanisms, I think it demonstrates very well both the difficulty and the importance of learning more about users’ search intent in image-sharing systems.
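Since the prototype's actual rules are not spelled out in the demonstration, here is a deliberately simple, hypothetical sketch of what such a rule-based mechanism might look like: surface cues in the query are matched against hand-written rules, an intent category is picked, and that category drives both filtering and presentation. Every cue word, category, and setting below is invented for illustration:

```python
# Illustrative sketch only: the cue words and intent categories are my own
# invention, not the taxonomy being developed by Mathias and his student.

INTENT_RULES = {
    # cue words in the query       -> inferred search intent
    ("buy", "commercial", "stock"):  "commercial_use",
    ("wallpaper", "desktop", "hd"):  "high_resolution",
    ("who", "person", "portrait"):   "person_lookup",
}

PRESENTATION = {
    # intent category -> how results are filtered and rendered
    "commercial_use":  {"filter": "license:commercial", "layout": "list_with_license"},
    "high_resolution": {"filter": "min_width:1920", "layout": "large_thumbnails"},
    "person_lookup":   {"filter": None, "layout": "faces_grid"},
    "generic":         {"filter": None, "layout": "default_grid"},
}

def classify_intent(query):
    """Return the first intent whose cue words appear in the query."""
    tokens = set(query.lower().split())
    for cues, intent in INTENT_RULES.items():
        if tokens & set(cues):
            return intent
    return "generic"

def presentation_for(query):
    """Choose filter and layout settings based on the classified intent."""
    return PRESENTATION[classify_intent(query)]

print(presentation_for("stock photo of a city skyline"))
# -> {'filter': 'license:commercial', 'layout': 'list_with_license'}
```

Even a toy like this makes the difficulty visible: hand-written rules cover only the cues their author anticipated, which is presumably why a proper taxonomy of search intent is the more interesting contribution.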

[Video: Dynamic presentation adaptation based on user intent classification on Flickr]

Other work on user intent in image-sharing systems focuses, for example, on tagging intent, studying the different motivations users have for tagging (Ames and Naaman 2007).

Click here to watch the six-minute demonstration video.

References:

Ames, M. and Naaman, M. 2007. Why we tag: motivations for annotation in mobile and online media. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (San Jose, California, USA, April 28 – May 03, 2007). CHI ’07. ACM, New York, NY, 971-980. DOI= http://doi.acm.org/10.1145/1240624.1240772
