Gaze movement inference for user adapted image annotation and retrieval
Mirza, S.N.H., Izquierdo, E. and Proulx, M., 2011. Gaze movement inference for user adapted image annotation and retrieval. In: MM '11 ACM Multimedia Conference, 2011-11-28 - 2011-12-01, Scottsdale, AZ.
In media personalisation, the media provider needs feedback from its users to adapt the media content used for interaction. At present this feedback is limited to mouse clicks and keyboard entries. This report explores possible ways to include a user's gaze movements as a form of feedback for media personalisation and adaptation. Features are extracted from users' gaze trajectories while they search an image database for a target concept (TC). These features are used to measure a user's visual attention to every image that appears on the screen, called the user interest level (UIL). Because different people react differently to the same content, a new adapted processing interface is built automatically for every new user. On average, our interface detected 10% of the images belonging to the TC class with no error, and it identified 40% of them with only 20% error. We show in this paper that gaze movement is a reliable form of feedback for measuring a person's interest in images, which helps to personalise image annotation and retrieval.
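To make the UIL idea concrete, the following is a minimal sketch, not the authors' method: it assumes gaze data has already been reduced to (image_id, fixation-duration) pairs and scores each image by its share of total viewing time. The function name `interest_levels` and the dwell-time feature are illustrative assumptions; the paper extracts richer features from the full gaze trajectory.

```python
from collections import defaultdict

def interest_levels(gaze_samples):
    """Aggregate per-image dwell time and normalise it to a 0-1
    user interest level (UIL) per image.

    gaze_samples: list of (image_id, dwell_ms) tuples, one per
    fixation that landed on an on-screen image.  (Hypothetical
    input format, chosen for illustration.)
    """
    dwell = defaultdict(float)
    for image_id, dwell_ms in gaze_samples:
        dwell[image_id] += dwell_ms
    total = sum(dwell.values())
    if total == 0:
        return {}
    # UIL here = share of total viewing time spent on each image.
    return {img: t / total for img, t in dwell.items()}

# Example: three fixations across two images.
samples = [("img_a", 300.0), ("img_b", 100.0), ("img_a", 100.0)]
print(interest_levels(samples))  # → {'img_a': 0.8, 'img_b': 0.2}
```

A per-user adaptation step, as described in the abstract, could then calibrate a threshold on these scores separately for each user before labelling images as belonging to the TC class.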
Item Type: Conference or Workshop Items (UNSPECIFIED)
Creators: Mirza, S.N.H., Izquierdo, E. and Proulx, M.
Departments: Faculty of Humanities & Social Sciences > Psychology