Towards automatic performance-driven animation between multiple types of facial model


Reference:

Cosker, D., Borkett, R., Marshall, D. and Rosin, P. L., 2008. Towards automatic performance-driven animation between multiple types of facial model. IET Computer Vision, 2 (3), pp. 129-141.


Official URL:

http://dx.doi.org/10.1049/iet-cvi:20070041

Abstract

The authors describe a method to re-map animation parameters between multiple types of facial model for performance-driven animation. A facial performance is analysed as a set of facial action parameter trajectories using a modified appearance model whose modes of variation encode specific, pre-definable facial actions. These parameters can then animate other modified appearance models or 3D morph-target-based facial models, so the animation parameters extracted from a video performance may be re-used across multiple types of facial model. The authors demonstrate the effectiveness of the proposed approach by measuring how reliably action parameters are extracted from performances and by showing frames from example animations, and they also demonstrate its potential use in fully automatic performance-driven animation applications.
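The re-mapping idea in the abstract can be illustrated with a minimal sketch: a frame of a performance is expressed as the model mean plus a weighted sum of action modes, the weights are recovered by least-squares projection, and the same weights then drive a second model whose modes (or morph targets) encode the corresponding actions. All names, dimensions, and data below are hypothetical; the paper's actual models are modified appearance models built from images, not toy vectors.

```python
import numpy as np

def extract_action_params(frame, mean, modes):
    """Recover action parameters by least-squares projection of a
    performance frame onto the source model's action modes.
    frame, mean: (n_dims,); modes: (n_modes, n_dims)."""
    residual = frame - mean
    params, *_ = np.linalg.lstsq(modes.T, residual, rcond=None)
    return params

def animate(params, target_mean, target_modes):
    """Re-synthesise a frame on a different model (e.g. another
    appearance model or a set of morph targets) from the same
    action parameters, used here as blend weights."""
    return target_mean + params @ target_modes

# Toy example: two "actions" in a 4-D appearance space.
mean = np.zeros(4)
modes = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
frame = mean + 0.5 * modes[0] + 0.2 * modes[1]

p = extract_action_params(frame, mean, modes)
# p recovers the action weights (0.5 and 0.2), which could now drive a
# second model's corresponding modes or morph targets via animate().
```

The key design point, as described in the abstract, is that the parameters are model-independent action weights: any target model whose modes are aligned with the same pre-defined facial actions can consume them.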

Details

Item Type: Articles
Creators: Cosker, D., Borkett, R., Marshall, D. and Rosin, P. L.
DOI: 10.1049/iet-cvi:20070041
Departments: Faculty of Science > Computer Science
Refereed: Yes
Status: Published
ID Code: 12083
