Towards automatic performance-driven animation between multiple types of facial model
Cosker, D., Borkett, R., Marshall, D. and Rosin, P. L., 2008. Towards automatic performance-driven animation between multiple types of facial model. IET Computer Vision, 2 (3), pp. 129-141.
The authors describe a method for re-mapping animation parameters between multiple types of facial model for performance-driven animation. A facial performance is analysed as a set of facial action parameter trajectories using a modified appearance model whose modes of variation encode pre-defined facial actions. These parameters can then drive other modified appearance models or 3D morph-target-based facial models, so the animation parameters extracted from a video performance may be re-used to animate multiple types of facial model. The authors demonstrate the effectiveness of the approach by measuring how reliably action parameters are extracted from performances and by showing frames from example animations, and they illustrate its potential use in fully automatic performance-driven animation applications.
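The core idea of analysing a performance into action parameters with one linear model and re-synthesising with another can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes both models are linear with a mean and an orthonormal matrix of action modes, and that the two models share the same ordered set of action parameters; all names and dimensions are hypothetical.

```python
import numpy as np

def extract_params(frame, mean, modes):
    """Project an observed frame onto the model's action modes:
    b = P^T (x - x_mean), assuming orthonormal mode matrix P."""
    return modes.T @ (frame - mean)

def synthesize(params, mean, modes):
    """Animate a (possibly different) model from shared action
    parameters: x = x_mean + P b."""
    return mean + modes @ params

rng = np.random.default_rng(0)
D, K = 12, 3  # illustrative feature dimension and number of action modes

# Source model (analysed from video) and target model (to be animated);
# each has its own mean and orthonormal modes over the same K actions.
mean_src, mean_tgt = rng.normal(size=D), rng.normal(size=D)
modes_src, _ = np.linalg.qr(rng.normal(size=(D, K)))
modes_tgt, _ = np.linalg.qr(rng.normal(size=(D, K)))

# A synthetic "performance" frame generated by the source model.
true_params = np.array([1.0, -0.5, 0.25])
frame = mean_src + modes_src @ true_params

# Analyse the performance, then re-use the parameters on the target model.
b = extract_params(frame, mean_src, modes_src)
animated = synthesize(b, mean_tgt, modes_tgt)
```

Because the modes are orthonormal, `b` exactly recovers the action parameters used to generate the frame, and applying the same `b` to the target model transfers the performance. Per-frame repetition of this projection over a video yields the action parameter trajectories the abstract describes.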
Creators: Cosker, D., Borkett, R., Marshall, D. and Rosin, P. L.
Departments: Faculty of Science > Computer Science