Research

Automatic audio driven animation of non-verbal actions


Reference:

Cosker, D., Holt, C., Mason, D., Whatling, G., Marshall, D. and Rosin, P.L., 2007. Automatic audio driven animation of non-verbal actions. In: IET 4th European Conference on Visual Media Production, 2007-11-27 - 2007-11-28. IET, p. 16.

Related documents:

This repository does not currently have the full-text of this item.
You may be able to access a copy if URLs are provided below.

Official URL:

http://dx.doi.org/10.1049/cp:20070048

Abstract

While speech-driven animation for lip-synching and facial expression synthesis from speech has previously received much attention, there is no previous work on generating non-verbal actions such as laughing and crying automatically from an audio signal. In this article, initial results from a system designed to address this issue are presented. 3D facial data was recorded for a participant performing different actions (laughing, crying, yawning and sneezing) using a Qualisys (Sweden) optical motion-capture system, while audio data was recorded simultaneously. 30 retro-reflective markers were placed on the participant's face to capture movement. Using this data, an analysis and synthesis machine was trained, consisting of a dual-input Hidden Markov Model (HMM) and a trellis search algorithm that converts HMM visual states and new input audio into new 3D motion-capture data.
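
The trellis search mentioned in the abstract is, in essence, a dynamic-programming decode over the HMM's visual states. Below is a minimal sketch of that step (not the authors' implementation; the dual-input HMM details are given in the paper): a standard Viterbi recursion in Python/NumPy that recovers the most likely visual-state sequence from per-frame audio log-likelihoods. All names and the toy parameters are illustrative assumptions.

import numpy as np

def viterbi(log_pi, log_A, log_B):
    """log_pi: (S,) initial state log-probs; log_A: (S, S) transition
    log-probs; log_B: (T, S) log-likelihoods of each audio frame under
    each visual state. Returns the most likely state path."""
    T, S = log_B.shape
    delta = log_pi + log_B[0]           # best score ending in each state
    back = np.zeros((T, S), dtype=int)  # backpointers through the trellis
    for t in range(1, T):
        scores = delta[:, None] + log_A     # (prev, next) candidate scores
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    # Trace the best path backwards from the final frame.
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: 3 visual states, 5 audio frames, random model parameters.
rng = np.random.default_rng(0)
log_pi = np.log(np.full(3, 1.0 / 3))
log_A = np.log(rng.dirichlet(np.ones(3), size=3))
log_B = np.log(rng.dirichlet(np.ones(3), size=5))
print(viterbi(log_pi, log_A, log_B))

Given such a path, each decoded visual state would index into learned 3D marker configurations to drive the animation; that mapping is specific to the trained model and is not reproduced here.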

Details

Item Type: Conference or Workshop Items (UNSPECIFIED)
Creators: Cosker, D., Holt, C., Mason, D., Whatling, G., Marshall, D. and Rosin, P.L.
DOI: 10.1049/cp:20070048
Related URLs: http://www.scopus.com/inward/record.url?scp=84868998652&partnerID=8YFLogxK (URL type: UNSPECIFIED)
Departments: Faculty of Science > Computer Science
Research Centres: Media Technology Research Centre
Status: Published
ID Code: 33810
