Speech-Driven 3D Facial Animation
Date Issued
2006
Author(s)
Huang, Jun-Ze
Language
en-US
Abstract
It is often difficult to animate a face model so that it speaks a given utterance; even for professional animators, doing so takes a lot of time. Our work provides a speech-driven 3D facial animation system that allows the user to easily generate facial animations. The user only needs to provide a speech recording as input; the output is a 3D facial animation that matches the input speech.
Our work can be divided into three sub-systems. The first is the MMM (multidimensional morphable model). The MMM is built from pre-recorded training video using machine learning techniques; we use it to generate realistic speech video corresponding to the input speech.
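The abstract does not give the MMM's internals, but a morphable model of this kind typically represents each frame as a weighted combination of learned prototype vectors. The following is a minimal illustrative sketch under that assumption; `mmm_synthesize` and the toy prototype vectors are hypothetical, not from the thesis.

```python
import numpy as np

# Hypothetical sketch of a multidimensional morphable model (MMM):
# each synthesized frame is a convex combination of prototype
# shape/appearance vectors learned from the training video.

def mmm_synthesize(prototypes: np.ndarray, weights) -> np.ndarray:
    """Blend prototype vectors (one per row) with convex weights
    to synthesize one frame's shape/appearance vector."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()          # normalize so the weights sum to one
    return w @ prototypes    # weighted sum of the prototype rows

# Toy example: 3 prototype "frames", each a 4-dimensional vector.
prototypes = np.array([[0.0, 0.0, 0.0, 0.0],
                       [1.0, 1.0, 1.0, 1.0],
                       [2.0, 0.0, 2.0, 0.0]])
frame = mmm_synthesize(prototypes, [0.5, 0.25, 0.25])
```

Driving such a model with speech then amounts to choosing a trajectory of weight vectors over time, one per output frame.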
The second part is Facial Tracking, which extracts the feature points of the human subject in the synthetic speech video.
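The thesis does not specify the tracking algorithm; one common baseline is to follow each feature point between frames by matching a small patch around it. The sketch below illustrates that idea with brute-force sum-of-squared-differences search; `track_point` and the toy frames are hypothetical, not the author's method.

```python
import numpy as np

# Hypothetical sketch of tracking one feature point between frames by
# exhaustive template matching: take a small patch around the point in
# the previous frame and find its best match in the next frame.

def track_point(prev: np.ndarray, nxt: np.ndarray, pt, half: int = 1):
    """Return the (row, col) in `nxt` whose patch best matches the
    patch around `pt` in `prev` (sum of squared differences)."""
    y, x = pt
    tpl = prev[y - half:y + half + 1, x - half:x + half + 1]
    best, best_err = pt, np.inf
    for yy in range(half, nxt.shape[0] - half):
        for xx in range(half, nxt.shape[1] - half):
            patch = nxt[yy - half:yy + half + 1, xx - half:xx + half + 1]
            err = np.sum((patch - tpl) ** 2)
            if err < best_err:
                best, best_err = (yy, xx), err
    return best

# Toy frames: a bright 3x3 blob at (2, 2) moves to (3, 4).
prev = np.zeros((8, 8)); prev[1:4, 1:4] = 1.0
nxt = np.zeros((8, 8)); nxt[2:5, 3:6] = 1.0
```

Running the full set of feature points through such a tracker, frame by frame, yields the motion trajectories that drive the next stage.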
The third part is Mesh-IK (mesh-based inverse kinematics). Mesh-IK takes the motion of the feature points as a guide to deform 3D face models, making the resulting model match the appearance of the corresponding frame of the speech video. Thus we obtain a 3D facial animation as the output.
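In the spirit of feature-guided mesh deformation, one simple formulation pins a few "feature" vertices to their tracked targets while the remaining vertices follow by keeping the mesh's differential (Laplacian) coordinates close to the rest pose, solved in a least-squares sense. The sketch below is a minimal 1D illustration of that idea, not the Mesh-IK algorithm itself; `deform`, the weight `w`, and the toy chain are assumptions.

```python
import numpy as np

# Hypothetical sketch of feature-guided mesh deformation: softly
# constrain feature vertices to tracked targets while preserving the
# Laplacian (differential) coordinates of the rest pose.

def deform(rest: np.ndarray, L: np.ndarray,
           feature_idx, feature_targets, w: float = 100.0) -> np.ndarray:
    """Least-squares deformation: minimize ||L x - L rest||^2 plus a
    heavily weighted penalty pulling feature vertices to targets."""
    n = rest.shape[0]
    S = np.zeros((len(feature_idx), n))        # selector for features
    for row, i in enumerate(feature_idx):
        S[row, i] = 1.0
    A = np.vstack([L, w * S])
    b = np.vstack([L @ rest, w * np.asarray(feature_targets, float)])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Toy "mesh": a 4-vertex chain in 1D; pin the two endpoints.
rest = np.array([[0.0], [1.0], [2.0], [3.0]])
# Graph Laplacian of the chain 0-1-2-3.
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
deformed = deform(rest, L, [0, 3], [[0.0], [6.0]])
```

The pinned endpoints land on their targets and the interior vertices stretch smoothly between them, which is the qualitative behavior one wants when feature points drag a face mesh.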
Facial Tracking and Mesh-IK can also take a real speech video, or even a real expression video, as the input and produce the corresponding facial animation.
Subjects
語音
臉部動畫
追蹤
speech
facial animation
tracking
Type
other
File(s)
Name
ntu-95-R93725012-1.pdf
Size
23.31 KB
Format
Adobe PDF
Checksum
(MD5):ad7535ee68b90c23bf3cdd82c60b5cb8
