Title: Transferable Speech-Driven Lips Synthesis (可置換之語音驅動唇形合成方法)
Author: Chen, Hong-Dien (陳宏典)
Advisor: 莊永裕
Institution: National Taiwan University, Graduate Institute of Networking and Multimedia (臺灣大學資訊網路與多媒體研究所)
Date: 2006
Type: thesis
Language: en-US
Keywords: facial animation (人臉動畫); speech animation
URI: http://ntur.lib.ntu.edu.tw//handle/246246/58396
Full text: http://ntur.lib.ntu.edu.tw/bitstream/246246/58396/1/ntu-95-R93944015-1.pdf (application/pdf, 2012123 bytes)

Abstract (translated from Chinese):
Image-based facial animation has reached a high level of realism and can be applied to low-bandwidth video conferencing or to playing the role of a virtual teacher in language learning. However, image-based facial animation requires first recording a five- to ten-minute training video of the specific user and analyzing it to build the model that generates the animation, which limits its applicability. We propose a simple method by which a new user needs to capture only a few specific images and, by reusing the model built from the original user, can generate new facial animations.

Abstract (English):
Image-based videorealistic speech animation achieves significant visual realism, such that it can potentially be used for creating virtual teachers in language learning, digital characters in movies, or even a user's representative in video conferencing at a very low bit rate. However, it comes at the cost of collecting a large video corpus of the specific person to be animated. This requirement hinders its use in broad applications, since a large video corpus of a specific person under a controlled recording setup may not be easily obtained. Hence, we adopt a simple method that allows us to transfer the original animation model to a novel person using only a few different lip images.

Table of contents:
CHAPTER 1 INTRODUCTION ... 12
CHAPTER 2 RELATED WORK ... 15
  2.1. FACIAL CODING ... 15
  2.2. MODEL-BASED FACIAL VIDEO SYNTHESIS ... 16
  2.3. IMAGE-BASED FACIAL VIDEO SYNTHESIS ... 23
CHAPTER 3 BACKGROUND: TRAINABLE VIDEOREALISTIC SPEECH ANIMATION ... 29
  3.1. CORPUS ... 30
  3.2. PRE-PROCESSING ... 30
  3.3. MULTIDIMENSIONAL MORPHABLE MODELS ... 31
    3.3.1. MMM Construction ... 31
    3.3.2. Synthesis ... 32
    3.3.3. Analysis ... 32
  3.4. PHONEME MODELS ... 33
    3.4.1. Phoneme Models Construction ... 33
    3.4.2. Trajectory Synthesis ... 34
    3.4.3. Training ... 35
  3.5. POST-PROCESSING ... 35
CHAPTER 4 MODEL TRANSFER ... 36
  4.1. INITIALIZATION ... 38
  4.2. FLOW MATCHING ... 40
  4.3. TEXTURE MATCHING ... 41
  4.4. ANALYSIS AND SYNTHESIS ... 42
CHAPTER 5 EXPERIMENTAL RESULTS ... 43
CHAPTER 6 DISCUSSIONS AND FUTURE WORK ... 45
CHAPTER 7 AN APPLICATION EXAMPLE ... 46
REFERENCE ... 61