Department: 國立臺灣大學資訊工程學系 (Department of Computer Science and Information Engineering, National Taiwan University)
Authors: Yang, Tzong-Jer; Huang, Chien-Feng; Hung, Cheng-Sheng; Ouhyoung, Ming
Date: 1999 (record deposited 2006-09-27; last revised 2018-07-05)
Title: VR-Face: An Operator Assisted Real-Time Face Tracking System
Handle: http://ntur.lib.ntu.edu.tw//handle/246246/20060927122922398857
Full text: http://ntur.lib.ntu.edu.tw/bitstream/246246/20060927122922398857/1/vrface.pdf
Format: application/pdf, 214609 bytes
Record language: zh-TW

Abstract: In this paper, a model-based face analysis and synthesis system is presented. The system, named VR-Face, tracks and estimates a user's 3D head motion in real time and renders the estimated motion with a pre-rendered, 3D texture-mapped head model. Initially, the user marks the two eyes and one nostril on the screen to initialize tracking; as a result, the background can be complex and even dynamic. When the system loses track of the head motion, it prompts the user with a box indicating the original face position so that tracking can be recovered. The overall performance, including both analysis and synthesis, is above 25 frames/sec on a PC with a 400 MHz Pentium II-MMX CPU. The system has been demonstrated under different lighting conditions with several low-cost PC cameras.
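
The abstract describes tracking a few operator-selected facial features (two eyes, one nostril) from frame to frame. A common way to realize such point tracking is template matching by normalized cross-correlation; the sketch below illustrates that general technique only, with illustrative names, and is not taken from the paper (which does not specify its matching method here).

```python
import numpy as np

def ncc_match(image, template):
    """Locate `template` in `image` by normalized cross-correlation.

    Returns the (row, col) of the best-matching top-left corner.
    This is a brute-force sketch; a real-time tracker like the one
    described would restrict the search to a small window around the
    feature's position in the previous frame.
    """
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()          # zero-mean template
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()               # zero-mean window
            denom = np.sqrt((wz * wz).sum()) * t_norm
            if denom == 0:
                continue
            score = (wz * t).sum() / denom  # correlation in [-1, 1]
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Synthetic check: cut a patch from a random image and recover its position.
rng = np.random.default_rng(0)
img = rng.random((40, 40))
tpl = img[12:20, 25:33].copy()
print(ncc_match(img, tpl))  # -> (12, 25)
```

Tracking three such points per frame also yields enough constraints to estimate coarse head pose (e.g. roll from the angle of the inter-eye line), which is consistent with the 3D head-motion estimation the abstract claims.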