https://scholars.lib.ntu.edu.tw/handle/123456789/632086
Title: Dual-MTGAN: Stochastic and deterministic motion transfer for image-to-video synthesis
Authors: Yang F.-E.; Chang J.-C.; Lee Y.-H.; Yu-Chiang Wang
Publication date: 2020
Pages: 6764-6771
Source publication: Proceedings - International Conference on Pattern Recognition
Abstract: Generating videos with content and motion variations is a challenging task in computer vision. While the recent development of GANs allows video generation from latent representations, it is not easy to produce videos with particular content or motion patterns of interest. In this paper, we propose Dual Motion Transfer GAN (Dual-MTGAN), which takes image and video data as inputs while learning disentangled content and motion representations. Our Dual-MTGAN is able to perform deterministic motion transfer and stochastic motion generation. Based on a given image, the former preserves the input content and transfers motion patterns observed from another video sequence, while the latter directly produces videos with plausible yet diverse motion patterns from the input image. The proposed model is trained in an end-to-end manner, without the need for pre-defined motion features such as pose or facial landmarks. Our quantitative and qualitative results confirm the effectiveness and robustness of our model in addressing such conditioned image-to-video tasks. © 2020 IEEE
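The abstract describes two modes built on one disentangled representation: a content code from a single image, motion codes either encoded from a driving video (deterministic transfer) or sampled from a prior (stochastic generation). This is not the authors' implementation; the following is a toy NumPy sketch assuming linear encoders/decoder, purely to illustrate how the two modes share the content pathway (all weights and dimensions here are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): image vector, content code, motion code, frames.
D_IMG, D_CONTENT, D_MOTION, T = 8, 4, 3, 5

W_c = rng.standard_normal((D_CONTENT, D_IMG))             # content encoder (toy)
W_m = rng.standard_normal((D_MOTION, D_IMG))              # motion encoder (toy)
W_g = rng.standard_normal((D_IMG, D_CONTENT + D_MOTION))  # decoder (toy)

def encode_content(image):
    """Map one image to a single content code."""
    return W_c @ image

def encode_motion(video):
    """Map a video (list of frames) to one motion code per frame."""
    return [W_m @ frame for frame in video]

def decode(content_code, motion_codes):
    """Combine the fixed content code with each motion code into a frame."""
    return [W_g @ np.concatenate([content_code, m]) for m in motion_codes]

image = rng.standard_normal(D_IMG)
driving_video = [rng.standard_normal(D_IMG) for _ in range(T)]

# Deterministic motion transfer: content from the image, motion from the video.
transferred = decode(encode_content(image), encode_motion(driving_video))

# Stochastic motion generation: motion codes sampled from a prior instead.
sampled_motion = [rng.standard_normal(D_MOTION) for _ in range(T)]
generated = decode(encode_content(image), sampled_motion)

print(len(transferred), transferred[0].shape)  # → 5 (8,)
```

In the actual model these maps are adversarially trained networks; the sketch only shows the data flow in which both output modes reuse the same content code, which is the disentanglement the paper relies on.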
URI: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85109527502&doi=10.1109%2fICPR48806.2021.9412781&partnerID=40&md5=b0d711fdeaf353d98c6756d5a97377da
ISSN: 1051-4651
DOI: 10.1109/ICPR48806.2021.9412781
Keywords: Stochastic systems; Time and motion study; Video recording; Facial landmarks; Motion representation; Motion transfer; Motion variation; Stochastic motion; Video generation; Video sequences; Video synthesis; Pattern recognition
Appears in: Department of Electrical Engineering
Items in this institutional repository are protected by copyright, with all rights reserved, unless otherwise indicated in their licensing terms.