Title: An End-to-end Learning-based Approach to 3D Novel View Style Transfer
Authors: Kai-Cheng Chang; Yi-Ping Hung; Chu-Song Chen
Type: conference paper
Date issued: 2022-01-01
Date available: 2023-08-01
ISBN: 9781665457255
DOI: 10.1109/AIVR56993.2022.00009
Scopus ID: 2-s2.0-85147841389
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/634360
Scopus URL: https://api.elsevier.com/content/abstract/scopus_id/85147841389
Keywords: 3D reconstruction | Novel View Synthesis | Stereo Video Generation | Style Transfer | Virtual Reality

Abstract: 3D novel view style transfer is a rising research topic. Recently developed methods aim to build globally optimized scene representations and stylize the scene directly. However, these methods are time-consuming because they require globally consistent optimization or rendering-field reconstruction. In this paper, we introduce an end-to-end learning framework for stylized novel view synthesis that speeds up 3D style transfer by applying learning-based structure-from-motion (SfM) approaches. Experimental results show that our method achieves visual effects comparable to those of the original style transfer module with higher efficiency.