https://scholars.lib.ntu.edu.tw/handle/123456789/607365
Title: | 3D Video Stabilization with Depth Estimation by CNN-based Optimization | Authors: | Lee, Y.-C.; Tseng, K.-W.; Chen, Y.-T.; Chen, C.-C.; Yi-Ping Hung; Chu-Song Chen |
Keywords: | Cameras;Computer vision;Knowledge management;Lead compounds;Motion estimation;Stabilization;3D frames;3D video stabilizations;Depth Estimation;Feature-tracking;Learning-based methods;Local features extractions;Optimisations;Quality enhancement;Video stabilization;Visual qualities;Deep neural networks | Issue Date: | 2021 | Start page/Pages: | 10616-10625 | Source: | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Abstract: | Video stabilization is an essential component of visual quality enhancement. Early methods rely on feature tracking to recover either 2D or 3D frame motion, and thus suffer from unreliable local feature extraction and tracking in shaky videos. Recently, learning-based methods have sought to infer frame transformations from high-level information via deep neural networks to overcome this robustness issue. Nevertheless, to the best of our knowledge, no learning-based method has yet leveraged 3D cues for transformation inference; hence such methods produce artifacts in scenes with complex depth. In this paper, we propose Deep3D Stabilizer, a novel 3D depth-based learning method for video stabilization. We take advantage of the recent self-supervised framework for jointly learning depth and camera ego-motion estimation on raw videos. Our approach requires no pre-training data but stabilizes the input video directly via 3D reconstruction. The rectification stage incorporates the 3D scene depth and camera motion to smooth the camera trajectory and synthesize the stabilized video. Unlike most one-size-fits-all learning-based methods, our smoothing algorithm allows users to manipulate the stability of a video efficiently. Experimental results on challenging benchmarks show that the proposed solution consistently outperforms state-of-the-art methods on almost all motion categories. © 2021 IEEE. |
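The abstract describes smoothing the recovered camera trajectory with a user-controllable stability level. The record does not specify the smoothing algorithm; as a rough, hypothetical sketch of the general idea, assuming Gaussian low-pass filtering of per-frame camera positions (the function name, `sigma` parameter, and data are illustrative, not the authors' implementation):

```python
import numpy as np

def smooth_trajectory(positions, sigma=5.0):
    """Gaussian-smooth a sequence of camera positions of shape (T, 3).

    A larger sigma gives a more heavily smoothed (more stable) path,
    loosely mirroring the user-adjustable stability the abstract mentions.
    """
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # Pad by edge replication so the trajectory endpoints are not pulled inward.
    padded = np.pad(positions, ((radius, radius), (0, 0)), mode="edge")
    smoothed = np.empty_like(positions, dtype=float)
    for axis in range(positions.shape[1]):
        smoothed[:, axis] = np.convolve(padded[:, axis], kernel, mode="valid")
    return smoothed

# Synthetic example: jittery forward motion along the z axis.
rng = np.random.default_rng(0)
raw = np.cumsum(rng.normal(0.0, 0.05, size=(100, 3)), axis=0)
raw[:, 2] += np.linspace(0.0, 10.0, 100)
stable = smooth_trajectory(raw, sigma=5.0)
```

In practice the paper operates on full 6-DoF poses recovered from the self-supervised depth/ego-motion network and re-renders frames using the 3D reconstruction; this fragment only illustrates the low-pass-filtering principle on positions.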
URI: | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85123208686&doi=10.1109%2fCVPR46437.2021.01048&partnerID=40&md5=94e064944b709a926f6a30641f05d654 https://scholars.lib.ntu.edu.tw/handle/123456789/607365 |
ISSN: | 10636919 | DOI: | 10.1109/CVPR46437.2021.01048 |
Appears in Collections: | Department of Electrical Engineering (電機工程學系)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.