Title: | Learn from the past – sequentially one-to-one video deblurring network |
Authors: | Liang, C.-H.; Su, H.-T.; Hsu, W. H. (Winston Hsu) |
Keywords: | Cameras; Camera shake; Computational costs; Essential features; Hand-held cameras; Object motion; Spatio-temporal; State of the art; Temporal information; Recurrent neural networks |
Publication Date: | 2021 | Volume: | 78 |
Source Publication: | Journal of Visual Communication and Image Representation |
Abstract: | With the growing availability of hand-held cameras in recent years, more and more images and videos are captured at any time and in any place. However, they often suffer from undesirable blur due to camera shake or object motion in the scene. Several modern video deblurring methods have recently been proposed and achieve impressive performance, yet they remain unsuitable for practical applications because of their high computational cost or their reliance on future frames as input. To address these issues, we propose a sequentially one-to-one video deblurring network (SOON) that deblurs effectively without any future information. It transfers both spatial and temporal information to the next frame through a recurrent architecture. In addition, we design a novel Spatio-Temporal Attention module to guide the network toward the meaningful and essential features from the past. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art deblurring methods, both quantitatively and qualitatively, on various challenging real-world deblurring datasets. Moreover, as our method deblurs in an online manner and is potentially real-time, it is better suited for practical applications. © 2021 Elsevier Inc. |
URI: | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85107621864&doi=10.1016%2fj.jvcir.2021.103159&partnerID=40&md5=a3eee3306bc4a1a2cc1d8a8bf7194cb5 https://scholars.lib.ntu.edu.tw/handle/123456789/581472 |
ISSN: | 1047-3203 | DOI: | 10.1016/j.jvcir.2021.103159 |
Appears in Collections: | Department of Computer Science and Information Engineering |
Items in the IR system are protected by copyright, with all rights reserved, unless otherwise indicated by their individual copyright terms.
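The abstract above describes a strictly online, recurrent design: each blurry frame is restored using only the current input and a hidden state carried forward from past frames, with a spatio-temporal attention module re-weighting that past information. The sketch below is a minimal illustration of this general idea in PyTorch; all module names, channel sizes, and the gating formulation are hypothetical assumptions for illustration and do not reproduce the authors' SOON implementation.

```python
# Minimal sketch of a per-frame (one-to-one), online recurrent deblurring loop
# with a spatio-temporal attention gate. Assumes PyTorch; every layer choice
# here is a placeholder, not the architecture proposed in the paper.
import torch
import torch.nn as nn


class SpatioTemporalAttention(nn.Module):
    """Re-weights the hidden state (past information) before it is fused
    with features of the current blurry frame."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, current_feat, hidden):
        attn = self.gate(torch.cat([current_feat, hidden], dim=1))
        return attn * hidden  # emphasize useful past features


class RecurrentDeblurCell(nn.Module):
    """One deblurring step: encode the current frame, attend over the past
    hidden state, decode a restored frame, and update the hidden state."""
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.attention = SpatioTemporalAttention(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.decoder = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, blurry_frame, hidden):
        feat = torch.relu(self.encoder(blurry_frame))
        past = self.attention(feat, hidden)
        fused = torch.relu(self.fuse(torch.cat([feat, past], dim=1)))
        restored = blurry_frame + self.decoder(fused)  # residual prediction
        return restored, fused  # fused features become the next hidden state


if __name__ == "__main__":
    cell = RecurrentDeblurCell(channels=32)
    video = torch.rand(10, 1, 3, 64, 64)   # 10 blurry frames (T, N, C, H, W)
    hidden = torch.zeros(1, 32, 64, 64)    # no future frames are ever used
    for frame in video:                    # strictly past-to-present, online
        restored, hidden = cell(frame, hidden)
    print(restored.shape)                  # torch.Size([1, 3, 64, 64])
```

Because the loop consumes frames in order and carries only a single hidden state, it matches the abstract's claim of online operation: latency per frame is constant and no future information is required.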