Deep Exposure Fusion with Deghosting via Homography Estimation and Attention Learning
Journal
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Journal Volume
2020-May
Pages
1464-1468
Date Issued
2020
Author(s)
Chen, S.-Y.
Abstract
Modern cameras have limited dynamic ranges and often produce images with saturated or dark regions from a single exposure. Although the problem can be addressed by taking multiple images with different exposures, exposure fusion methods must deal with ghosting artifacts and detail loss caused by camera motion or moving objects. This paper proposes a deep network for exposure fusion. To reduce the potential ghosting problem, our network takes only two images, an underexposed image and an overexposed one. Our network integrates homography estimation for compensating camera motion, an attention mechanism for correcting remaining misalignment and moving pixels, and adversarial learning for alleviating other remaining artifacts. Experiments on real-world photos taken with handheld mobile phones show that the proposed method can generate high-quality images with faithful detail and vivid color rendition in both dark and bright areas. © 2020 IEEE.
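The abstract's first alignment step, compensating camera motion with a homography, can be illustrated outside the paper's learned pipeline. The sketch below is not the authors' network; it is a minimal classical Direct Linear Transform (DLT) in NumPy, assuming exact point correspondences between the two exposures, to show what a homography-based warp recovers.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src (homogeneous)
    via the Direct Linear Transform. src/dst: (N, 2) arrays, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # H (flattened) is the null vector of A: the right singular
    # vector for the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale

def warp_points(H, pts):
    """Apply homography H to (N, 2) points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Recover a known projective warp from four correspondences
# (hypothetical values, for illustration only).
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
H_true = np.array([[1.0,   0.1,  2.0],
                   [0.05,  1.0, -1.0],
                   [0.001, 0.002, 1.0]])
dst = warp_points(H_true, src)
H_est = estimate_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))
```

In the paper's setting the correspondences come from the network rather than hand-labeled points, and the remaining misalignment after this global warp is what the attention mechanism handles.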
Subjects
adversarial learning; attention learning; deghosting; Exposure fusion; homography estimation
Other Subjects
Cameras; Deep learning; Speech communication; Adversarial learning; Attention mechanisms; Color rendition; Exposure fusions; Ghosting artifacts; High quality images; Homography estimations; Limited dynamic ranges; Audio signal processing
Type
conference paper
