Title: ScannerNet: A Deep Network for Scanner-Quality Document Images under Complex Illumination
Authors: Chih-Jou Hsu; Yu-Ting Wu; Ming-Sui Lee; Yung-Yu Chuang
Type: conference paper
Date Issued: 2022-01-01
Date Available: 2023-11-16
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/637181
Scopus ID: 2-s2.0-85174676555
Scopus URL: https://api.elsevier.com/content/abstract/scopus_id/85174676555

Abstract: Document images captured by smartphones and digital cameras are often subject to photometric distortions, including shadows, non-uniform shading, and color shift caused by the imperfect white balance of sensors. These distortions make the background and the content hard to distinguish, which significantly reduces legibility and visual quality. Although real photographs often contain a mixture of these distortions, most existing approaches to document illumination correction address only a small subset of them. This paper presents ScannerNet, a comprehensive method that eliminates complex photometric distortions using deep learning. To exploit the different characteristics of shadow and shading, our model consists of a sub-network for shadow removal followed by a sub-network for shading correction. To train our model, we also devise a data synthesis method that efficiently constructs a large-scale document dataset with a great deal of variation. Our extensive experiments demonstrate that our method significantly enhances visual quality by removing shadows and shading, preserving figure colors, and improving legibility.