Title: Semantics-Aware Gamma Correction for Unsupervised Low-Light Image Enhancement
Authors: Chen, Yu-Hsuan; Pan, Fu Cheng; Liao, Yu Chien; Kao, Jao Hong; Wang, Yu-Chiang
Type: conference paper
Date issued: 2023-01-01
Date available: 2023-12-21
ISBN: 9781728163277
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/638066
DOI: 10.1109/ICASSP49357.2023.10095394
Scopus: 2-s2.0-85177592251 (https://api.elsevier.com/content/abstract/scopus_id/85177592251)
Keywords: adversarial learning | deep learning | low-light image enhancement | semantic segmentation

Abstract: Low-light image enhancement aims to improve the visual quality of images captured under poor lighting conditions. While recent works have successfully developed deep learning-based solutions, many existing methods require ground-truth normal-light images during training, and most are not designed to exploit and preserve the semantic information present in low-light inputs. In this paper, we propose a semantics-aware yet unsupervised low-light enhancement model based on gamma correction. Without observing ground-truth images or semantic annotations of the low-light inputs, our model learns via the introduced semantics-aware adversarial learning scheme and its associated objectives, given a set of unpaired reference images of interest. Guided by such high-quality reference images and the inherent semantic information, our proposed method performs favorably against recent unsupervised low-light enhancement approaches.
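
The abstract describes an unpaired, adversarially trained model that enhances images through gamma correction rather than direct pixel prediction. The following is a minimal sketch of that general idea in PyTorch, under stated assumptions: the GammaEstimator and Discriminator modules, their layer sizes, and the standard GAN loss are all illustrative and are not the authors' implementation; the paper's semantics-aware guidance (e.g., from a pretrained segmentation network) is only indicated by a comment.

```
# Hypothetical sketch: unsupervised, gamma-correction-based enhancement trained
# adversarially against unpaired normal-light reference images.
# All module names, shapes, and loss choices are illustrative assumptions,
# not the released implementation of the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GammaEstimator(nn.Module):
    """Predicts a per-pixel gamma map and applies element-wise gamma correction."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        # Constrain gamma to (0, 1) so that x ** gamma brightens dark inputs in [0, 1].
        gamma = torch.sigmoid(self.net(x))
        return x.clamp(min=1e-4) ** gamma


class Discriminator(nn.Module):
    """Patch-level critic separating enhanced outputs from real normal-light references."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels * 2, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def training_step(low, reference, G, D, opt_G, opt_D):
    """One adversarial update using unpaired low-light / normal-light batches."""
    enhanced = G(low)

    # Discriminator update: real references vs. detached enhanced outputs.
    opt_D.zero_grad()
    real_logits = D(reference)
    fake_logits = D(enhanced.detach())
    d_loss = 0.5 * (
        F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
        + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    )
    d_loss.backward()
    opt_D.step()

    # Generator update: make enhanced outputs look like the reference distribution.
    opt_G.zero_grad()
    gen_logits = D(enhanced)
    g_loss = F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
    # The paper's semantics-aware objectives (guidance from a frozen segmentation
    # model) would be added to g_loss here; they are omitted in this sketch.
    g_loss.backward()
    opt_G.step()
    return g_loss.item(), d_loss.item()


if __name__ == "__main__":
    G, D = GammaEstimator(), Discriminator()
    opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)
    low = torch.rand(2, 3, 64, 64) * 0.2   # dummy dark inputs
    ref = torch.rand(2, 3, 64, 64)         # dummy unpaired normal-light references
    print(training_step(low, ref, G, D, opt_G, opt_D))
```

One appeal of predicting a bounded gamma map instead of raw pixels is that x ** gamma with gamma in (0, 1) is a monotone brightening of intensities, which keeps the enhancement well behaved even without paired supervision.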