Title: Learning to See Through Obstructions
Authors: Liu Y.-L.; Lai W.-S.; Yang M.-H.; Huang J.-B.; Chuang Y.-Y.
Type: Conference paper
Venue: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020
ISSN: 1063-6919
DOI: 10.1109/CVPR42600.2020.01422
Scopus EID: 2-s2.0-85094626363
Scopus: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85094626363&doi=10.1109%2fCVPR42600.2020.01422&partnerID=40&md5=1a4e4d90659dc46ea0a0f43e10663d39
Repository: https://scholars.lib.ntu.edu.tw/handle/123456789/581484
Record date: 2021-09-02

Abstract: We present a learning-based approach for removing unwanted obstructions, such as window reflections, fence occlusions, or raindrops, from a short sequence of images captured by a moving camera. Our method leverages the motion differences between the background and the obstructing elements to recover both layers. Specifically, we alternate between estimating dense optical flow fields of the two layers and reconstructing each layer from the flow-warped images via a deep convolutional neural network. The learning-based layer reconstruction allows us to accommodate potential errors in the flow estimation and brittle assumptions such as brightness consistency. We show that training on synthetically generated data transfers well to real images. Our results on numerous challenging scenarios of reflection and fence removal demonstrate the effectiveness of the proposed method. © 2020 IEEE.

Keywords: Convolutional neural networks; Data transfer; Deep neural networks; Fences; Optical flows; Pattern recognition; Dense optical flow; Flow estimation; Learning-based approach; Moving cameras; Potential errors; Real images; Short sequences; Multilayer neural networks
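The abstract describes an alternating scheme: estimate dense optical flow for each of the two layers (background and obstruction), warp the input frames with those flows, and reconstruct each layer from its flow-aligned image stack with a CNN. The sketch below is a minimal PyTorch illustration of that loop only; `warp`, `TinyCNN`, and `decompose` are hypothetical stand-ins, not the authors' released networks, and the untrained stub CNNs produce meaningless output (the paper trains on synthetically generated data).

```python
# Minimal sketch of the alternating two-layer decomposition described in the
# abstract. All modules here are hypothetical placeholders, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(img, flow):
    """Backward-warp img (B,3,H,W) by a dense flow field (B,2,H,W)."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device),
        torch.arange(w, device=img.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).float()       # (2,H,W), channel 0 = x
    coords = grid.unsqueeze(0) + flow                 # (B,2,H,W)
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0           # normalize to [-1, 1]
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack((gx, gy), dim=-1), align_corners=True)

class TinyCNN(nn.Module):
    """Stand-in for the paper's (much deeper) flow / reconstruction networks."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def decompose(frames, n_iters=3):
    """Alternate per-layer flow estimation and layer reconstruction.
    frames: list of T tensors (B,3,H,W) from a moving camera; returns
    background and obstruction estimates for the keyframe (frame 0)."""
    t = len(frames)
    flow_bg, flow_ob = TinyCNN(6, 2), TinyCNN(6, 2)   # per-layer flow nets
    recon_bg, recon_ob = TinyCNN(3 * t, 3), TinyCNN(3 * t, 3)
    bg = frames[0].clone()                            # crude initialization
    ob = torch.zeros_like(bg)
    for _ in range(n_iters):
        warped_bg, warped_ob = [], []
        for f in frames:
            # Estimate flow from each frame toward the current layer estimates...
            warped_bg.append(warp(f, flow_bg(torch.cat([bg, f], dim=1))))
            warped_ob.append(warp(f, flow_ob(torch.cat([ob, f], dim=1))))
        # ...then rebuild each layer from its flow-aligned image stack.
        bg = recon_bg(torch.cat(warped_bg, dim=1))
        ob = recon_ob(torch.cat(warped_ob, dim=1))
    return bg, ob

if __name__ == "__main__":
    frames = [torch.rand(1, 3, 64, 64) for _ in range(5)]
    bg, ob = decompose(frames)
    print(bg.shape, ob.shape)  # torch.Size([1, 3, 64, 64]) for each layer
```

Per the abstract, it is the learned reconstruction step that absorbs flow errors and violations of brittle assumptions such as brightness consistency; in this sketch that role falls to the `recon_bg`/`recon_ob` stubs, which a real implementation would replace with trained networks.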