Title: Feedback adversarial learning: Spatial feedback for improving generative adversarial networks
Authors: Huh M.; Sun S.-H.; Zhang N.; SHAO-HUA SUN
Document type: conference paper
Publication year: 2019
ISSN: 1063-6919
DOI: 10.1109/CVPR.2019.00157
Scopus EID: 2-s2.0-85078800613
Scopus URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85078800613&doi=10.1109%2fCVPR.2019.00157&partnerID=40&md5=bbcefea32e02039c49d99d2b5b9874e1
Repository URL: https://scholars.lib.ntu.edu.tw/handle/123456789/624948
Date added: 2022-11-11

Abstract: We propose a feedback adversarial learning (FAL) framework that improves existing generative adversarial networks by leveraging spatial feedback from the discriminator. We formulate the generation task as a recurrent framework in which the discriminator's feedback is integrated into the feedforward path of the generation process. Specifically, the generator conditions on the discriminator's spatial output response and on its own previous generation to improve generation quality over time, allowing the generator to attend to and fix its previous mistakes. To utilize the feedback effectively, we propose an adaptive spatial transform layer, which learns to spatially modulate feature maps using the previous generation and the error signal from the discriminator. We demonstrate that FAL can easily be adapted to existing adversarial learning frameworks on a wide range of tasks, including image generation, image-to-image translation, and voxel generation. © 2019 IEEE.

Author keywords: Computational Photography; Deep Learning; Image and Video Synthesis
Indexed keywords: Color photography; Computer vision; Deep learning; Recurrent neural networks; Adversarial learning; Adversarial networks; Computational photography; Feedforward paths; Generation process; Image generations; Image translation; Video synthesis; Feedback
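
The abstract describes a recurrent generation loop in which the discriminator's spatial response to the previous generation is fed back into the next generation step through a modulation layer. The following is a minimal PyTorch sketch of that idea, assuming a patch-wise discriminator and a feature-wise scale-and-shift modulation; all module names, channel sizes, and the exact modulation scheme are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class AdaptiveSpatialTransform(nn.Module):
    """Hypothetical layer that spatially modulates generator features
    using the previous generation and the discriminator's spatial response."""
    def __init__(self, feat_ch, img_ch=3):
        super().__init__()
        # Predict per-pixel scale (gamma) and shift (beta) from the
        # concatenated previous image and discriminator response map.
        self.to_gamma = nn.Conv2d(img_ch + 1, feat_ch, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(img_ch + 1, feat_ch, kernel_size=3, padding=1)

    def forward(self, feat, prev_img, disc_map):
        cond = torch.cat([prev_img, disc_map], dim=1)
        gamma = self.to_gamma(cond)
        beta = self.to_beta(cond)
        return feat * (1 + gamma) + beta  # spatial feature-wise modulation

class FeedbackGenerator(nn.Module):
    """Toy generator that refines its output over recurrent feedback steps."""
    def __init__(self, z_dim=64, feat_ch=32, img_ch=3, size=32):
        super().__init__()
        self.size = size
        self.fc = nn.Linear(z_dim, feat_ch * size * size)
        self.modulate = AdaptiveSpatialTransform(feat_ch, img_ch)
        self.to_img = nn.Conv2d(feat_ch, img_ch, kernel_size=3, padding=1)

    def forward(self, z, prev_img, disc_map):
        feat = self.fc(z).view(z.size(0), -1, self.size, self.size)
        feat = self.modulate(feat, prev_img, disc_map)
        return torch.tanh(self.to_img(feat))

def generate_with_feedback(gen, disc, z, steps=3, img_ch=3, size=32):
    """One recurrent refinement loop: the discriminator's spatial response on
    the previous generation is fed back into the next generation step."""
    img = torch.zeros(z.size(0), img_ch, size, size)   # initial "blank" generation
    disc_map = torch.zeros(z.size(0), 1, size, size)   # no feedback at step 0
    for _ in range(steps):
        img = gen(z, img, disc_map)
        disc_map = disc(img)                            # spatial (patch-wise) response
    return img

# Usage with a toy patch discriminator producing a per-pixel realness map.
disc = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
gen = FeedbackGenerator()
z = torch.randn(4, 64)
out = generate_with_feedback(gen, disc, z)  # shape: (4, 3, 32, 32)
```

The key design point sketched here is that the feedback path changes only the generator's input interface: the discriminator is reused unchanged as a spatial critic, so the scheme can in principle be bolted onto an existing adversarial training setup, as the abstract claims for image, image-to-image, and voxel generation tasks.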