Feedback adversarial learning: Spatial feedback for improving generative adversarial networks
Journal
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Journal Volume
2019-June
Pages
1476-1485
Date Issued
2019
Author(s)
Abstract
We propose a feedback adversarial learning (FAL) framework that improves existing generative adversarial networks by leveraging spatial feedback from the discriminator. We formulate generation as a recurrent process in which the discriminator's feedback is integrated into the feedforward path of the generator. Specifically, the generator conditions on the discriminator's spatial output response and on its own previous generation to improve quality over time, allowing it to attend to and fix its previous mistakes. To utilize this feedback effectively, we propose an adaptive spatial transform layer, which learns to spatially modulate the feature maps of the previous generation using the error signal from the discriminator. We demonstrate that FAL can be easily adapted to existing adversarial learning frameworks on a wide range of tasks, including image generation, image-to-image translation, and voxel generation. © 2019 IEEE.
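The recurrent feedback loop the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the authors' architecture: `generator`, `discriminator`, and `adaptive_spatial_transform` are simplified NumPy stand-ins with fixed (not learned) weights, shown only to make the data flow concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(image):
    # Stand-in: a per-pixel "realism" score map in (0, 1).
    # In the paper this role is played by a discriminator with a
    # spatial (patch-level) output response.
    return 1.0 / (1.0 + np.exp(-image))

def adaptive_spatial_transform(features, score_map):
    # Stand-in for the adaptive spatial transform layer: the
    # discriminator's score map modulates the feature map through a
    # per-pixel scale and shift. Here the weights are fixed for
    # illustration; in the paper they are learned.
    gamma = 1.0 + 0.1 * (score_map - 0.5)   # per-pixel scale
    beta = 0.05 * (0.5 - score_map)         # per-pixel shift
    return gamma * features + beta

def generator(z, prev_image, score_map):
    # Stand-in generator: conditions on the latent z, its own previous
    # generation, and the discriminator's spatial feedback.
    features = prev_image + z
    return np.tanh(adaptive_spatial_transform(features, score_map))

# Recurrent generation: each step refines the previous output using
# the discriminator's spatial response to it.
z = rng.standard_normal((8, 8))
image = np.zeros((8, 8))
score = np.full((8, 8), 0.5)   # neutral feedback at step 0
for step in range(3):
    image = generator(z, image, score)
    score = discriminator(image)
```

Each iteration feeds the discriminator's spatial response back into the generator, so later generations can correct regions the discriminator scored as unrealistic.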
Subjects
Computational Photography; Deep Learning; Image and Video Synthesis
Other Subjects
Color photography; Computer vision; Deep learning; Recurrent neural networks; Adversarial learning; Adversarial networks; Computational photography; Feedforward paths; Generation process; Image generations; Image translation; Video synthesis; Feedback
Type
conference paper