Diverse Audio-to-Image Generation via Semantics and Feature Consistency
Journal
2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2020 - Proceedings
Pages
1188-1192
Date Issued
2020
Author(s)
Abstract
Humans are capable of imagining scene images when hearing ambient sounds. Audio-to-image synthesis is therefore a challenging yet practical topic for both natural language comprehension and image content understanding. In this paper, we propose an audio-to-image generation network based on conditional generative adversarial networks. Specifically, we train such generative models with the proposed feature consistency and conditional adversarial losses, so that diverse image outputs with satisfactory visual quality can be synthesized from a single audio input. Experimental results on sports audio/visual data verify the effectiveness and practicality of the proposed method over state-of-the-art approaches to audio-to-image synthesis. © 2020 APSIPA.
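The abstract describes training a conditional GAN with a combined objective: a conditional adversarial loss plus a feature-consistency term. The exact formulation is not given in the abstract, so the following NumPy sketch is only an illustrative assumption: a standard sigmoid binary cross-entropy adversarial term, plus an L1 penalty between real and generated feature vectors, with a hypothetical weight `lam`.

```python
import numpy as np

def bce_with_logits(logits, target):
    # Numerically stable binary cross-entropy on raw (pre-sigmoid) logits.
    return np.mean(np.maximum(logits, 0) - logits * target
                   + np.log1p(np.exp(-np.abs(logits))))

def generator_loss(d_logits_fake, feat_real, feat_fake, lam=10.0):
    """Hypothetical generator objective: conditional adversarial loss
    plus an L1 feature-consistency term (weight `lam` is assumed)."""
    adv = bce_with_logits(d_logits_fake, np.ones_like(d_logits_fake))  # fool the discriminator
    fc = lam * np.mean(np.abs(feat_real - feat_fake))                  # feature consistency
    return adv + fc
```

In a real cGAN, `d_logits_fake` would come from a discriminator conditioned on the audio embedding, and `feat_real`/`feat_fake` from a feature extractor applied to real and generated images; here they are placeholder arrays.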
Subjects
Audition; Semantics; Adversarial networks; Feature consistency; Generative model; Image generations; Image synthesis; Natural languages; State-of-the-art approach; Visual qualities; Image processing
Type
conference paper