Multiple Text Style Transfer by using Word-level Conditional Generative Adversarial Network with Two-Phase Training
Journal
EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference
Pages
3579-3584
Date Issued
2019
Author(s)
Abstract
The objective of non-parallel text style transfer is to alter specific attributes (e.g., sentiment, mood, tense, politeness) of a given text while preserving unrelated content. Adversarial training is a popular method for ensuring that transferred sentences carry the desired target styles. However, previous works often suffer from the content-leaking problem. In this paper, we propose a new adversarial training model with a word-level conditional architecture and a two-phase training procedure. By applying a style-related conditional architecture before generating each word, our model maintains style-unrelated words while changing the others. By separating the training procedure into a reconstruction phase and a transfer phase, our model balances the reconstruction and adversarial losses. We test our model on polarity sentiment transfer and multiple-attribute transfer tasks. The empirical results show that our model achieves comparable evaluation scores in both transfer accuracy and fluency but significantly outperforms other state-of-the-art models in content compatibility on three real-world datasets.
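To make the two-phase idea in the abstract concrete, below is a minimal sketch in PyTorch, assuming a toy setup: the bag-of-words generator, the style classifier used as the adversary, the dummy data, and the phase lengths are all hypothetical illustrations, not the authors' actual architecture or word-level conditioning mechanism. Phase 1 trains only the reconstruction loss; phase 2 adds the adversarial style loss so transferred outputs are pushed toward the target style.

import torch
import torch.nn as nn

# Hypothetical toy "generator" and style classifier on bag-of-words vectors.
vocab_size, style_dim, n_styles = 100, 8, 2
generator = nn.Sequential(nn.Linear(vocab_size + style_dim, 64), nn.ReLU(), nn.Linear(64, vocab_size))
discriminator = nn.Sequential(nn.Linear(vocab_size, 32), nn.ReLU(), nn.Linear(32, n_styles))
style_emb = nn.Embedding(n_styles, style_dim)

g_opt = torch.optim.Adam(list(generator.parameters()) + list(style_emb.parameters()), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
recon_loss = nn.MSELoss()
adv_loss = nn.CrossEntropyLoss()

x = torch.rand(16, vocab_size)                  # dummy sentence representations
src_style = torch.zeros(16, dtype=torch.long)   # dummy source-style labels
tgt_style = torch.ones(16, dtype=torch.long)    # dummy target-style labels

# Phase 1: reconstruction only -- rebuild the input conditioned on its own style.
for step in range(100):
    g_opt.zero_grad()
    rec = generator(torch.cat([x, style_emb(src_style)], dim=1))
    loss = recon_loss(rec, x)
    loss.backward()
    g_opt.step()

# Phase 2: transfer -- keep the reconstruction term and add the adversarial style term.
for step in range(100):
    # Train the style classifier on real sentences with their true style labels.
    d_opt.zero_grad()
    d_loss = adv_loss(discriminator(x), src_style)
    d_loss.backward()
    d_opt.step()

    # Train the generator to both reconstruct and fool the classifier toward the target style.
    g_opt.zero_grad()
    rec = generator(torch.cat([x, style_emb(src_style)], dim=1))
    transferred = generator(torch.cat([x, style_emb(tgt_style)], dim=1))
    g_loss = recon_loss(rec, x) + adv_loss(discriminator(transferred), tgt_style)
    g_loss.backward()
    g_opt.step()

Separating the two phases, as the abstract describes, lets the reconstruction objective stabilize before the adversarial signal is introduced, which is the balancing effect the paper attributes to its two-phase procedure.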
Other Subjects
Network architecture; Adversarial networks; Multiple attributes; Parallel text; Real-world datasets; State of the art; Training model; Training procedures; Word level; Natural language processing systems
Type
conference paper