Title: Theme Transformer: Symbolic Music Generation With Theme-Conditioned Transformer
Authors: Shih, Yi-Jen; Wu, Shih-Lun; Zalkow, Frank; Müller, Meinard; Yang, Yi-Hsuan
Type: journal article
Date issued: 2023-01-01
Date available: 2023-10-06
ISSN: 1520-9210 (IEEE Transactions on Multimedia)
DOI: 10.1109/TMM.2022.3161851
Scopus ID: 2-s2.0-85127081446
Scopus URL: https://api.elsevier.com/content/abstract/scopus_id/85127081446
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/635980
Keywords: Automatic symbolic music generation | contrastive learning | parallel attention | positional encoding | theme retrieval | theme-conditioned generation | transformers

Abstract: Attention-based Transformer models have been increasingly employed for automatic music generation. To condition the generation process of such a model on a user-specified sequence, a popular approach is to take that conditioning sequence as a priming sequence and ask a Transformer decoder to generate a continuation. However, this prompt-based conditioning cannot guarantee that the conditioning sequence will be developed, or even simply repeated, in the generated continuation. In this paper, we propose an alternative conditioning approach, called theme-based conditioning, that explicitly trains the Transformer to treat the conditioning sequence as thematic material that has to manifest itself multiple times in the generated result. This is achieved with two main technical contributions. First, we propose a deep learning-based approach that uses contrastive representation learning and clustering to automatically retrieve thematic materials from music pieces in the training data. Second, we propose a novel gated parallel attention module, used in a sequence-to-sequence (seq2seq) encoder/decoder architecture, to more effectively account for a given conditioning thematic material in the generation process of the Transformer decoder. We report objective and subjective evaluations of variants of the proposed Theme Transformer and the conventional prompt-based baseline, showing that our best model can generate, to some extent, polyphonic pop piano music with repetition and plausible variations of a given condition.
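To make the first contribution concrete, below is a minimal sketch of how contrastive representation learning and clustering could be combined to retrieve thematic material, assuming an NT-Xent-style loss over fragment embeddings and DBSCAN clustering; the class and function names (FragmentEncoder, nt_xent_loss, retrieve_theme) and all hyperparameters are illustrative, not the paper's actual retrieval pipeline.

```python
# Hypothetical sketch: learn fragment embeddings so that occurrences of the
# same musical fragment land close together, then cluster the fragments of a
# piece and treat the most repeated cluster as its thematic material.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import DBSCAN

class FragmentEncoder(nn.Module):
    """Embeds a fragment of symbolic-music tokens into a unit-norm vector."""
    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) integer token ids
        _, h = self.gru(self.emb(tokens))      # h: (1, batch, dim)
        return F.normalize(h.squeeze(0), dim=-1)

def nt_xent_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                 tau: float = 0.1) -> torch.Tensor:
    """Contrastive loss: z_a[i] and z_b[i] embed two occurrences of the same
    fragment (positives); every other pairing in the batch is a negative."""
    logits = z_a @ z_b.t() / tau                            # (batch, batch)
    targets = torch.arange(z_a.size(0), device=z_a.device)  # diagonal positives
    return F.cross_entropy(logits, targets)

def retrieve_theme(fragment_embeddings: torch.Tensor) -> list:
    """Clusters one piece's fragments; returns indices of the largest
    (i.e., most repeated) cluster as the candidate theme occurrences."""
    labels = DBSCAN(eps=0.3, min_samples=2, metric="cosine").fit_predict(
        fragment_embeddings.detach().cpu().numpy())
    valid = labels[labels >= 0]                 # -1 marks unclustered noise
    if valid.size == 0:
        return []                               # no repeated material found
    theme_label = np.bincount(valid).argmax()   # largest cluster
    return [i for i, l in enumerate(labels) if l == theme_label]
```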
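For the second contribution, the following is a minimal sketch of one possible gated parallel attention decoder layer: causal self-attention over the generated sequence and cross-attention over the encoded theme run in parallel, and a learned sigmoid gate mixes the two streams per position. The exact formulation in the paper may differ; this layer and its usage are assumptions for illustration.

```python
import torch
import torch.nn as nn

class GatedParallelAttentionLayer(nn.Module):
    """Decoder layer mixing two parallel attention streams with a gate:
    self-attention over decoder states and cross-attention over the theme."""
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)     # per-position mixing weights
        self.norm1 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(),
                                nn.Linear(4 * dim, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x, theme_memory, causal_mask):
        # x: (batch, tgt_len, dim) decoder states
        # theme_memory: (batch, theme_len, dim) encoder output for the theme
        a_self, _ = self.self_attn(x, x, x, attn_mask=causal_mask)
        a_cross, _ = self.cross_attn(x, theme_memory, theme_memory)
        g = torch.sigmoid(self.gate(torch.cat([a_self, a_cross], dim=-1)))
        mixed = g * a_self + (1.0 - g) * a_cross   # gated parallel mixing
        x = self.norm1(x + mixed)
        return self.norm2(x + self.ff(x))

# Usage: the causal mask keeps the self-attention stream autoregressive,
# while the gate decides per position how strongly to draw on the theme.
layer = GatedParallelAttentionLayer()
x = torch.randn(2, 16, 512)            # decoder states for 16 generated tokens
theme = torch.randn(2, 8, 512)         # encoded theme of 8 positions
mask = torch.triu(torch.full((16, 16), float("-inf")), diagonal=1)
out = layer(x, theme, mask)            # (2, 16, 512)
```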