Title: MuseControlLite: Multifunctional Music Generation with Lightweight Conditioners
Authors: Tsai, Fang-Duo; Wu, Shih-Lun; Lee, Weijaw; Yang, Sheng-Ping; Chen, Bo-Rui; Cheng, Hao-Chung; Yang, Yi-Hsuan
Date issued: 2025-07
Date accessioned/available: 2026-01-27
Type: conference paper
Scopus ID: 2-s2.0-105023822135
Scopus record: https://www.scopus.com/record/display.uri?eid=2-s2.0-105023822135&origin=resultslist
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/735611

Abstract: We propose MuseControlLite, a lightweight mechanism designed to fine-tune text-to-music generation models for precise conditioning on various time-varying musical attributes and reference audio signals. The key finding is that positional embeddings, which have seldom been used by text-to-music generation models in the conditioner for text conditions, are critical when the condition of interest is a function of time. Using melody control as an example, our experiments show that simply adding rotary positional embeddings to the decoupled cross-attention layers increases control accuracy from 56.6% to 61.1%, while requiring 6.75 times fewer trainable parameters than state-of-the-art fine-tuning mechanisms, using the same pre-trained diffusion Transformer model of Stable Audio Open. We evaluate various forms of musical attribute control, audio inpainting, and audio outpainting, demonstrating improved controllability over MusicGen-Large and Stable Audio Open ControlNet at a significantly lower fine-tuning cost, with only 85M trainable parameters. Source code, model checkpoints, and demo examples are available at https://MuseControlLite.github.io/web/.
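
The abstract's central idea is adding rotary positional embeddings (RoPE) to decoupled cross-attention so that a time-varying condition (e.g., a melody contour) is attended to position-aware. The sketch below is a minimal illustration of that idea in PyTorch, not the authors' code: the module structure, names (DecoupledCrossAttention, to_k_cond, to_v_cond), and dimensions are all hypothetical assumptions, and only the summing of a frozen text-attention branch with a trainable RoPE-equipped condition branch follows the technique named in the abstract.

```python
# Illustrative sketch only; all names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def rope(x: torch.Tensor) -> torch.Tensor:
    """Apply rotary positional embeddings along the sequence axis.
    x: (batch, heads, seq_len, head_dim), head_dim assumed even."""
    _, _, n, d = x.shape
    half = d // 2
    freqs = 1.0 / (10000 ** (torch.arange(half, device=x.device) / half))
    angles = torch.arange(n, device=x.device)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    # Rotate each (x1, x2) pair by a position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

class DecoupledCrossAttention(nn.Module):
    """Hypothetical decoupled cross-attention: a frozen text branch plus a
    small trainable branch for a time-varying condition, with RoPE applied
    to the queries and condition keys so attention sees relative position."""
    def __init__(self, dim: int, cond_dim: int, heads: int = 8):
        super().__init__()
        self.heads, self.head_dim = heads, dim // heads
        # Only the condition projections are trainable (lightweight adapter).
        self.to_k_cond = nn.Linear(cond_dim, dim, bias=False)
        self.to_v_cond = nn.Linear(cond_dim, dim, bias=False)

    def _split(self, x: torch.Tensor) -> torch.Tensor:
        b, n, _ = x.shape
        return x.view(b, n, self.heads, self.head_dim).transpose(1, 2)

    def forward(self, q, text_k, text_v, cond):
        # q, text_k, text_v: projections from the frozen pre-trained backbone.
        q = self._split(q)
        k_c = self._split(self.to_k_cond(cond))
        v_c = self._split(self.to_v_cond(cond))
        # Frozen text branch: no positional encoding, as in the base model.
        text_out = F.scaled_dot_product_attention(
            q, self._split(text_k), self._split(text_v))
        # Condition branch: RoPE on queries and keys, since the condition
        # is a function of time and alignment to positions matters.
        cond_out = F.scaled_dot_product_attention(rope(q), rope(k_c), v_c)
        out = text_out + cond_out  # decoupled: branch outputs are summed
        b, h, n, d = out.shape
        return out.transpose(1, 2).reshape(b, n, h * d)
```

Under this reading, the only trained parameters are the condition key/value projections (and whatever feature extractor produces `cond`), which is consistent with the abstract's claim of a small trainable-parameter budget relative to full fine-tuning or ControlNet-style copies of the backbone.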