|Automated video editing based on learned styles using LSTM-GAN
|GAN; LSTM; video editing
|Proceedings of the ACM Symposium on Applied Computing
Experienced video editors use various editing techniques, including camera movements, shot types, and shot compositions, to create specific video semantics that deliver messages to viewers. In video production, the content of the video is essential, but so is the way it is composed. The goal of this work is to train a model that learns how to edit video in a way that meets videography requirements. This work proposes a deep generative model, in which both the generator and the discriminator are unidirectional LSTM networks, to generate sequences of shot transitions for video editing. The proposed model learns different types of editing transitions from edited video clips: one set consists of stage performances from Korean music programs, and the other from Chinese music programs. By combining different types of shots and camera movements, the proposed AI video editor brings varied viewing experiences to viewers. The quality of the generated shot sequences is evaluated with three metrics: creativity, inheritance, and diversity. The results show that the synthetic sequences generated by the LSTM-GAN are of higher quality than those generated by the baseline models (a Markov chain and an LSTM), while ensuring creativity, inheritance, and diversity at the same time. © 2022 ACM.
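The abstract compares the LSTM-GAN against a Markov-chain baseline that also generates shot-transition sequences. As a minimal sketch of such a baseline (the paper's actual shot vocabulary and implementation are not given in the abstract; the shot labels and function names below are illustrative assumptions), a first-order Markov chain can be fit on edited clips and sampled to produce synthetic sequences:

```python
import random

# Hypothetical shot/transition vocabulary; the paper's real label set is
# not stated in the abstract, so these names are illustrative only.
SHOTS = ["close_up", "medium", "long", "pan", "zoom"]

def learn_transitions(sequences):
    """Estimate first-order transition probabilities from edited clips."""
    counts = {s: {t: 0 for t in SHOTS} for s in SHOTS}
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for s, row in counts.items():
        total = sum(row.values())
        # Unseen states fall back to a uniform distribution over shots.
        probs[s] = {t: (c / total if total else 1.0 / len(SHOTS))
                    for t, c in row.items()}
    return probs

def generate(probs, start, length, seed=0):
    """Sample a synthetic shot sequence from the learned chain."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        row = probs[seq[-1]]
        seq.append(rng.choices(list(row), weights=list(row.values()))[0])
    return seq
```

In the paper's setup, the LSTM-GAN replaces this memoryless sampler with a generator that conditions on the whole preceding sequence, which is what the creativity, inheritance, and diversity metrics compare.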
|Cameras; Long short-term memory; Semantics; User interfaces; Video recording; Automated video editing; Camera's movements; GAN; LSTM; Music program; Production process; Video editing; Video editor; Video production; Video semantics; Video signal processing