Title: Video Question Generation via Semantic Rich Cross-Modal Self-Attention Networks Learning
Authors: Wang, Y.-S.; Su, H.-T.; Chang, C.-H.; Liu, Z.-Y.; Hsu, Winston
Publication type: conference paper
Date issued: 2020
Date available: 2021-05-05
ISSN: 1520-6149
DOI: 10.1109/ICASSP40776.2020.9053476
Scopus ID: 2-s2.0-85089224063
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/559297
Scopus record: https://www.scopus.com/inward/record.url?eid=2-s2.0-85089224063&partnerID=40&md5=bf4cba337e99a6b850d459203fad34a1
Author keywords: Cross-Modal Attention; Video Question Generation
Indexed keywords: Semantic Web; Semantics; Speech communication; Cross-modal; Diverse features; Multi-modal; Networks learning; Novel task; Temporal relation; Video clips; Video frame; Audio signal processing

Abstract: We introduce a novel task, Video Question Generation (Video QG). A Video QG model automatically generates questions given a video clip and its corresponding dialogues. Video QG requires a range of skills: sentence comprehension, temporal relations, the interplay between vision and language, and the ability to ask meaningful questions. To address this, we propose a novel semantic-rich cross-modal self-attention (SR-CMSA) network to aggregate multi-modal and diverse features. More precisely, we enhance the semantics of video frames by integrating object-level information, and we jointly model cross-modal attention for the video question generation task. Our proposed model markedly improves the baseline from 7.58 to 14.48 in BLEU-4 score on the TVQA dataset. Above all, we pave a path toward understanding challenging video input, and our detailed analysis of question diversity opens avenues for future investigation. © 2020 IEEE.
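The abstract does not spell out the SR-CMSA formulation, but the cross-modal attention it builds on follows a standard pattern: features from one modality (here, video frames) act as queries over keys and values projected from the other modality (dialogue tokens). Below is a minimal NumPy sketch of that generic pattern only; the function name, the random stand-in projection weights, and the feature dimensions are illustrative assumptions, not the paper's implementation.

import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(video_feats, text_feats, d_k=64, seed=0):
    # video_feats: (T_video, d_video) frame/object features, used as queries
    # text_feats:  (T_text, d_text) dialogue token features, used as keys/values
    rng = np.random.default_rng(seed)
    d_video, d_text = video_feats.shape[1], text_feats.shape[1]
    # random projections stand in for learned weight matrices (hypothetical)
    W_q = rng.normal(scale=d_video ** -0.5, size=(d_video, d_k))
    W_k = rng.normal(scale=d_text ** -0.5, size=(d_text, d_k))
    W_v = rng.normal(scale=d_text ** -0.5, size=(d_text, d_k))
    Q, K, V = video_feats @ W_q, text_feats @ W_k, text_feats @ W_v
    # (T_video, T_text) alignment weights between frames and dialogue tokens
    attn = softmax(Q @ K.T / np.sqrt(d_k))
    # each frame is re-expressed as a weighted sum of dialogue features
    return attn @ V

# toy usage: 8 frame features (2048-d) attend over 20 subtitle token features (300-d)
fused = cross_modal_attention(np.random.rand(8, 2048), np.random.rand(20, 300))
print(fused.shape)  # (8, 64)

In a full model, such text-conditioned video representations would feed a decoder that generates the question; the paper additionally enriches the frame features with object-level semantics before attention, a step omitted from this sketch.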