Title: Video Question Generation via Semantic Rich Cross-Modal Self-Attention Networks Learning
Authors: Wang, Y.-S.; Su, H.-T.; Chang, C.-H.; Liu, Z.-Y.; Winston Hsu
Keywords: Cross-Modal Attention; Video Question Generation
Date: 2020
Volume: 2020-May
Pages: 2423-2427
Source: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Abstract: We introduce a novel task, Video Question Generation (Video QG). A Video QG model automatically generates questions given a video clip and its corresponding dialogues. Video QG requires a range of skills: sentence comprehension, temporal reasoning, the interplay between vision and language, and the ability to ask meaningful questions. To address this, we propose a novel semantic rich cross-modal self-attention (SR-CMSA) network to aggregate multi-modal and diverse features. More precisely, we enrich the semantics of the video frames by integrating object-level information, and we jointly consider cross-modal attention for the video question generation task. Excitingly, our proposed model remarkably improves the baseline from 7.58 to 14.48 in BLEU-4 score on the TVQA dataset. Above all, we arguably pave a novel path toward understanding challenging video input, and we provide a detailed analysis in terms of diversity, which opens avenues for future investigation. © 2020 IEEE.
URI: https://www.scopus.com/inward/record.url?eid=2-s2.0-85089224063&partnerID=40&md5=bf4cba337e99a6b850d459203fad34a1
https://scholars.lib.ntu.edu.tw/handle/123456789/559297
ISSN: 1520-6149
DOI: 10.1109/ICASSP40776.2020.9053476
SDG/Keywords: Semantic Web; Semantics; Speech communication; Cross-modal; Diverse features; Multi-modal; Networks learning; Novel task; Temporal relation; Video clips; Video frame; Audio signal processing
Appears in Collections: Department of Computer Science and Information Engineering
Items in the institutional repository are protected by copyright, with all rights reserved, unless otherwise indicated in their individual license terms.