https://scholars.lib.ntu.edu.tw/handle/123456789/558981
Title: | DyKgChat: Benchmarking dialogue generation grounding on dynamic knowledge graphs | Authors: | Tuan, Y.-L.; Yun-Nung Chen; Hung-Yi Lee |
Date Issued: | 2020 | Pages: | 1855-1865 | Source: | EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference | Abstract: | Data-driven, knowledge-grounded neural conversation models are capable of generating more informative responses. However, these models have not yet demonstrated that they can zero-shot adapt to updated, unseen knowledge graphs. This paper proposes a new task about how to apply dynamic knowledge graphs in neural conversation models and presents a novel TV series conversation corpus (DyKgChat) for the task. Our new task and corpus aid in understanding the influence of dynamic knowledge graphs on response generation. Also, we propose a preliminary model that selects an output from two networks at each time step: a sequence-to-sequence model (Seq2Seq) and a multi-hop reasoning model, in order to support dynamic knowledge graphs. To benchmark this new task and evaluate the capability of adaptation, we introduce several evaluation metrics, and the experiments show that our proposed approach outperforms previous knowledge-grounded conversation models. The proposed corpus and model can motivate future research directions. © 2019 Association for Computational Linguistics |
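The abstract describes a decoder that, at each time step, chooses between a Seq2Seq network's vocabulary output and a multi-hop reasoning path over the knowledge graph. The following is a minimal sketch of that selection idea only, not the authors' implementation: the gate value, threshold, toy graph, character names, and scoring function are all illustrative assumptions.

```python
# Hypothetical sketch of per-timestep selection between a generic
# vocabulary distribution (the Seq2Seq branch) and entities reached by
# multi-hop traversal of a knowledge graph (the reasoning branch).

def multi_hop_entities(graph, start, hops):
    """Collect entities reachable from `start` within `hops` edges."""
    frontier, reached = {start}, set()
    for _ in range(hops):
        nxt = set()
        for node in frontier:
            for neighbor in graph.get(node, []):
                if neighbor not in reached:
                    nxt.add(neighbor)
                    reached.add(neighbor)
        frontier = nxt
    return reached

def decode_step(gate_prob, vocab_scores, graph, context_entity,
                hops=2, threshold=0.5):
    """Emit a KG entity if the gate fires, else the best vocabulary word."""
    if gate_prob >= threshold:
        candidates = multi_hop_entities(graph, context_entity, hops)
        if candidates:
            # Rank reachable entities by their score (0 if unscored).
            return max(candidates, key=lambda e: vocab_scores.get(e, 0.0))
    return max(vocab_scores, key=vocab_scores.get)

# Toy dynamic knowledge graph: character -> related entities; edges can
# change between episodes, which is what the new task stresses.
kg = {"Rachel": ["Ross", "Monica"], "Ross": ["Ben"]}
scores = {"said": 0.4, "Ben": 0.3, "Monica": 0.2}
print(decode_step(0.9, scores, kg, "Rachel"))  # reasoning branch -> Ben
print(decode_step(0.1, scores, kg, "Rachel"))  # Seq2Seq branch -> said
```

Swapping in a different `kg` dictionary at test time, without retraining, mirrors the zero-shot adaptation to updated graphs that the benchmark is designed to measure.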
URI: | https://www.scopus.com/inward/record.url?eid=2-s2.0-85084288031&partnerID=40&md5=e4262146741d134ceff7c12ef1ab3d43 |
SDG/Keywords: | Graphic methods; Knowledge representation; Linguistics; Dialogue generations; Evaluation metrics; Future research directions; Knowledge graphs; Preliminary model; Reasoning models; Sequence modeling; Natural language processing systems |
Appears in Collections: | Department of Electrical Engineering |
Items in the IR system are protected by copyright, with all rights reserved, unless otherwise specified in their copyright terms.