https://scholars.lib.ntu.edu.tw/handle/123456789/427989
Title: Interactive Spoken Content Retrieval by Deep Reinforcement Learning
Authors: HUNG-YI LEE; Pei-Hung Chung; Yen-Chen Wu; Tzu-Hsiang Lin; Tsung-Hsien Wen; LIN-SHAN LEE
Keywords: deep-Q-learning; reinforcement learning; spoken content retrieval; user-machine interaction
Publication Date: 2018
Source Publication: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Abstract: For text content retrieval, the user can easily scan through and select from a list of retrieved items. This is impossible for spoken content retrieval, because the retrieved items cannot be easily displayed on-screen. In addition, because speech recognition carries a high degree of uncertainty, retrieval results can be very noisy. One way to counter these difficulties is user-machine interaction: the machine can take different actions to interact with the user and obtain better retrieval results before showing them. For example, the machine can request extra information from the user, return a list of topics for the user to select from, and so on. In this paper, we propose using a deep Q-network (DQN) to determine the machine actions for interactive spoken content retrieval. The DQN bypasses the need to estimate hand-crafted states, and determines the best action directly from the present retrieval results, even without any human knowledge. It is shown to achieve significantly better performance than the previous hand-crafted states. We further find that double DQN and dueling DQN improve on the naive version.
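The DQN-based action selection described in the abstract can be sketched minimally. Everything below is an illustrative assumption rather than the paper's actual system: the action names, the feature vector summarizing the current retrieval results, and the single linear layer standing in for the trained Q-network.

```python
import numpy as np

# Hypothetical machine actions in the interactive-retrieval setting
# (names are illustrative, not taken from the paper).
ACTIONS = ["show_results", "request_extra_info", "return_topic_list"]

rng = np.random.default_rng(0)

# A minimal stand-in for the DQN: one linear layer mapping features of the
# current retrieval results to one Q-value per machine action. A real DQN
# would be a deeper network trained with experience replay and a target
# network (with double/dueling variants as in the paper).
FEATURE_DIM = 4
W = rng.normal(size=(len(ACTIONS), FEATURE_DIM))
b = np.zeros(len(ACTIONS))

def q_values(features):
    """Return Q(s, a) for every machine action given state features s."""
    return W @ features + b

def select_action(features, epsilon=0.1):
    """Epsilon-greedy selection over the machine actions."""
    if rng.random() < epsilon:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(q_values(features)))

# Made-up features extracted from the present retrieval results,
# e.g. scores and confidence statistics of the top-ranked items.
state = np.array([0.8, 0.3, 0.5, 0.1])
action = select_action(state, epsilon=0.0)
print(ACTIONS[action])
```

With `epsilon=0.0` the selection is a pure argmax over the Q-values, which mirrors how the trained network picks the next interaction step directly from the retrieval results rather than from hand-crafted state estimates.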
URI: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85049486998&doi=10.1109%2fTASLP.2018.2852739&partnerID=40&md5=c168e272cfe6b94f02e0d3daae1d6d7c
ISSN: 2329-9290
DOI: 10.1109/TASLP.2018.2852739
Appears in Collections: Department of Electrical Engineering
Items in the institutional repository are protected by copyright, with all rights reserved, unless otherwise indicated by their specific license terms.