A multi-modal dialogue system for information navigation and retrieval across spoken document archives with topic hierarchies
Journal
Proceedings of ASRU 2005: 2005 IEEE Automatic Speech Recognition and Understanding Workshop
Journal Volume
2005
Pages
375 - 380
Date Issued
2005
Abstract
Unlike written documents, spoken documents cannot easily be displayed on screen and browsed by the user during retrieval. In this paper, we propose using multi-modal dialogues to help the user "navigate" across spoken document archives and retrieve the desired documents, based on a topic hierarchy constructed from key terms extracted from the retrieved spoken documents. An initial prototype system offering these functions has been developed, in which broadcast news in Mandarin Chinese serves as the example spoken documents and Named Entities (NEs) are used as the key terms for constructing the topic hierarchy. © 2005 IEEE.
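The abstract does not give implementation details; the Python sketch below only illustrates one plausible way to organize retrieved documents into a topic hierarchy keyed by extracted named entities. The field names (doc_id, named_entities) and the two-level category/entity grouping are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch: group retrieved spoken documents into a two-level
# topic hierarchy (NE category -> named entity -> document IDs).
from collections import defaultdict

def build_topic_hierarchy(retrieved_docs):
    """retrieved_docs: list of dicts with illustrative fields
    'doc_id' and 'named_entities', the latter being
    (category, surface_form) pairs extracted from the transcript."""
    hierarchy = defaultdict(lambda: defaultdict(list))
    for doc in retrieved_docs:
        for category, entity in doc["named_entities"]:
            # First level: NE category; second level: the entity itself.
            hierarchy[category][entity].append(doc["doc_id"])
    return hierarchy

# Toy usage example (data is illustrative only).
docs = [
    {"doc_id": "news_001",
     "named_entities": [("LOCATION", "Taipei"), ("ORGANIZATION", "IEEE")]},
    {"doc_id": "news_002",
     "named_entities": [("LOCATION", "Taipei")]},
]
for category, entities in build_topic_hierarchy(docs).items():
    for entity, doc_ids in entities.items():
        print(category, "->", entity, "->", doc_ids)
```

A hierarchy of this shape would let a dialogue system present topic nodes to the user and narrow the retrieved set as the user drills down, in the spirit of the navigation function the abstract describes.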
Event(s)
ASRU 2005: 2005 IEEE Automatic Speech Recognition and Understanding Workshop
Other Subjects
Electronic document identification systems; Information analysis; Information retrieval; Rapid prototyping; Information navigation; Multi-modal dialogue system; Named Entities (NE); Spoken document archives; Pattern recognition systems
Type
conference paper
