https://scholars.lib.ntu.edu.tw/handle/123456789/498581
Title: Transcribing Code-Switched Bilingual Lectures Using Deep Neural Networks with Unit Merging in Acoustic Modeling
Authors: Yeh, C.-F.; Lee, L.-S.
Keywords: Bilingual; Code-switching; Deep Neural Networks; Speech Recognition; Unit Merging
Issue Date: 2014
Pages: 220-224
Source: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Conference: 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2014)
Abstract: This paper considers the transcription of widely observed yet less investigated bilingual code-switched speech, in which words or phrases of a guest language are inserted into utterances of a host language, so the languages switch back and forth within an utterance and much less data is available for the guest language. Two approaches based on deep neural networks (DNNs) were tested and analyzed: using DNN bottleneck features in an HMM/GMM system (BF-HMM/GMM), and modeling context-dependent HMM senones directly with a DNN (CD-DNN-HMM). In both cases, unit merging (and recovery) techniques in acoustic modeling were used to handle the data imbalance problem, and improved recognition accuracies were observed with unit merging (and recovery) for both approaches under different conditions. © 2014 IEEE.
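The unit merging (and recovery) idea in the abstract can be illustrated with a minimal sketch: data-sparse guest-language phone units are merged into acoustically similar host-language units so they can share training data, while a recovery map remembers the original labels. The distance-based merging criterion, the function names, and the toy unit inventories below are all illustrative assumptions, not the authors' exact algorithm.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def merge_units(host_units, guest_units, threshold):
    """Map each guest unit to its nearest host unit when close enough.

    host_units / guest_units: dict mapping unit name -> mean feature vector
    (a stand-in for whatever acoustic statistics a real system would use).
    Returns (merge_map, recovery_map): merge_map sends every unit to the
    shared modeling unit; recovery_map records which guest units were
    absorbed by each host unit, so the original labels can be recovered.
    """
    merge_map = {h: h for h in host_units}  # host units model themselves
    recovery_map = {}
    for g, vec in guest_units.items():
        nearest = min(host_units, key=lambda h: euclidean(vec, host_units[h]))
        if euclidean(vec, host_units[nearest]) <= threshold:
            merge_map[g] = nearest  # share the host unit's training data
            recovery_map.setdefault(nearest, []).append(g)
        else:
            merge_map[g] = g        # too dissimilar: keep a separate unit
    return merge_map, recovery_map

# Toy example: Mandarin (host) and English (guest) vowel-like units.
host = {"a_zh": [1.0, 0.0], "i_zh": [0.0, 1.0]}
guest = {"ae_en": [0.9, 0.1], "uw_en": [5.0, 5.0]}
merged, recovery = merge_units(host, guest, threshold=0.5)
print(merged)   # ae_en is merged into a_zh; uw_en stays a separate unit
```

A real system would merge at the senone or triphone-state level using model-based divergences rather than a plain Euclidean distance, but the shape of the bookkeeping (a merge map for training, a recovery map for decoding) is the same.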
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/498581
Scopus: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84905252028&doi=10.1109%2fICASSP.2014.6853590&partnerID=40&md5=c90bea739daa265177e0a45bef18c7ef
ISSN: 1520-6149
DOI: 10.1109/ICASSP.2014.6853590
SDG/Keyword: Codes (symbols); Computer system recovery; Signal processing; Speech recognition; Transcription; Acoustic model; Bilingual; Bottleneck features; Code-switching; Context dependent; Data imbalance; Deep neural networks; Recognition accuracy; Merging
Appears in Collections: Department of Electrical Engineering (電機工程學系)