Title: Transcribing code-switched bilingual lectures using deep neural networks with unit merging in acoustic modeling
Authors: Yeh, C.-F.; Lee, Lin-Shan
Type: conference paper
Year: 2014
DOI: 10.1109/ICASSP.2014.6853590
Scopus EID: 2-s2.0-84905252028
Record ID: 15206149
Date added: 2020-06-11
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/498581
Scopus: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84905252028&doi=10.1109%2fICASSP.2014.6853590&partnerID=40&md5=c90bea739daa265177e0a45bef18c7ef

Abstract: This paper considers the transcription of bilingual code-switched speech, a widely observed yet under-investigated phenomenon: words or phrases of a guest language are inserted into utterances of a host language, so the languages switch back and forth within an utterance, and far less data are available for the guest language. Two approaches based on deep neural networks (DNNs) were tested and analyzed: using DNN bottleneck features in an HMM/GMM system (BF-HMM/GMM), and modeling context-dependent HMM senones with a DNN (CD-DNN-HMM). In both cases, unit merging (and recovery) techniques in acoustic modeling were used to handle the data imbalance problem. Improved recognition accuracies were observed with unit merging (and recovery) for both approaches under different conditions. © 2014 IEEE.

Author keywords: Bilingual; Code-switching; Deep Neural Networks; Speech Recognition; Unit Merging
SDGs: SDG4
Indexed keywords: Codes (symbols); Computer system recovery; FORTH (programming language); Signal processing; Speech recognition; Transcription; Acoustic model; Bilingual; Bottleneck features; Code-switching; Context dependent; Data imbalance; Deep neural networks; Recognition accuracy; Merging