Sequence-to-Sequence Automatic Speech Recognition with Word Embedding Regularization and Fused Decoding
Journal
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Journal Volume
2020-May
Pages
7879-7883
Date Issued
2020
Author(s)
Abstract
In this paper, we investigate the benefit that off-the-shelf word embeddings can bring to sequence-to-sequence (seq-to-seq) automatic speech recognition (ASR). We first introduce a word embedding regularization that maximizes the cosine similarity between a transformed decoder feature and the target word embedding. Based on the regularized decoder, we further propose a fused decoding mechanism. This allows the decoder to consider semantic consistency during decoding by absorbing the information carried by the transformed decoder feature, which is learned to be close to the target word embedding. Initial results on LibriSpeech demonstrate that pre-trained word embeddings can significantly lower ASR recognition error at negligible cost, and that the choice of word embedding algorithm among Skip-gram, CBOW and BERT matters. © 2020 IEEE.
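The regularization described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden toy (NumPy only, made-up names `proj_matrix`, `embedding_regularization_loss`, and random toy dimensions), not the paper's implementation: it projects each decoder feature into the embedding space with a linear map and penalizes one minus the cosine similarity to the target word embedding.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def embedding_regularization_loss(decoder_features, proj_matrix, target_embeddings):
    """Toy regularization term: mean of 1 - cos(W h_t, e_t) over time steps t.

    decoder_features:  (T, d_dec) array of decoder hidden states (hypothetical)
    proj_matrix:       (d_emb, d_dec) linear map into the embedding space
    target_embeddings: (T, d_emb) pre-trained embeddings of the target words
    """
    losses = []
    for h, e in zip(decoder_features, target_embeddings):
        transformed = proj_matrix @ h  # transformed decoder feature
        losses.append(1.0 - cosine_similarity(transformed, e))
    return float(np.mean(losses))

# Toy usage with random features; dimensions are arbitrary choices for illustration.
rng = np.random.default_rng(0)
T, d_dec, d_emb = 4, 8, 6
H = rng.normal(size=(T, d_dec))   # decoder features
W = rng.normal(size=(d_emb, d_dec))
E = rng.normal(size=(T, d_emb))   # target word embeddings
print(embedding_regularization_loss(H, W, E))
```

When the transformed features align perfectly with the targets the loss is zero; maximally opposed vectors give a loss of two, so minimizing this term pushes the decoder features toward the pre-trained embedding directions.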
Subjects
automatic speech recognition; decoding; regularization; sequence-to-sequence; word embedding
Type
conference paper
