TERA: Self-Supervised Learning of Transformer Encoder Representation for Speech
Journal
IEEE/ACM Transactions on Audio Speech and Language Processing
Journal Volume
29
Pages
2351-2366
Date Issued
2021
Author(s)
Abstract
We introduce a self-supervised speech pre-training method called TERA, which stands for Transformer Encoder Representations from Alteration. Recent approaches often learn through a single auxiliary task such as contrastive prediction, autoregressive prediction, or masked reconstruction. Unlike previous methods, we use alteration along three orthogonal axes to pre-train Transformer Encoders on a large amount of unlabeled speech. The model learns by reconstructing acoustic frames from their altered counterparts, where a stochastic policy alters the input along three dimensions: time, frequency, and magnitude. TERA can be used for speech representation extraction or for fine-tuning with downstream models. We evaluate TERA on several downstream tasks, including phoneme classification, keyword spotting, speaker recognition, and speech recognition, and present a large-scale comparison of self-supervised models. TERA achieves strong performance in this comparison, improving upon surface features and outperforming previous models. In our experiments, we study the effect of applying different alteration techniques, pre-training on more data, and pre-training on various features. We analyze different model sizes and find that smaller models are stronger representation learners than larger models, while larger models are more effective for downstream fine-tuning. Furthermore, we show the proposed method is transferable to downstream datasets not used in pre-training.
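As a rough illustration of the three-axis alteration described in the abstract, the sketch below applies stochastic time, frequency, and magnitude alterations to a (time x frequency) log-mel spectrogram, the kind of input an encoder would then be trained to reconstruct. The function name, probabilities, and mask widths here are illustrative assumptions, not the paper's exact configuration.

# Minimal sketch (assumed parameters) of time/frequency/magnitude alteration
# on a log-mel feature matrix of shape [T, F].
import numpy as np

def alter(frames, rng, time_p=0.15, max_time_width=7,
          freq_p=0.5, max_freq_width=8, noise_p=0.1):
    """Return an altered copy of `frames` and a boolean mask of the
    time positions that were altered."""
    x = frames.copy()
    T, F = x.shape
    altered = np.zeros(T, dtype=bool)

    # Time alteration: zero out random contiguous blocks of frames.
    t = 0
    while t < T:
        if rng.random() < time_p:
            width = int(rng.integers(1, max_time_width + 1))
            x[t:t + width] = 0.0
            altered[t:t + width] = True
            t += width
        else:
            t += 1

    # Frequency alteration: zero out a random block of channels.
    if rng.random() < freq_p:
        width = int(rng.integers(1, max_freq_width + 1))
        f0 = int(rng.integers(0, max(1, F - width)))
        x[:, f0:f0 + width] = 0.0

    # Magnitude alteration: add Gaussian noise to a fraction of frames.
    noisy = rng.random(T) < noise_p
    x[noisy] += rng.normal(0.0, 0.2, size=(int(noisy.sum()), F))

    return x, altered

# Usage: the encoder is trained to reconstruct `frames` from `x`,
# e.g. with an L1 loss concentrated on the altered positions.
rng = np.random.default_rng(0)
frames = rng.normal(size=(400, 80)).astype(np.float32)  # placeholder log-mel features
x, altered = alter(frames, rng)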
Subjects
Pre-training
Representation
Self-supervised
Signal encoding
Speech
Speech recognition
Stochastic systems
Auto-regressive
Keyword spotting
Large amounts
Orthogonal axes
Phoneme classification
Speaker recognition
Stochastic policy
Surface feature
Stochastic models
Type
journal article