Introducing Semantics into Speech Encoders
Journal
Proceedings of the Annual Meeting of the Association for Computational Linguistics
Journal Volume
1
ISBN
978-1-959429-72-2
Date Issued
2023-01-01
Author(s)
Xu, Derek
Dong, Shuyan
Wang, Changhan
Kim, Suyoun
Lin, Zhaojiang
Liu, Bing
Shrivastava, Akshat
Li, Shang-Wen
Tseng, Liang-Hsuan
Lin, Guan-Ting
Baevski, Alexei
Sun, Yizhou
Wang, Wei
Abstract
Recent studies find that existing self-supervised speech encoders contain primarily acoustic rather than semantic information. As a result, pipelined systems that feed supervised automatic speech recognition (ASR) output into a large language model (LLM) achieve state-of-the-art results on semantic spoken language tasks by utilizing the LLM's rich semantic representations. These systems come at the cost of labeled audio transcriptions, which are expensive and time-consuming to obtain. We propose a task-agnostic, unsupervised way of incorporating semantic information from LLMs into self-supervised speech encoders without labeled audio transcriptions. By introducing semantics, we improve existing speech encoders' spoken language understanding (SLU) performance by over 5% on intent classification (IC), with modest gains on named entity resolution (NER) and slot filling (SF), and improve spoken question answering (SQA) FF1 score by over 2%. Our approach, which uses no ASR data, achieves performance similar to methods trained on over 100 hours of labeled audio transcripts, demonstrating the feasibility of unsupervised semantic augmentations to existing speech encoders.
Type
conference paper
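
The abstract describes distilling LLM semantics into a self-supervised speech encoder without transcripts, but this record does not spell out the training objective. The sketch below illustrates one plausible setup of that kind, under assumptions not stated here: utterance embeddings come from mean-pooling the speech encoder's frame states, the teacher signal is an LLM sentence embedding of a pseudo-transcript from an unsupervised ASR model, and the loss is cosine distance. All names (SemanticDistillationHead, semantic_distillation_loss) are hypothetical, not taken from the paper.

# Hypothetical sketch, not the paper's published method: align pooled
# speech representations with LLM sentence embeddings via cosine distance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticDistillationHead(nn.Module):
    """Maps frame-level speech encoder states into the teacher's
    sentence-embedding space via mean pooling and a linear projection."""
    def __init__(self, speech_dim: int, text_dim: int):
        super().__init__()
        self.proj = nn.Linear(speech_dim, text_dim)

    def forward(self, frame_states: torch.Tensor) -> torch.Tensor:
        # frame_states: (batch, n_frames, speech_dim)
        utterance = frame_states.mean(dim=1)  # pool frames to one vector
        return self.proj(utterance)           # (batch, text_dim)

def semantic_distillation_loss(student: torch.Tensor,
                               teacher: torch.Tensor) -> torch.Tensor:
    # Pull the pooled speech embedding toward the LLM embedding of the
    # same utterance's (pseudo-)transcript; zero when perfectly aligned.
    return 1.0 - F.cosine_similarity(student, teacher, dim=-1).mean()

# Toy example with stand-in tensors: 768-dim speech encoder states
# distilled toward 384-dim teacher sentence embeddings.
head = SemanticDistillationHead(speech_dim=768, text_dim=384)
frame_states = torch.randn(4, 200, 768)  # speech encoder output (stand-in)
teacher_emb = torch.randn(4, 384)        # LLM embedding of an unsupervised
                                         # ASR pseudo-transcript (stand-in)
loss = semantic_distillation_loss(head(frame_states), teacher_emb)
loss.backward()  # gradients reach the projection (and encoder, if attached)

Because the teacher embeddings are computed from unlabeled audio via unsupervised ASR, a pipeline of this shape would require no labeled transcripts, consistent with the abstract's claim; the pooling, projection, and loss choices above are illustrative only.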