Understanding self-attention of self-supervised audio transformers
Journal
Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Journal Volume
2020-October
Pages
3785-3789
Date Issued
2020
Author(s)
Abstract
Self-supervised Audio Transformers (SAT) enable great success in many downstream speech applications like ASR, but how they work has not been widely explored yet. In this work, we present multiple strategies for the analysis of attention mechanisms in SAT. We categorize attentions into explainable categories, where we discover each category possesses its own unique functionality. We provide a visualization tool for understanding multi-head self-attention, importance ranking strategies for identifying critical attention, and attention refinement techniques to improve model performance. © 2020 ISCA
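As context for the kind of analysis the abstract describes, the following is a minimal sketch (not the paper's actual tool) of how per-head self-attention maps can be extracted from a transformer layer and plotted for inspection. It uses PyTorch's nn.MultiheadAttention on random stand-in hidden states; all shapes and names here are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: visualize per-head self-attention maps of one transformer layer.
# Stand-in for the paper's analysis; hidden states and sizes are assumptions.
import torch
import matplotlib.pyplot as plt

torch.manual_seed(0)

num_heads, d_model, seq_len = 4, 64, 50
attn = torch.nn.MultiheadAttention(d_model, num_heads, batch_first=True)

# Placeholder for frame-level hidden states from a self-supervised audio encoder.
hidden = torch.randn(1, seq_len, d_model)

# average_attn_weights=False keeps one map per head: (batch, heads, query, key).
_, weights = attn(hidden, hidden, hidden,
                  need_weights=True, average_attn_weights=False)

fig, axes = plt.subplots(1, num_heads, figsize=(3 * num_heads, 3))
for h, ax in enumerate(axes):
    ax.imshow(weights[0, h].detach(), aspect="auto", origin="lower")
    ax.set_title(f"head {h}")
    ax.set_xlabel("key frame")
axes[0].set_ylabel("query frame")
plt.tight_layout()
plt.show()
```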
Subjects
Computer applications; Computer simulation; Attention mechanisms; Audio transformers; Model performance; Multiple strategy; Ranking strategy; Refinement techniques; Speech applications; Visualization tools; Speech communication
Type
conference paper
