Defense Against Adversarial Attacks on Spoofing Countermeasures of ASV
Journal
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Journal Volume
2020-May
Pages
6564-6568
Date Issued
2020
Author(s)
Abstract
Various forefront countermeasure methods for automatic speaker verification (ASV) with considerable anti-spoofing performance were proposed in the ASVspoof 2019 challenge. However, previous work has shown that countermeasure models are vulnerable to adversarial examples indistinguishable from natural data. A good countermeasure model should not only be robust against spoofing audio, including synthetic, converted, and replayed audio, but also counteract deliberately generated examples by malicious adversaries. In this work, we introduce a passive defense method, spatial smoothing, and a proactive defense method, adversarial training, to mitigate the vulnerability of ASV spoofing countermeasure models against adversarial examples. This paper is among the first to use defense methods to improve the robustness of ASV spoofing countermeasure models under adversarial attacks. The experimental results show that these two defense methods help spoofing countermeasure models counter adversarial examples. © 2020 IEEE.
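Illustrative sketch (not from the paper): the two defenses named in the abstract can be outlined as a median-filter smoothing of the input spectrogram (passive) and an FGSM-style adversarial training step (proactive). Model, loss, window size, and epsilon below are hypothetical placeholders, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import median_filter
import torch
import torch.nn.functional as F

def spatial_smoothing(spectrogram: np.ndarray, window: int = 3) -> np.ndarray:
    """Passive defense: median-filter a (time x frequency) spectrogram with a
    square window, which tends to wash out small adversarial perturbations."""
    return median_filter(spectrogram, size=window)

def fgsm_example(model, x, y, epsilon=0.002):
    """Generate an FGSM adversarial example (hypothetical epsilon)."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.002):
    """Proactive defense: train on the clean batch and its FGSM-perturbed copy."""
    x_adv = fgsm_example(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```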
Subjects
Network security; Speech communication; Speech recognition; Anti-spoofing; Automatic speaker verification; Malicious adversaries; Passive defense; Proactive defense; Spatial smoothing; Audio signal processing
Type
conference paper