https://scholars.lib.ntu.edu.tw/handle/123456789/580905
Title: | Defense for black-box attacks on anti-spoofing models by self-supervised learning | Authors: | Wu, H.; Liu, A.T.; Lee, H.-Y. |
Keywords: | Acoustic noise; Network security; Speech communication; Speech recognition; Supervised learning; Anti-spoofing; Automatic speaker verification; Learning Based Models; Noise-to-signal ratios; Phone classifications; Task performance; Text to speech; Voice conversion; Learning systems |
Publication date: | 2020 | Volume: | 2020-October | Pages: | 3780-3784 |
Source publication: | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH |
Abstract: | High-performance anti-spoofing models for automatic speaker verification (ASV) have been widely used to protect ASV by identifying and filtering spoofing audio that is deliberately generated by text-to-speech, voice conversion, audio replay, etc. However, it has been shown that high-performance anti-spoofing models are vulnerable to adversarial attacks. Adversarial examples, which are indistinguishable from the original data but cause incorrect predictions, are dangerous for anti-spoofing models, and it is beyond dispute that we should detect them at any cost. To explore this issue, we propose to employ Mockingjay, a self-supervised learning based model, to protect anti-spoofing models against adversarial attacks in the black-box scenario. Self-supervised learning models are effective in improving downstream task performance, such as phone classification or ASR. However, their effect in defending against adversarial attacks has not yet been explored. In this work, we explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks. A layerwise noise-to-signal ratio (LNSR) is proposed to quantify the effectiveness of deep models in countering adversarial noise. Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples and successfully counter black-box attacks. Copyright © 2020 ISCA |
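The layerwise noise-to-signal ratio (LNSR) mentioned in the abstract can be understood as the per-layer ratio between the norm of the adversarial perturbation as propagated into a layer's hidden representation and the norm of the clean representation. A minimal sketch, assuming list-of-array inputs (the function name and interface are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def layerwise_nsr(clean_layers, adv_layers):
    """Per-layer noise-to-signal ratio: ||h_adv - h_clean|| / ||h_clean||.

    clean_layers, adv_layers: lists of same-shaped numpy arrays, one per
    layer, holding a model's hidden representations for a clean input and
    for its adversarial counterpart.
    """
    return [
        float(np.linalg.norm(a - c) / np.linalg.norm(c))
        for c, a in zip(clean_layers, adv_layers)
    ]
```

A ratio that shrinks with depth would indicate that deeper representations attenuate the adversarial noise, which is the robustness property the paper attributes to Mockingjay's high-level features.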
URI: | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85098110118&doi=10.21437%2fInterspeech.2020-2026&partnerID=40&md5=401f760371ceb60092ffec202a05437d https://scholars.lib.ntu.edu.tw/handle/123456789/580905 |
ISSN: | 2308-457X | DOI: | 10.21437/Interspeech.2020-2026 |
Appears in Collections: | Department of Electrical Engineering |
Items in the IR system are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.