https://scholars.lib.ntu.edu.tw/handle/123456789/607149
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Huang S.-F | en_US |
dc.contributor.author | Chuang S.-P | en_US |
dc.contributor.author | Liu D.-R | en_US |
dc.contributor.author | Chen Y.-C | en_US |
dc.contributor.author | Yang G.-P | en_US |
dc.contributor.author | Lee H.-Y. | en_US |
dc.creator | Huang S.-F;Chuang S.-P;Liu D.-R;Chen Y.-C;Yang G.-P;Lee H.-Y. | - |
dc.date.accessioned | 2022-04-25T06:42:27Z | - |
dc.date.available | 2022-04-25T06:42:27Z | - |
dc.date.issued | 2021 | - |
dc.identifier.issn | 2308-457X | - |
dc.identifier.uri | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85119205277&doi=10.21437%2fInterspeech.2021-763&partnerID=40&md5=ec4477ece29732cedbc0c143ad1eb2da | - |
dc.identifier.uri | https://scholars.lib.ntu.edu.tw/handle/123456789/607149 | - |
dc.description.abstract | Speech separation has been well developed, with the very successful permutation invariant training (PIT) approach, although the frequent label assignment switching that occurs during PIT training remains a problem when better convergence speed and achievable performance are desired. In this paper, we propose to perform self-supervised pre-training to stabilize the label assignment in training the speech separation model. Experiments over several types of self-supervised approaches, several typical speech separation models and two different datasets showed that very good improvements are achievable if a proper self-supervised approach is chosen. Copyright © 2021 ISCA. | - |
dc.relation.ispartof | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH | - |
dc.subject | Label Permutation Switch | - |
dc.subject | Self-supervised Pre-train | - |
dc.subject | Speech Enhancement | - |
dc.subject | Speech Separation | - |
dc.subject | Separation | - |
dc.subject | Source separation | - |
dc.subject | Speech analysis | - |
dc.subject | Speech communication | - |
dc.subject | Achievable performance | - |
dc.subject | Convergence speed | - |
dc.subject | Pre-training | - |
dc.subject | Separation model | - |
dc.subject | Speed performance | - |
dc.title | Stabilizing label assignment for speech separation by self-supervised pre-training | en_US |
dc.type | conference paper | en |
dc.identifier.doi | 10.21437/Interspeech.2021-763 | - |
dc.identifier.scopus | 2-s2.0-85119205277 | - |
dc.relation.pages | 2303-2307 | - |
dc.relation.journalvolume | 3 | - |
item.cerifentitytype | Publications | - |
item.fulltext | no fulltext | - |
item.openairecristype | http://purl.org/coar/resource_type/c_5794 | - |
item.openairetype | conference paper | - |
item.grantfulltext | none | - |
crisitem.author.dept | Electrical Engineering | - |
crisitem.author.dept | Intel-NTU Connected Context Computing Center | - |
crisitem.author.dept | Communication Engineering | - |
crisitem.author.dept | Computer Science and Information Engineering | - |
crisitem.author.dept | Networking and Multimedia | - |
crisitem.author.dept | Center for Artificial Intelligence and Advanced Robotics | - |
crisitem.author.dept | Master's Program in Smart Medicine and Health Informatics (SMARTMHI) | - |
crisitem.author.orcid | 0000-0002-9654-5747 | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
crisitem.author.parentorg | Others: University-Level Research Centers | - |
crisitem.author.parentorg | Others: International Research Centers | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
crisitem.author.parentorg | Others: University-Level Research Centers | - |
crisitem.author.parentorg | International College | - |
Appears in Collections: | Electrical Engineering |
Items in this institutional repository are protected by copyright, with all rights reserved, unless otherwise indicated by their stated copyright terms.
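The permutation invariant training (PIT) mentioned in the abstract scores every assignment of estimated sources to reference sources and trains against the best one; the "label assignment switching" the paper addresses is the chosen permutation changing between training steps. The sketch below is a minimal illustration assuming a mean-squared-error per-source loss and brute-force enumeration of permutations; the function name `pit_mse` and the toy arrays are hypothetical, not taken from the paper.

```python
# Minimal sketch of a PIT loss: try every assignment of estimated
# sources to reference sources and keep the one with the lowest MSE.
from itertools import permutations

import numpy as np

def pit_mse(estimates, targets):
    """Return (lowest MSE over all assignments, chosen permutation).

    perm[i] = j means estimate j is matched to reference source i.
    Brute force is fine for the 2-3 speakers typical of separation.
    """
    n = len(targets)
    best_loss, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        loss = np.mean([np.mean((estimates[j] - targets[i]) ** 2)
                        for i, j in enumerate(perm)])
        if loss < best_loss:
            best_loss, best_perm = loss, perm
    return best_loss, best_perm

# Toy example with two sources: the estimates are the references
# swapped, so the optimal assignment is the swapping permutation.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
estimates = [targets[1], targets[0]]
loss, perm = pit_mse(estimates, targets)
print(loss, perm)  # 0.0 (1, 0)
```

When the estimates sit near the decision boundary between two permutations, the argmin flips from step to step; the paper's proposal is to start from self-supervised pre-trained representations so the assignment stabilizes early in training.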