Title: Label Reuse for Efficient Semi-Supervised Learning
Authors: Hsieh, T.-H.; Chen, J.-C.; Chen, Chu-Song
Type: conference paper
Date issued: 2020
Date accessioned: 2021-09-02
Date available: 2021-09-02
ISSN: 1520-6149
DOI: 10.1109/ICASSP40776.2020.9053362
Scopus ID: 2-s2.0-85089240145
Scopus URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85089240145&doi=10.1109%2fICASSP40776.2020.9053362&partnerID=40&md5=bab8b8f7f9d2293f1fac8004bdacc5e8
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/581329

Abstract: In this paper, we propose a new learning strategy for semi-supervised deep learning algorithms, called label reuse, which aims to significantly reduce the expensive computational cost of generating pseudo labels for each unlabeled training instance, since pseudo labels must be repeatedly re-evaluated throughout the training process. For label reuse, we first divide the unlabeled training data into several partitions, replicate each partition several times, and place the copies consecutively in the training queue, so that the pseudo labels computed the first time a partition is visited can be reused until they are invalidated. To evaluate the effectiveness of the proposed approach, we conduct extensive experiments on CIFAR-10 [1] and SVHN [2] by applying it to the recent state-of-the-art semi-supervised deep learning approach, MixMatch [3]. The results demonstrate that the proposed approach significantly reduces the pseudo-label computation cost of MixMatch while maintaining comparable classification performance. © 2020 IEEE.

Keywords: Audio signal processing; Cost reduction; Deep learning; Semi-supervised learning; Speech communication; Classification performance; Computational costs; Large amounts; Learning approach; Learning strategy; Semi-supervised; Training data; Training process; Learning algorithms
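To make the scheduling idea in the abstract concrete, below is a minimal sketch of the label-reuse training queue: the unlabeled data is split into partitions, each partition is placed in the queue several times in a row, and the expensive pseudo labels are computed only on the first visit and reused by the consecutive replicas. All interface names here (num_partitions, num_copies, compute_pseudo_labels, train_step) are illustrative assumptions; the paper does not specify this exact API.

```python
# Sketch of the label-reuse schedule described in the abstract.
# compute_pseudo_labels and train_step are hypothetical callbacks standing
# in for the pseudo-label generator and the model update of a method such
# as MixMatch; they are assumptions, not the authors' implementation.
import numpy as np

def label_reuse_epoch(unlabeled_indices, compute_pseudo_labels, train_step,
                      num_partitions=4, num_copies=3, seed=0):
    """Divide the unlabeled data into partitions, replicate each partition
    `num_copies` times consecutively in the training queue, and reuse the
    pseudo labels computed on the first visit until they are invalidated
    (i.e., until the queue moves on to the next partition)."""
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(unlabeled_indices)
    for partition in np.array_split(shuffled, num_partitions):
        pseudo = compute_pseudo_labels(partition)  # expensive: run once per partition
        for _ in range(num_copies):                # consecutive replicas reuse the labels
            train_step(partition, pseudo)
```

Under this schedule, pseudo labels are evaluated num_partitions times per epoch instead of once per unlabeled batch, which is the source of the computational savings the abstract reports; larger num_copies trades label freshness for lower cost.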