PGADA: Perturbation-Guided Adversarial Alignment for Few-Shot Learning Under the Support-Query Shift
Journal
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Journal Volume
13280 LNAI
Pages
3-15
Date Issued
2022
Author(s)
Abstract
Few-shot learning methods aim to embed data into a low-dimensional embedding space and then classify unseen query data against the seen support set. While these works assume that the support set and the query set lie in the same embedding space, in the real world a distribution shift usually occurs between the two, i.e., the Support-Query Shift. Although optimal transportation has shown convincing results in aligning different distributions, we find that small perturbations in the images can significantly misguide the optimal transportation and thus degrade model performance. To relieve this misalignment, we first propose a novel adversarial data augmentation method, Perturbation-Guided Adversarial Alignment (PGADA), which generates hard examples in a self-supervised manner. In addition, we introduce Regularized Optimal Transportation to derive a smooth optimal transportation plan. Extensive experiments on three benchmark datasets show that our framework significantly outperforms eleven state-of-the-art methods. Our code is available at https://github.com/772922440/PGADA. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
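The record does not detail the paper's Regularized Optimal Transportation. As background only, a minimal sketch of entropy-regularized optimal transport via Sinkhorn iterations, the standard way to obtain a smooth transport plan between two discrete distributions; the function name, cost matrix, and parameters below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sinkhorn_plan(cost, a, b, eps=0.1, n_iters=200):
    """Entropy-regularized OT (Sinkhorn iterations) -- illustrative sketch.

    cost: (n, m) pairwise cost matrix between support and query embeddings.
    a, b: marginal weights of the two distributions (sum to 1 each).
    eps:  regularization strength; larger eps gives a smoother plan.
    Returns a transport plan whose rows/columns approximately match a and b.
    """
    K = np.exp(-cost / eps)          # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)            # rescale columns toward marginal b
        u = a / (K @ v)              # rescale rows toward marginal a
    return u[:, None] * K * v[None, :]

# toy example: two uniform 3-point distributions
rng = np.random.default_rng(0)
cost = rng.random((3, 3))
a = b = np.ones(3) / 3
plan = sinkhorn_plan(cost, a, b)
```

The entropy term keeps the plan dense and smooth, which is what makes such a scheme less sensitive to small perturbations than unregularized optimal transport.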
Subjects
Adversarial data augmentation; Few-shot learning; Optimal transportation
Other Subjects
Classification (of information); Computer vision; Perturbation techniques; Adversarial data augmentation; Data augmentation; Different distributions; Few-shot learning; Learning methods; Low dimensional embedding; Optimal transportations; Query datum; Real-world; Embeddings
Type
conference paper
