Title: A Pixel-Level Meta-Learner for Weakly Supervised Few-Shot Semantic Segmentation
Authors: Lee Y.-H.; Yang F.-E.; Yu-Chiang Wang
Year: 2022
Date available: 2023-06-09
Type: conference paper
DOI: 10.1109/WACV51458.2022.00167
Scopus EID: 2-s2.0-85126130733
Scopus URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85126130733&doi=10.1109%2fWACV51458.2022.00167&partnerID=40&md5=976498eade258b9c96cdf95db68cdc4d
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/632089

Abstract: Few-shot semantic segmentation addresses the learning task in which only a few images with ground-truth pixel-level labels are available for the novel classes of interest. One is typically required to collect a large amount of data (i.e., base classes) with such ground-truth information, followed by meta-learning strategies to address the above learning task. When only image-level semantic labels can be observed during both training and testing, the task becomes the even more challenging one of weakly supervised few-shot semantic segmentation. To address this problem, we propose a novel meta-learning framework, which predicts pseudo pixel-level segmentation masks from a limited amount of data and their semantic labels. More importantly, our learning scheme further exploits the produced pixel-level information for query image inputs with segmentation guarantees. Thus, our proposed learning model can be viewed as a pixel-level meta-learner. Through extensive experiments on benchmark datasets, we show that our model achieves satisfactory performance under fully supervised settings, and performs favorably against state-of-the-art methods under weakly supervised settings. © 2022 IEEE.

Author keywords: Deep Learning Segmentation; Few-shot; Grouping and Shape; Semi- and Un-supervised Learning; Transfer
SDGs: SDG3; SDG4
Index keywords: Computer vision; Deep learning; Pixels; Semantics; Deep learning segmentation; Few-shot; Grouping and shape; Learning tasks; Meta-learner; Pixel level; Semantic segmentation; Semi-supervised learning; Transfer; Un-supervised learning; Semantic Segmentation