Weakly supervised saliency detection with a category-driven map generator
Journal
British Machine Vision Conference 2017, BMVC 2017
Date Issued
2017
Author(s)
Abstract
Top-down saliency detection aims to highlight the regions of a specific object category, and typically relies on pixel-wise annotated training data. In this paper, we address the high cost of collecting such training data by presenting a weakly supervised approach to object saliency detection, where only image-level labels, indicating the presence or absence of a target object in an image, are available. The proposed framework is composed of two deep modules, an image-level classifier and a pixel-level map generator. While the former distinguishes images with objects of interest from the rest, the latter is learned to generate saliency maps so that the training images masked by the maps can be better predicted by the former. In addition to the top-down guidance from class labels, the map generator is derived by also referring to other image information, including the background prior, area balance and spatial consensus. This information greatly regularizes the training process and reduces the risk of overfitting, especially when learning deep models with few training data. In the experiments, we show that our method achieves superior results, and even outperforms many strongly supervised methods. © 2017. The copyright of this document resides with its authors.
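As a rough illustration of the training idea summarized above, the sketch below shows how a pixel-level map generator can be trained from image-level labels alone: the generated map masks the input image, and the masked image must still be recognized by the image-level classifier. This is only a minimal PyTorch sketch under assumed module names (Classifier, MapGenerator) and an assumed area-balance weight; the paper's actual architectures, background-prior and spatial-consensus terms, and joint training details are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Classifier(nn.Module):
    """Image-level classifier: predicts presence/absence of the target category (hypothetical stand-in)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(16, 1)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))  # logit for the target class

class MapGenerator(nn.Module):
    """Pixel-level map generator: outputs a saliency map in [0, 1] (hypothetical stand-in)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return torch.sigmoid(self.net(x))

classifier, generator = Classifier(), MapGenerator()
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

images = torch.rand(4, 3, 64, 64)   # toy batch of images
labels = torch.ones(4, 1)           # image-level labels only (target object present)
lambda_area = 0.1                   # assumed weight for the area-balance regularizer

for _ in range(10):
    sal = generator(images)          # predicted saliency maps, one channel per image
    masked = images * sal            # keep only the regions the map deems salient
    logits = classifier(masked)      # the masked image must still be classified correctly
    cls_loss = F.binary_cross_entropy_with_logits(logits, labels)
    # Area-balance term: discourage trivial all-ones or all-zeros maps.
    area_loss = (sal.mean() - 0.5).abs()
    loss = cls_loss + lambda_area * area_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this toy setup only the generator is updated; the paper additionally learns the image-level classifier and uses further priors to regularize the maps, which are omitted here for brevity.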
Subjects
Computer vision; Pixels; Annotated training data; Image information; Object categories; Saliency detection; Supervised methods; Top-down guidance; Training images; Training process; Object detection
Type
conference paper
