EPSNet: Efficient Panoptic Segmentation Network with Cross-layer Attention Fusion
Journal
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Journal Volume
12622 LNCS
Pages
689-705
Date Issued
2021
Author(s)
Abstract
Panoptic segmentation is a scene parsing task that unifies semantic segmentation and instance segmentation into a single task. However, current state-of-the-art studies pay little attention to inference time. In this work, we propose an Efficient Panoptic Segmentation Network (EPSNet) to tackle panoptic segmentation with fast inference speed. EPSNet generates masks as simple linear combinations of prototype masks and mask coefficients. The lightweight network branches for instance segmentation and semantic segmentation only need to predict mask coefficients and produce masks using the shared prototypes predicted by the prototype branch. Furthermore, to enhance the quality of the shared prototypes, we adopt a cross-layer attention fusion module, which aggregates multi-scale features with an attention mechanism that helps them capture long-range dependencies between each other. To validate the proposed work, we conduct extensive experiments on the challenging COCO panoptic dataset and achieve highly promising performance with significantly faster inference speed (51 ms on GPU). © 2021, Springer Nature Switzerland AG.
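The abstract describes masks being produced as a linear combination of shared prototype masks and per-branch mask coefficients. The snippet below is a minimal sketch of that assembly step, not the authors' implementation; tensor names, shapes, and the sigmoid activation are illustrative assumptions.

```python
import torch

def assemble_masks(prototypes: torch.Tensor, coefficients: torch.Tensor) -> torch.Tensor:
    """Combine shared prototypes with predicted coefficients (sketch).

    prototypes:   (K, H, W)  K prototype masks from the prototype branch
    coefficients: (N, K)     N sets of mask coefficients (instances or classes)
    returns:      (N, H, W)  assembled masks after a sigmoid
    """
    K, H, W = prototypes.shape
    masks = coefficients @ prototypes.view(K, H * W)  # linear combination of prototypes
    return torch.sigmoid(masks).view(-1, H, W)

# Toy usage: 32 prototypes at reduced resolution, 10 predicted instances.
protos = torch.randn(32, 136, 136)
coeffs = torch.randn(10, 32)
print(assemble_masks(protos, coeffs).shape)  # torch.Size([10, 136, 136])
```

The abstract also mentions a cross-layer attention fusion module that aggregates multi-scale features so they capture long-range dependencies. The following is a generic scaled dot-product attention sketch over spatial positions, only to illustrate the idea; the module's actual structure, projections, and fusion rule in the paper may differ.

```python
import torch

def cross_layer_attention(target: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
    """Fuse a source feature map into a target feature map with spatial attention (sketch).

    target: (C, Ht, Wt) feature map providing queries
    source: (C, Hs, Ws) feature map providing keys and values
    returns: fused feature map of shape (C, Ht, Wt)
    """
    C, Ht, Wt = target.shape
    q = target.view(C, -1).t()                            # (Ht*Wt, C) queries
    k = source.view(C, -1).t()                            # (Hs*Ws, C) keys/values
    attn = torch.softmax(q @ k.t() / C ** 0.5, dim=-1)    # attention over source positions
    fused = attn @ k                                       # (Ht*Wt, C) aggregated features
    return target + fused.t().view(C, Ht, Wt)              # residual fusion into the target scale
```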
Event(s)
15th Asian Conference on Computer Vision, ACCV 2020
Subjects
Computer vision; Semantics; Attention mechanisms; Fast inference; Fusion modules; Linear combinations; Long-range dependencies; Multi-scale features; Semantic segmentation; State of the art; Network layers
Type
conference paper