Attention-guided Unified Network for Panoptic Segmentation
Li, Yanwei1,5; Chen, Xinze4; Zhu, Zheng1,5; Xie, Lingxi2,3; Huang, Guan4; Du, Dalong4; Wang, Xingang1
2019
Conference dates: June 16–20, 2019
Conference venue: Long Beach, USA
Abstract (English)

This paper studies panoptic segmentation, a recently proposed task that segments foreground (FG) objects at the instance level as well as background (BG) contents at the semantic level. Existing methods mostly dealt with these two problems separately, but in this paper we reveal the underlying relationship between them: in particular, FG objects provide complementary cues that assist BG understanding. Our approach, named the Attention-guided Unified Network (AUNet), is a unified framework with two branches for FG and BG segmentation simultaneously. Two sources of attention are added to the BG branch, namely the RPN and the FG segmentation mask, which provide object-level and pixel-level attention, respectively. Our approach generalizes to different backbones with consistent accuracy gains in both FG and BG segmentation, and also sets new state-of-the-art results on both the MS-COCO (46.5% PQ) and Cityscapes (59.0% PQ) benchmarks.
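The pixel-level attention described in the abstract (the FG segmentation mask re-weighting BG-branch features) can be sketched roughly as follows. This is a minimal illustrative NumPy sketch under assumed tensor shapes; the function name `pixel_level_attention` and the exact gating form are assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    """Elementwise logistic function, mapping logits to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def pixel_level_attention(bg_features, fg_mask_logits):
    """Gate background-branch features with a foreground mask.

    bg_features:    (C, H, W) feature map from the BG (semantic) branch.
    fg_mask_logits: (H, W) logits from the FG (instance) mask head.
    Returns BG features re-weighted per pixel by the FG mask attention.
    """
    attention = sigmoid(fg_mask_logits)           # per-pixel weights in (0, 1)
    return bg_features * attention[None, :, :]    # broadcast over channels

# Toy usage: a 4-channel 8x8 feature map gated by random mask logits.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 8))
logits = rng.standard_normal((8, 8))
out = pixel_level_attention(feats, logits)
```

The object-level (RPN-guided) attention would act analogously but at the proposal level rather than per pixel.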

Proceedings publisher: IEEE
Language: English
Content type: Conference paper
Source URL: http://ir.ia.ac.cn/handle/173211/39169
Collection: Research Center for Precision Sensing and Control
Corresponding author: Wang, Xingang
Author affiliations:
1.Institute of Automation, Chinese Academy of Sciences
2.Noah’s Ark Lab, Huawei Inc
3.Johns Hopkins University
4.Horizon Robotics
5.University of Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Li, Yanwei, Chen, Xinze, Zhu, Zheng, et al. Attention-guided Unified Network for Panoptic Segmentation[C]. In: Long Beach, USA, June 16–20, 2019.
