Attention-Guided Network for Semantic Video Segmentation
Li, Jiangyun (1,4); Zhao, Yikai (1,4); Fu, Jun (2); Wu, Jiajia (3); Liu, Jing (2)
Journal: IEEE ACCESS
Year: 2019
Volume: 7, Pages: 140680-140689
Keywords: Semantics; Image segmentation; Feature extraction; Active appearance model; Optical imaging; Context modeling; Task analysis; Semantic video segmentation; attention; convolutional neural networks
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2943365
Corresponding Author: Li, Jiangyun (leejy@ustb.edu.cn)
Abstract: Deep convolutional neural network (CNN) models have achieved remarkable success in semantic image segmentation. However, most segmentation models are built on classification networks, which tend to learn image-level features and lose abundant spatial information through repeated pooling and downsampling operations; moreover, CNN-based methods are not robust to input variations. Directly applying existing segmentation methods to semantic video segmentation therefore yields predictions that are spatially inconsecutive within one instance and temporally inconsistent for the same objects across adjacent frames. To tackle this challenge, we propose an Attention-Guided Network (AGNet) that adaptively strengthens inter-frame and intra-frame features for more precise segmentation predictions. Specifically, we append an adjacent attention module (AAM) and a spatial attention module (SAM) on top of a dilated fully convolutional network (FCN); these modules model feature correlations in the temporal and spatial dimensions, respectively. The AAM selectively enhances the inter-frame features of the same objects across adjacent frames for temporally consistent predictions, while the SAM selectively aggregates the intra-frame features within one instance for spatially consecutive predictions. Finally, we sum the outputs of the two attention modules to further improve the feature representations, yielding more precise segmentation predictions across the temporal and spatial dimensions simultaneously. Extensive experiments demonstrate the effectiveness of the proposed method, which obtains a state-of-the-art mean intersection over union (mIoU) of 75.22 on the CamVid dataset.
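The abstract describes the architecture only at a block level: a SAM and an AAM sit on top of a dilated-FCN backbone, and their outputs are summed before prediction. The paper's implementation is not reproduced in this record; the sketch below is a minimal PyTorch illustration assuming both modules follow the familiar query-key-value attention pattern. All class names, the channel-reduction factor, the learnable residual weight, and the head layout are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionModule(nn.Module):
    """Sketch of a SAM: self-attention across all spatial positions of one
    frame, so each pixel can aggregate intra-frame features of its instance."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)  # assumed reduction
        self.key   = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # assumed learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)       # (B, HW, C/8)
        k = self.key(x).flatten(2)                         # (B, C/8, HW)
        attn = F.softmax(q @ k, dim=-1)                    # (B, HW, HW) position affinities
        v = self.value(x).flatten(2)                       # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)  # weighted sum over positions
        return self.gamma * out + x

class AdjacentAttentionModule(nn.Module):
    """Sketch of an AAM: cross-frame attention with queries from the current
    frame and keys/values from the adjacent (previous) frame, so features of
    the same object are enhanced across frames."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key   = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x_cur, x_prev):
        b, c, h, w = x_cur.shape
        q = self.query(x_cur).flatten(2).transpose(1, 2)   # (B, HW, C/8)
        k = self.key(x_prev).flatten(2)                    # (B, C/8, HW)
        attn = F.softmax(q @ k, dim=-1)                    # cross-frame affinities
        v = self.value(x_prev).flatten(2)                  # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x_cur

class AGNetHead(nn.Module):
    """Sketch of the fusion the abstract describes: sum the SAM and AAM
    outputs, then classify. The dilated-FCN backbone that would produce the
    feature maps is omitted."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.sam = SpatialAttentionModule(channels)
        self.aam = AdjacentAttentionModule(channels)
        self.classifier = nn.Conv2d(channels, num_classes, 1)

    def forward(self, feat_cur, feat_prev):
        fused = self.sam(feat_cur) + self.aam(feat_cur, feat_prev)
        return self.classifier(fused)

# Usage with dummy backbone features (11 classes is the common CamVid setting):
head = AGNetHead(channels=512, num_classes=11)
feat_prev = torch.randn(1, 512, 45, 60)
feat_cur  = torch.randn(1, 512, 45, 60)
logits = head(feat_cur, feat_prev)  # (1, 11, 45, 60); upsample to input size for the final mask
```

Initializing gamma to zero makes each attention branch start as an identity mapping, a common stabilization trick in attention-augmented segmentation heads; whether AGNet does the same is an assumption here.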
Funding Projects: National Natural Science Foundation of China [61671054]; Beijing Natural Science Foundation [4182038]
WOS Keywords: DEEP; DECODER
WOS Research Areas: Computer Science; Engineering; Telecommunications
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Record Number: WOS:000497156000044
Funding Organizations: National Natural Science Foundation of China; Beijing Natural Science Foundation
Content Type: Journal Article
Source URL: http://ir.ia.ac.cn/handle/173211/29337
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Image and Video Analysis Group
Author Affiliations:
1. Minist Educ, Key Lab Knowledge Automat Ind Proc, Beijing 100083, Peoples R China
2. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
3. Beijing Technol & Business Univ, Sch Comp & Informat Engn, Beijing 102488, Peoples R China
4. Univ Sci & Technol Beijing, Sch Automat & Elect Engn, Beijing 100083, Peoples R China
Recommended Citation:
GB/T 7714: Li, Jiangyun, Zhao, Yikai, Fu, Jun, et al. Attention-Guided Network for Semantic Video Segmentation[J]. IEEE ACCESS, 2019, 7: 140680-140689.
APA: Li, Jiangyun, Zhao, Yikai, Fu, Jun, Wu, Jiajia, & Liu, Jing. (2019). Attention-Guided Network for Semantic Video Segmentation. IEEE ACCESS, 7, 140680-140689.
MLA: Li, Jiangyun, et al. "Attention-Guided Network for Semantic Video Segmentation". IEEE ACCESS 7 (2019): 140680-140689.