Retrieval-enhanced adversarial training with dynamic memory-augmented attention for image paragraph captioning
Xu, Chunpu1; Yang, Min1; Ao, Xiang2; Shen, Ying3; Xu, Ruifeng4; Tian, Jinwen5
Journal: KNOWLEDGE-BASED SYSTEMS
Publication Date: 2021-02-28
Volume: 214; Pages: 10
Keywords: Image paragraph captioning; Key-value memory network; Adversarial training
ISSN: 0950-7051
DOI: 10.1016/j.knosys.2020.106730
Abstract: Existing image paragraph captioning methods generate long paragraph captions solely from input images, relying on insufficient information. In this paper, we propose RAMP, a retrieval-enhanced adversarial training method with dynamic memory-augmented attention for image paragraph captioning, which makes full use of the R-best retrieved candidate captions to enhance image paragraph captioning via adversarial training. Concretely, RAMP treats the retrieved captions as reference captions to augment the discriminator during adversarial training, encouraging the image captioning model (generator) to incorporate informative content from the retrieved captions into the generated caption. In addition, a retrieval-enhanced dynamic memory-augmented attention network is devised to keep track of the coverage information and attention history along with the update chain of the decoder state, thereby avoiding repetitive or incomplete image descriptions. Finally, a copying mechanism is applied to select words from the retrieved candidate captions and place them at the proper positions of the target caption, so as to improve the fluency and informativeness of the generated caption. Extensive experiments on a benchmark dataset (i.e., Stanford) demonstrate that the proposed RAMP model significantly outperforms the state-of-the-art methods across multiple evaluation metrics. For reproducibility, we submit the code and data at https://github.com/anonymous-caption/RAMP.
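Illustrative sketch: to make the copying mechanism mentioned in the abstract concrete, the following is a minimal pointer-generator-style sketch that mixes the decoder's vocabulary distribution with a copy distribution over tokens from the retrieved candidate captions. This is not the authors' implementation; all function and variable names, tensor shapes, and the fixed gate value are assumptions made for illustration only.

import torch
import torch.nn.functional as F

def copy_augmented_distribution(gen_logits, copy_attn_scores, retrieved_token_ids):
    # gen_logits:          (batch, vocab)       decoder logits over the vocabulary
    # copy_attn_scores:    (batch, n_retrieved) attention scores over retrieved-caption tokens
    # retrieved_token_ids: (batch, n_retrieved) vocabulary ids of those tokens
    p_gen = F.softmax(gen_logits, dim=-1)            # generation distribution
    p_attn = F.softmax(copy_attn_scores, dim=-1)     # copy attention over retrieved tokens
    p_copy = torch.zeros_like(p_gen)
    # Project the copy attention onto the vocabulary positions of the retrieved tokens.
    p_copy.scatter_add_(1, retrieved_token_ids, p_attn)
    # A learned gate would normally balance generating vs. copying;
    # a fixed value stands in here purely for illustration (hypothetical).
    gate = 0.3
    return (1.0 - gate) * p_gen + gate * p_copy

# Toy usage: batch of 2, vocabulary of 10, 4 retrieved tokens per example.
logits = torch.randn(2, 10)
attn = torch.randn(2, 4)
ids = torch.randint(0, 10, (2, 4))
probs = copy_augmented_distribution(logits, attn, ids)
print(probs.sum(dim=-1))  # each row sums to 1.0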
Funding: National Natural Science Foundation of China [61906185]; Natural Science Foundation of Guangdong Province of China [2019A1515011705]; Shenzhen Science and Technology Innovation Program, China [KQTD20190929172835662]; Youth Innovation Promotion Association of CAS, China; Shenzhen Basic Research Foundation, China [JCYJ20200109113441941]
WOS Research Area: Computer Science
Language: English
Publisher: ELSEVIER
WOS Accession Number: WOS:000618605200010
Content Type: Journal article
Source URL: http://119.78.100.204/handle/2XEOYT63/16170
Collection: Institute of Computing Technology, Chinese Academy of Sciences
Corresponding Author: Yang, Min
Author Affiliations:
1. Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen Key Lab High Performance Data Min, Shenzhen, Guangdong, Peoples R China
2. Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China
3. Sun Yat Sen Univ, Sch Intelligent Engn, Guangzhou, Guangdong, Peoples R China
4. Harbin Inst Technol, Shenzhen, Peoples R China
5. Huazhong Univ Sci & Technol, Wuhan, Hubei, Peoples R China
Recommended Citation
GB/T 7714
Xu, Chunpu, Yang, Min, Ao, Xiang, et al. Retrieval-enhanced adversarial training with dynamic memory-augmented attention for image paragraph captioning[J]. KNOWLEDGE-BASED SYSTEMS, 2021, 214: 10.
APA: Xu, Chunpu, Yang, Min, Ao, Xiang, Shen, Ying, Xu, Ruifeng, & Tian, Jinwen. (2021). Retrieval-enhanced adversarial training with dynamic memory-augmented attention for image paragraph captioning. KNOWLEDGE-BASED SYSTEMS, 214, 10.
MLA: Xu, Chunpu, et al. "Retrieval-enhanced adversarial training with dynamic memory-augmented attention for image paragraph captioning". KNOWLEDGE-BASED SYSTEMS 214 (2021): 10.