Cross-modal subspace learning for fine-grained sketch-based image retrieval
Xu, Peng1; Yin, Qiyue2; Huang, Yongye1; Song, Yi-Zhe3; Ma, Zhanyu1; Wang, Liang2; Xiang, Tao3; Kleijn, W. Bastiaan4; Guo, Jun1
Journal: NEUROCOMPUTING
Publication Date: 2018-02-22
Volume: 278, Pages: 75-86
Keywords: Cross-modal Subspace Learning; Sketch-based Image Retrieval; Fine-grained
DOI: 10.1016/j.neucom.2017.05.099
Document Type: Article
Abstract: Sketch-based image retrieval (SBIR) is challenging due to the inherent domain gap between sketch and photo. Compared with the pixel-perfect depictions in photos, sketches are highly abstract, iconic renderings of the real world. Therefore, matching sketches and photos directly using low-level visual cues is insufficient, since a common low-level subspace that traverses semantically across the two modalities is non-trivial to establish. Most existing SBIR studies do not directly tackle this cross-modal problem. This naturally motivates us to explore the effectiveness of cross-modal retrieval methods in SBIR, which have been applied successfully to image-text matching. In this paper, we introduce and compare a series of state-of-the-art cross-modal subspace learning methods and benchmark them on two recently released fine-grained SBIR datasets. Through thorough examination of the experimental results, we demonstrate that subspace learning can effectively model the sketch-photo domain gap. In addition, we draw a few key insights to drive future research. (C) 2017 Elsevier B.V. All rights reserved.
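The abstract surveys cross-modal subspace learning methods for bridging the sketch-photo domain gap (the WOS keywords point to partial-least-squares-style techniques). As a purely illustrative sketch, not the paper's implementation, the snippet below shows how one classical subspace method, Canonical Correlation Analysis, could project sketch and photo features into a shared space and rank photos for a query sketch. The feature dimensions, component count, and random placeholder data are assumptions for demonstration only.

```python
# Minimal sketch (not the paper's pipeline): learn a common subspace for
# sketch and photo features with CCA, then retrieve photos by cosine similarity.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Placeholder features; in practice these would be descriptors extracted from
# paired sketches and photos in a fine-grained SBIR dataset.
n_pairs, d_sketch, d_photo = 200, 128, 256
X_sketch = rng.normal(size=(n_pairs, d_sketch))
X_photo = rng.normal(size=(n_pairs, d_photo))

# Learn a low-dimensional common subspace from paired training data.
cca = CCA(n_components=16, max_iter=1000)
cca.fit(X_sketch, X_photo)

# Project both modalities into the shared subspace.
Z_sketch, Z_photo = cca.transform(X_sketch, X_photo)

def cosine_rank(query, gallery):
    """Return gallery indices sorted by cosine similarity to the query."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argsort(-(g @ q))

# Retrieval example: rank all photos against the first sketch.
ranking = cosine_rank(Z_sketch[0], Z_photo)
print("Top-5 retrieved photo indices for sketch 0:", ranking[:5])
```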
WOS Keywords: PARTIAL-LEAST-SQUARES; FACE RECOGNITION; TAGS
WOS Research Area: Computer Science
Language: English
WOS Accession Number: WOS:000423965000009
Funding: National Natural Science Foundation of China (NSFC) (61773071); Beijing Natural Science Foundation (BNSF) grant (4162044); Beijing Nova Program grant (Z171100001117049); Open Project Program of the National Laboratory of Pattern Recognition grant (201600018); Chinese 111 Program of Advanced Intelligence and Network Service grant (B08004); BUPT-SICE Excellent Graduate Student Innovation Foundation
Content Type: Journal Article
Source URL: http://ir.ia.ac.cn/handle/173211/21943
Collection: Institute of Automation, Center for Research on Intelligent Perception and Computing
Affiliations:
1. Beijing Univ Posts & Telecommun, Pattern Recognit & Intelligent Syst Lab, Beijing, Peoples R China
2. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing, Peoples R China
3. Queen Mary Univ London, Sch Elect Engn & Comp Sci, SketchX Lab, London, England
4. Victoria Univ Wellington, Commun & Signal Proc Grp, Wellington, New Zealand
Recommended Citation:
GB/T 7714: Xu, Peng, Yin, Qiyue, Huang, Yongye, et al. Cross-modal subspace learning for fine-grained sketch-based image retrieval[J]. NEUROCOMPUTING, 2018, 278: 75-86.
APA: Xu, Peng, Yin, Qiyue, Huang, Yongye, Song, Yi-Zhe, Ma, Zhanyu, ... & Guo, Jun. (2018). Cross-modal subspace learning for fine-grained sketch-based image retrieval. NEUROCOMPUTING, 278, 75-86.
MLA: Xu, Peng, et al. "Cross-modal subspace learning for fine-grained sketch-based image retrieval". NEUROCOMPUTING 278 (2018): 75-86.