Cascaded Decoding and Multi-Stage Inference for Spatio-Temporal Video Grounding
Li Yang (2,3); Peixuan Wu (2,3); Chunfeng Yuan (3); Bing Li (3); Weiming Hu (1,2,3)
Conference Date: 2022-10
Conference Venue: Lisbon, Portugal
Abstract

Human-centric spatio-temporal video grounding (HC-STVG) is a challenging task that aims to localize the spatio-temporal tube of a target person in a video from a natural language description. In this report, we present our approach to this task. Specifically, building on the TubeDETR framework, we propose two cascaded decoders that decouple spatial and temporal grounding, allowing the model to learn features tailored to each of the two grounding subtasks. We also devise a multi-stage inference strategy that reasons about the target in a coarse-to-fine manner and thereby produces more precise grounding results. To further improve accuracy, we propose a model ensemble strategy that combines the outputs of models that perform better on spatial or temporal grounding, respectively. We validated the effectiveness of the proposed method on the HC-STVG 2.0 dataset and won second place in the HC-STVG track of the 4th Person in Context (PIC) workshop at ACM MM 2022.
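
To make the cascaded-decoder idea concrete, the minimal PyTorch sketch below shows one possible wiring: a temporal decoder predicts per-frame start/end logits for the tube, and a spatial decoder, cascaded on the temporal decoder's output, predicts a box per frame. The module names, query layout, head designs, and the exact conditioning between the two decoders are illustrative assumptions for this sketch, not the authors' implementation.

import torch
import torch.nn as nn


class CascadedGroundingDecoders(nn.Module):
    """Toy cascade: a temporal decoder followed by a spatial decoder (illustrative sketch)."""

    def __init__(self, d_model: int = 256, nhead: int = 8, num_layers: int = 2):
        super().__init__()
        temporal_layer = nn.TransformerDecoderLayer(
            d_model, nhead, dim_feedforward=1024, batch_first=True
        )
        spatial_layer = nn.TransformerDecoderLayer(
            d_model, nhead, dim_feedforward=1024, batch_first=True
        )
        # Stage 1: temporal decoder predicts per-frame start/end logits
        # that define the temporal segment of the tube.
        self.temporal_decoder = nn.TransformerDecoder(temporal_layer, num_layers)
        self.temporal_head = nn.Linear(d_model, 2)
        # Stage 2: spatial decoder, cascaded on the temporal decoder's output,
        # predicts one normalized box (cx, cy, w, h) per frame.
        self.spatial_decoder = nn.TransformerDecoder(spatial_layer, num_layers)
        self.box_head = nn.Linear(d_model, 4)

    def forward(self, frame_queries: torch.Tensor, memory: torch.Tensor):
        # frame_queries: (B, T, d_model), one query per sampled frame.
        # memory: (B, S, d_model), fused video-text features from an encoder
        # such as TubeDETR's space-time/text encoder (assumed, not shown here).
        temporal_feats = self.temporal_decoder(frame_queries, memory)
        start_end_logits = self.temporal_head(temporal_feats)   # (B, T, 2)
        # The cascade: spatial decoding starts from temporally grounded
        # features, decoupling the two subtasks while sharing context.
        spatial_feats = self.spatial_decoder(temporal_feats, memory)
        boxes = self.box_head(spatial_feats).sigmoid()           # (B, T, 4)
        return start_end_logits, boxes


if __name__ == "__main__":
    B, T, S, D = 2, 32, 100, 256
    model = CascadedGroundingDecoders(d_model=D)
    start_end, boxes = model(torch.randn(B, T, D), torch.randn(B, S, D))
    print(start_end.shape, boxes.shape)  # torch.Size([2, 32, 2]) torch.Size([2, 32, 4])

In this sketch the coarse-to-fine, multi-stage inference described in the abstract would correspond to running the model more than once, with later passes restricted to the temporal segment estimated earlier; that loop is omitted here.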

Content Type: Conference Paper
Source URL: http://ir.ia.ac.cn/handle/173211/52323
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Video Content Security Team
Corresponding Author: Chunfeng Yuan
Author Affiliations:
1. CAS Center for Excellence in Brain Science and Intelligence Technology
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Li Yang, Peixuan Wu, Chunfeng Yuan, et al. Cascaded Decoding and Multi-Stage Inference for Spatio-Temporal Video Grounding[C]. Lisbon, Portugal, 2022-10.