Asynchronous Trajectory Matching-Based Multimodal Maritime Data Fusion for Vessel Traffic Surveillance in Inland Waterways
Guo, Yu1,2; Liu, Ryan Wen1,2; Qu, Jingxiang1,2; Lu, Yuxu1,2; Zhu, Fenghua3; Lv, Yisheng3
Journal: IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
Date: 2023-06-22
Pages: 14
Keywords: Inland waterways; vessel traffic surveillance; deep neural network; anti-occlusion tracking; data fusion
ISSN: 1524-9050
DOI: 10.1109/TITS.2023.3285415
Corresponding authors: Liu, Ryan Wen (wenliu@whut.edu.cn); Zhu, Fenghua (fenghua.zhu@ia.ac.cn)
Abstract: The automatic identification system (AIS) and video cameras have been widely exploited for vessel traffic surveillance in inland waterways. The AIS data could provide vessel identity and dynamic information on vessel position and movements. In contrast, the video data could describe the visual appearances of moving vessels without knowing the information on identity, position, movements, etc. To further improve vessel traffic surveillance, it becomes necessary to fuse the AIS and video data to simultaneously capture the visual features, identity, and dynamic information for the vessels of interest. However, the performance of AIS and video data fusion is susceptible to issues such as spatial differences between the data sources, asynchronous message transmission, visual object occlusion, etc. In this work, we propose a deep learning-based simple online and real-time vessel data fusion method (termed DeepSORVF). We first extract the AIS- and video-based vessel trajectories, and then propose an asynchronous trajectory matching method to fuse the AIS-based vessel information with the corresponding visual targets. In addition, by combining the AIS- and video-based movement features, we also present a prior knowledge-driven anti-occlusion method to yield accurate and robust vessel tracking results under occlusion conditions. To validate the efficacy of our DeepSORVF, we have also constructed a new benchmark dataset (termed FVessel) for vessel detection, tracking, and data fusion. It consists of many videos and the corresponding AIS data collected in various weather conditions and locations. The experimental results have demonstrated that our method is capable of guaranteeing highly reliable data fusion and anti-occlusion vessel tracking. The DeepSORVF code and FVessel dataset are publicly available at https://github.com/gy65896/DeepSORVF and https://github.com/gy65896/FVessel, respectively.
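The core idea of asynchronous trajectory matching described in the abstract — resampling the irregularly timed AIS reports at the video timestamps, then assigning AIS tracks to visual tracks by minimum total trajectory distance — can be illustrated with a toy sketch. This is not the authors' DeepSORVF implementation: it assumes AIS positions have already been projected into image coordinates, and it uses plain linear interpolation, mean Euclidean distance, and a brute-force assignment in place of whatever metric and matcher the paper actually employs.

```python
import bisect
from itertools import permutations

def interpolate(traj, t):
    """Linearly interpolate a time-sorted trajectory [(t, x, y), ...] at time t,
    clamping to the endpoints outside the observed time range."""
    times = [p[0] for p in traj]
    i = bisect.bisect_left(times, t)
    if i == 0:
        return traj[0][1:]
    if i == len(traj):
        return traj[-1][1:]
    (t0, x0, y0), (t1, x1, y1) = traj[i - 1], traj[i]
    a = (t - t0) / (t1 - t0)
    return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))

def traj_distance(ais, vis):
    """Mean Euclidean distance between an AIS trajectory (assumed already
    projected into image coordinates) and a visual trajectory, sampled at the
    visual timestamps -- this resampling is what absorbs the asynchrony of
    the AIS reports."""
    total = 0.0
    for t, x, y in vis:
        ax, ay = interpolate(ais, t)
        total += ((ax - x) ** 2 + (ay - y) ** 2) ** 0.5
    return total / len(vis)

def match(ais_trajs, vis_trajs):
    """Minimum-cost one-to-one assignment of AIS tracks to visual tracks.
    Brute force over permutations is fine for a handful of vessels; a real
    system would use a polynomial-time assignment solver. Requires
    len(ais_trajs) <= len(vis_trajs)."""
    cost = [[traj_distance(a, v) for v in vis_trajs] for a in ais_trajs]
    best, best_perm = float("inf"), None
    for perm in permutations(range(len(vis_trajs)), len(ais_trajs)):
        c = sum(cost[i][j] for i, j in enumerate(perm))
        if c < best:
            best, best_perm = c, perm
    return dict(enumerate(best_perm))
```

For example, an AIS track moving along the image line y = 0 is paired with the visual track that hugs y = 0 rather than one near y = 100, even though the AIS and video samples share no common timestamps.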
Funding projects: National Key R&D Program of China [2022YFB4300300]; National Natural Science Foundation of China [52271365]; Fundamental Research Funds for the Central Universities [2023-vb-045]
WOS keywords: SHIP DETECTION; OBJECT DETECTION; TRACKING; TIME
WOS research areas: Engineering; Transportation
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS accession number: WOS:001021944800001
Funding agencies: National Key R&D Program of China; National Natural Science Foundation of China; Fundamental Research Funds for the Central Universities
Content type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/53756
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Corresponding authors: Liu, Ryan Wen; Zhu, Fenghua
Author affiliations:
1. State Key Lab Maritime Technol & Safety, Wuhan 430063, Peoples R China
2. Wuhan Univ Technol, Sch Nav, Wuhan 430063, Peoples R China
3. Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence S, Beijing 100190, Peoples R China
Recommended citation:
GB/T 7714: Guo, Yu, Liu, Ryan Wen, Qu, Jingxiang, et al. Asynchronous Trajectory Matching-Based Multimodal Maritime Data Fusion for Vessel Traffic Surveillance in Inland Waterways[J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023: 14.
APA: Guo, Yu, Liu, Ryan Wen, Qu, Jingxiang, Lu, Yuxu, Zhu, Fenghua, & Lv, Yisheng. (2023). Asynchronous Trajectory Matching-Based Multimodal Maritime Data Fusion for Vessel Traffic Surveillance in Inland Waterways. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 14.
MLA: Guo, Yu, et al. "Asynchronous Trajectory Matching-Based Multimodal Maritime Data Fusion for Vessel Traffic Surveillance in Inland Waterways." IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2023): 14.
 
