Title | A Fast and High-Performance Object Proposal Method for Vision Sensors: Application to Object Detection
Authors | Jiang, Chao (1,2,3); Wang, Zhiling (1,2,3); Liang, Huawei (1,2,3); Tan, Shuhang (1,2,3)
Journal | IEEE SENSORS JOURNAL
Publication Date | 2022-05-15
Volume | 22
Keywords | Proposals; Computational efficiency; Object detection; Location awareness; Merging; Feature extraction; Visualization; Object proposals; object detection; enhanced frequency feature; binarization; lateral inhibition; autonomous vehicle
ISSN | 1530-437X
DOI | 10.1109/JSEN.2022.3155232 |
Corresponding Author | Liang, Huawei
Abstract | Use of the object proposal method as a preprocessing step for object detection with vision sensors has improved computational efficiency in recent years. Good object proposal methods require high object detection recall, low computational cost, good localization accuracy, and repeatability. However, existing methods cannot always achieve a good balance of performance. To solve this problem, we propose a fast and high-performance object proposal algorithm. First, we propose a construction method for enhanced frequency features that are combined with a linear classifier to learn and generate a set of proposal boxes. Second, we propose a strategy of binarizing the frequency features and classifiers to accelerate the calculation. Last, we propose a merging strategy to improve the localization quality of the proposal boxes. Empirically, on the VOC2007 and MSCOCO2017 datasets, using an intersection over union (IoU) threshold of 0.5 and 10^4 proposals, our method achieves 99.3% object detection recall, 81.1% mean average best overlap, and 80% mean repeatability with an average time of 0.0014 seconds per image. The average time is three times faster than the current fastest method, and the mean repeatability is 11% higher than that of the region proposal network (RPN) method. We applied our method to the object detection task of autonomous vehicles, and in an experiment on the Oxford RobotCar dataset, we achieved 95.6% detection precision and 91.2% detection recall. This work could provide a new way to improve real-time performance and detection accuracy in the object detection of visual sensors.
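For reference, the intersection over union (IoU) criterion behind the recall figures quoted in the abstract can be sketched as follows. This is a minimal illustration only; the function name and the (x1, y1, x2, y2) box convention are assumptions, not code from the paper:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Under the evaluation described in the abstract, a proposal counts as
# covering a ground-truth object when IoU >= 0.5.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

Detection recall at a given IoU threshold is then the fraction of ground-truth boxes matched by at least one proposal meeting that threshold.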
Funding Projects | National Key Research and Development Program of China [2020AAA0108103]; Key Supported Project in the 13th Five-Year Plan of Hefei Institutes of Physical Science, Chinese Academy of Sciences [KP-2019-16]; Key Science and Technology Project of Anhui [202103a05020007]; Technological Innovation Project for New Energy and Intelligent Networked Automobile Industry of Anhui Province
WOS Keywords | RECOGNITION; USERS; LIDAR
WOS Research Areas | Engineering; Instruments & Instrumentation; Physics
Language | English
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Accession Number | WOS:000795148500044
Funding Organizations | National Key Research and Development Program of China; Key Supported Project in the 13th Five-Year Plan of Hefei Institutes of Physical Science, Chinese Academy of Sciences; Key Science and Technology Project of Anhui; Technological Innovation Project for New Energy and Intelligent Networked Automobile Industry of Anhui Province
Content Type | Journal Article
Source URL | http://ir.hfcas.ac.cn:8080/handle/334002/130937
Collection | Hefei Institutes of Physical Science, Chinese Academy of Sciences
Author Affiliations | 1. Chinese Acad Sci, Hefei Inst Phys Sci, Hefei 230031, Peoples R China; 2. Chinese Acad Sci, Innovat Res Inst Robot & Intelligent Mfg, Hefei 230031, Peoples R China; 3. Anhui Engn Lab Intelligent Driving Technol & Appl, Hefei 230031, Peoples R China
Recommended Citation (GB/T 7714) | Jiang, Chao, Wang, Zhiling, Liang, Huawei, et al. A Fast and High-Performance Object Proposal Method for Vision Sensors: Application to Object Detection[J]. IEEE SENSORS JOURNAL, 2022, 22.
APA | Jiang, Chao, Wang, Zhiling, Liang, Huawei, & Tan, Shuhang. (2022). A Fast and High-Performance Object Proposal Method for Vision Sensors: Application to Object Detection. IEEE SENSORS JOURNAL, 22.
MLA | Jiang, Chao, et al. "A Fast and High-Performance Object Proposal Method for Vision Sensors: Application to Object Detection". IEEE SENSORS JOURNAL 22 (2022).