A Single-Shot Oriented Scene Text Detector With Learnable Anchors
Sheng, Fenfen2,3; Chen, Zhineng3; Mei, Tao1; Xu, Bo3
2019-07
Conference dates: 2019-7-8 ~ 2019-7-12
Conference location: Shanghai, China
Abstract

Current regression-based text detectors mainly use fixed anchors, whose scales and positions cannot be changed during network training. As scene texts tend to have large variations in orientation, aspect ratio and size, fixed anchors are insufficient to cover all varieties. This paper proposes a novel text detector with learnable anchors, named LATD. LATD contains two prediction branches. One refines the scales and locations of anchors according to the characteristics of scene texts. The other receives the refined anchors as defaults and regresses their offsets to text regions. These two branches are optimized jointly without sacrificing much speed. Meanwhile, we address the class-imbalance issue between texts and backgrounds by replacing the softmax loss with focal loss. Extensive experiments on both oriented and horizontal benchmarks demonstrate the effectiveness of LATD, with new state-of-the-art performance. Qualitative results confirm that LATD provides more accurate locations and a lower rate of missed detections.
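The abstract states that the classification branch replaces the softmax loss with focal loss to counter the text-vs-background class imbalance. Below is a minimal, illustrative sketch of that focal-loss idea (in the form introduced by Lin et al., "Focal Loss for Dense Object Detection"), not the authors' implementation; the PyTorch framing, tensor layout, and the alpha/gamma values are assumptions for illustration only.

```python
# Hypothetical sketch: binary text-vs-background focal loss on raw logits.
# The choice of framework (PyTorch), alpha=0.25 and gamma=2.0 are illustrative
# defaults, not values reported in the paper.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """logits:  (N,) raw scores for the "text" class
       targets: (N,) 0/1 labels (1 = text, 0 = background)"""
    prob = torch.sigmoid(logits)
    # Standard cross-entropy, kept per-example so it can be re-weighted.
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    # Probability assigned to the true class of each example.
    p_t = prob * targets + (1.0 - prob) * (1.0 - targets)
    alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
    # (1 - p_t)^gamma down-weights easy (well-classified) examples,
    # which are dominated by background anchors.
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()
```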

Content type: Conference paper
Source URL: [http://ir.ia.ac.cn/handle/173211/39270]
Collection: Research Center for Digital Content Technology and Services_Auditory Models and Cognitive Computing
Author affiliations:
1. JD AI Research
2.University of Chinese Academy of Sciences
3.Institute of Automation, Chinese Academy of Sciences
Recommended citation
GB/T 7714
Sheng, Fenfen, Chen, Zhineng, Mei, Tao, et al. A Single-Shot Oriented Scene Text Detector With Learnable Anchors[C]. In: Shanghai, China, 2019-7-8 ~ 2019-7-12.