One In A Hundred: Selecting the Best Predicted Sequence from Numerous Candidates for Speech Recognition
Zhengkun Tian (2,3); Jiangyan Yi (2); Ye Bai (2,3); Jianhua Tao (1,2,3); Shuai Zhang (2,3); Zhengqi Wen (2)
2021-12
Conference dates: 14-17 December 2021
Conference venue: Tokyo, Japan
Abstract

The RNN-Transducer and improved attention-based encoder-decoder models are widely applied to streaming speech recognition. Compared with these two end-to-end models, the CTC model is more efficient in training and inference, but it cannot capture the linguistic dependencies between output tokens. Inspired by the success of two-pass end-to-end models, we introduce a transformer decoder and a two-stage inference method into the streaming CTC model. During inference, the CTC decoder first generates many candidates in a streaming fashion; the transformer decoder then selects the best candidate based on the corresponding acoustic encoded states. This second-stage transformer decoder can be regarded as a conditional language model. We assume that a sufficiently large and diverse set of candidates generated in the first stage can compensate the CTC model for its lack of language-modeling ability. All experiments are conducted on the Chinese Mandarin dataset AISHELL-1. The results show that our proposed model implements streaming decoding in a fast and straightforward way, achieves up to a 20% reduction in character error rate compared with the baseline CTC model, and can also perform non-streaming inference with only a small performance degradation.
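The two-stage inference the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `ctc_collapse` shows the standard CTC rule for turning a frame-level path into a candidate token sequence, and `toy_decoder_score` is a hypothetical stand-in for the second-stage transformer decoder's conditional log-likelihood (which, in the real model, would attend over the acoustic encoder states).

```python
def ctc_collapse(path, blank=0):
    # First stage: collapse a frame-level CTC path into a token sequence
    # by removing consecutive repeats, then removing blanks.
    out, prev = [], None
    for t in path:
        if t != prev and t != blank:
            out.append(t)
        prev = t
    return out

def two_stage_select(candidates, decoder_score):
    # Second stage: rescore each first-stage candidate with the
    # (conditional-language-model-like) decoder and keep the best one.
    return max(candidates, key=decoder_score)

# Hypothetical stand-in for the transformer decoder's score; the scores
# below are made up purely to make the example concrete.
def toy_decoder_score(seq):
    bigram_lp = {("ni", "hao"): -0.1, ("ni", "hao2"): -2.0}
    return sum(bigram_lp.get((a, b), -5.0) for a, b in zip(seq, seq[1:]))

# Example: three first-stage candidates; the decoder picks the most
# linguistically plausible one.
candidates = [("ni", "hao"), ("ni", "hao2"), ("li", "hao")]
best = two_stage_select(candidates, toy_decoder_score)
```

The key point is that the CTC stage only proposes sequences; the final choice is delegated to a model that does capture token dependencies.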

Language: English
Content type: Conference paper
Source URL: http://ir.ia.ac.cn/handle/173211/48606
Collection: National Laboratory of Pattern Recognition — Intelligent Interaction
Corresponding author: Jianhua Tao
Author affiliations:
1. CAS Center for Excellence in Brain Science and Intelligence Technology
2. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
3. School of Artificial Intelligence, University of Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Zhengkun Tian, Jiangyan Yi, Ye Bai, et al. One In A Hundred: Selecting the Best Predicted Sequence from Numerous Candidates for Speech Recognition[C]. Tokyo, Japan, 14-17 December 2021.
