Synchronous Transformers for end-to-end Speech Recognition
Zhengkun Tian; Jiangyan Yi; Ye Bai; Jianhua Tao; Shuai Zhang; Zhengqi Wen
2020
Conference dates: 2020.05.04-2020.05.08
Conference location: Barcelona, Spain
Abstract

For most attention-based sequence-to-sequence models, the decoder predicts the output sequence conditioned on the entire input sequence processed by the encoder. This asynchrony between encoding and decoding makes such models difficult to apply to online speech recognition. In this paper, we propose a model named the synchronous transformer to address this problem, which can predict the output sequence chunk by chunk. As soon as a fixed-length chunk of the input sequence has been processed by the encoder, the decoder begins to predict symbols immediately. During training, a forward-backward algorithm is introduced to optimize over all possible alignment paths. Our model is evaluated on the Mandarin dataset AISHELL-1. The experiments show that the synchronous transformer is able to perform encoding and decoding synchronously, and achieves a character error rate of 8.91% on the test set.
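
To make the chunk-by-chunk operation concrete, here is a minimal PyTorch sketch of the inference loop the abstract describes: the encoder processes one fixed-length chunk of acoustic frames at a time, and after each chunk the decoder immediately emits symbols until it predicts a chunk-boundary token. This is not the authors' implementation; the names ChunkEncoder and decoder_step, the EOC boundary symbol, and all sizes are assumptions for illustration.

```python
# Minimal sketch (not the authors' code) of chunk-synchronous decoding.
# Module names, the EOC boundary symbol, and hyperparameters are assumed.
import torch
import torch.nn as nn

CHUNK_FRAMES = 32    # assumed fixed chunk length, in acoustic frames
EOC = 0              # assumed id of the "end of chunk" boundary symbol
MAX_PER_CHUNK = 10   # assumed cap on symbols emitted per chunk

class ChunkEncoder(nn.Module):
    """Toy transformer encoder applied to one fixed-length chunk at a time."""
    def __init__(self, feat_dim=80, d_model=256):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, chunk):                  # chunk: (1, CHUNK_FRAMES, feat_dim)
        return self.encoder(self.proj(chunk))  # (1, CHUNK_FRAMES, d_model)

@torch.no_grad()
def synchronous_decode(encoder, decoder_step, frames):
    """Greedy chunk-by-chunk decoding.

    decoder_step(memory, prefix) -> int is an assumed callable wrapping a
    transformer decoder; frames is a (T, feat_dim) tensor of acoustic features.
    """
    hypothesis, memory_chunks = [], []
    for start in range(0, frames.size(0) - CHUNK_FRAMES + 1, CHUNK_FRAMES):
        chunk = frames[start:start + CHUNK_FRAMES].unsqueeze(0)
        memory_chunks.append(encoder(chunk))      # encode the newest chunk
        memory = torch.cat(memory_chunks, dim=1)  # all chunks seen so far
        for _ in range(MAX_PER_CHUNK):            # emit symbols for this chunk
            token = decoder_step(memory, hypothesis)
            if token == EOC:                      # boundary: wait for next chunk
                break
            hypothesis.append(token)
    return hypothesis

# Usage with a placeholder decoder that immediately signals a chunk boundary.
encoder = ChunkEncoder()
print(synchronous_decode(encoder, lambda memory, prefix: EOC, torch.randn(128, 80)))
```

Note that this greedy loop corresponds to inference only; per the abstract, training instead uses a forward-backward algorithm that sums over all possible alignment paths, i.e., over all possible placements of the boundary symbol, so the model is not committed to a single segmentation.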

Content type: Conference paper
Source URL: http://ir.ia.ac.cn/handle/173211/40666
Collection: National Laboratory of Pattern Recognition, Intelligent Interaction Group
Author affiliations: 1. University of Chinese Academy of Sciences; 2. Institute of Automation, Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Zhengkun Tian, Jiangyan Yi, Ye Bai, et al. Synchronous Transformers for end-to-end Speech Recognition[C]. Barcelona, Spain, 2020.05.04-2020.05.08.