Attention Calibration for Transformer in Neural Machine Translation
Yu Lu 2,3; Jiali Zeng 1; Jiajun Zhang 2,3; Shuangzhi Wu 1; Mu Li 1
2021-08
Conference Date: 2021-08
Conference Venue: Online
Keywords: Neural Machine Translation
Abstract

Attention mechanisms have achieved substantial improvements in neural machine translation by dynamically selecting relevant inputs for different predictions. However, recent studies have questioned the attention mechanism's capability for discovering decisive inputs. In this paper, we propose to calibrate the attention weights by introducing a mask perturbation model that automatically evaluates each input's contribution to the model outputs. We increase the attention weights assigned to indispensable tokens, whose removal leads to a dramatic performance decrease. Extensive experiments on Transformer-based translation demonstrate the effectiveness of our model. We further find that the calibrated attention weights are more uniform at lower layers, so as to gather diverse information, and more concentrated on specific inputs at higher layers. Detailed analyses also show that calibration is needed most for attention weights with high entropy, where the model is unconfident about its decision.
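The abstract's core idea, scoring each source token by the damage its removal causes and shifting attention mass toward the high-damage tokens, can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the paper's method: the interfaces (model, translate_loss), the leave-one-out loop, and the boost re-weighting are hypothetical, and the paper learns a mask perturbation model rather than enumerating masks.

```python
# Minimal sketch of the mask-perturbation idea from the abstract.
# Hypothetical interface: `model(src, tgt_in)` returns per-step logits.
# The paper learns a perturbation model; here we enumerate leave-one-out
# masks for clarity, at a cost of O(src_len) extra forward passes.
import torch
import torch.nn.functional as F

def translate_loss(model, src_ids, tgt_ids):
    """Teacher-forced cross-entropy of the target given the source."""
    logits = model(src_ids.unsqueeze(0), tgt_ids[:-1].unsqueeze(0))
    return F.cross_entropy(logits.squeeze(0), tgt_ids[1:])

@torch.no_grad()
def token_importance(model, src_ids, tgt_ids, pad_id):
    """Score each source token by the loss increase when it is masked out."""
    base = translate_loss(model, src_ids, tgt_ids)
    scores = torch.zeros(src_ids.size(0))
    for i in range(src_ids.size(0)):
        perturbed = src_ids.clone()
        perturbed[i] = pad_id                 # mask out token i
        scores[i] = translate_loss(model, perturbed, tgt_ids) - base
    return scores                             # high score = indispensable token

def calibrate_attention(attn, scores, boost=1.0):
    """Shift attention mass toward tokens whose removal hurts the model most.

    attn:   (tgt_len, src_len) attention weights, each row summing to 1
    scores: (src_len,) importance scores from token_importance
    """
    logits = attn.clamp_min(1e-9).log() + boost * scores.unsqueeze(0)
    return F.softmax(logits, dim=-1)          # rows sum to 1 again
```

Under this reading, tokens with high leave-one-out scores receive a larger share of attention after re-normalization, matching the abstract's claim that indispensable tokens should be up-weighted.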

Language: English
Document Type: Conference Paper
Source URL: http://ir.ia.ac.cn/handle/173211/51839
Collection: National Laboratory of Pattern Recognition_Natural Language Processing
Corresponding Author: Jiajun Zhang
Affiliations:
1. Tencent Cloud Xiaowei
2.School of Artificial Intelligence, University of Chinese Academy of Sciences
3.National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Yu Lu, Jiali Zeng, Jiajun Zhang, et al. Attention Calibration for Transformer in Neural Machine Translation[C]. Online, 2021-08.