Token-level Direct Preference Optimization
Zeng, Yongcheng (3); Liu, Guoqing (2); Ma, Weiyu (3); Yang, Ning (3); Zhang, Haifeng (3); Wang, Jun (1)
2024
Conference dates: 2024/7/21-27
Conference venue: Vienna, Austria
Abstract

Fine-tuning pre-trained Large Language Models (LLMs) is essential to align them with human values and intentions. This process often utilizes methods like pairwise comparisons and KL divergence against a reference LLM, focusing on the evaluation of full answers generated by the models. However, these responses are generated at the token level, in a sequential, auto-regressive fashion. In this paper, we introduce Token-level Direct Preference Optimization (TDPO), a novel approach that aligns LLMs with human preferences by optimizing the policy at the token level. Unlike previous methods, which face challenges in divergence efficiency, TDPO incorporates a forward KL divergence constraint for each token, improving alignment and diversity. Utilizing the Bradley-Terry model for a token-based reward system, TDPO enhances the regulation of KL divergence while preserving simplicity, without the need for explicit reward modeling. Experimental results across various text tasks demonstrate TDPO's superior performance in balancing alignment with generation diversity. Notably, fine-tuning with TDPO strikes a better balance than DPO on the controlled sentiment generation and single-turn dialogue datasets, and significantly improves the quality of generated responses compared to both DPO and PPO-based RLHF methods. Our code is open-sourced at https://github.com/Vance0124/Token-level-Direct-Preference-Optimization.
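To make the token-level objective concrete, the following is a minimal PyTorch sketch of a TDPO-style loss assembled from the quantities the abstract names: the DPO-style Bradley-Terry margin of policy-to-reference log-probability ratios, plus a margin of sequence-summed token-level forward KL divergences against the reference model. All function and argument names here are illustrative assumptions, and beta is a placeholder hyperparameter; the authors' exact formulation and official implementation are in the paper and the linked repository.

    import torch
    import torch.nn.functional as F

    def seq_forward_kl(ref_logits, policy_logits, response_mask):
        # Token-level forward KL D_KL(pi_ref || pi_theta) at each position,
        # summed over the response tokens of each sequence.
        # ref_logits, policy_logits: (batch, seq_len, vocab)
        # response_mask: (batch, seq_len), 1 on response tokens, 0 elsewhere.
        ref_logp = F.log_softmax(ref_logits, dim=-1)
        pol_logp = F.log_softmax(policy_logits, dim=-1)
        token_kl = (ref_logp.exp() * (ref_logp - pol_logp)).sum(dim=-1)
        return (token_kl * response_mask).sum(dim=-1)  # (batch,)

    def tdpo_style_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l,
                        kl_w, kl_l, beta=0.1):
        # pi_logp_* / ref_logp_*: summed log-probs of the chosen (w) and
        # rejected (l) responses under the policy and reference, (batch,).
        # kl_w / kl_l: outputs of seq_forward_kl for each response, (batch,).
        # DPO-style Bradley-Terry margin of implicit rewards.
        u = beta * ((pi_logp_w - ref_logp_w) - (pi_logp_l - ref_logp_l))
        # Margin of sequence-summed token-level forward KLs, which is the
        # extra term that distinguishes this sketch from plain DPO.
        delta = beta * (kl_l - kl_w)
        return -F.logsigmoid(u - delta).mean()

    # Illustrative usage with random tensors: batch of 2, 8 response
    # positions, vocabulary of 50.
    B, T, V = 2, 8, 50
    mask = torch.ones(B, T)
    kl_w = seq_forward_kl(torch.randn(B, T, V), torch.randn(B, T, V), mask)
    kl_l = seq_forward_kl(torch.randn(B, T, V), torch.randn(B, T, V), mask)
    loss = tdpo_style_loss(torch.randn(B), torch.randn(B),
                           torch.randn(B), torch.randn(B), kl_w, kl_l)

With beta * delta set to zero this reduces to the standard DPO loss; the per-token forward KL margin is what lets the method regulate divergence from the reference model at each generated position rather than only at the sequence level.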

Language: English
Content type: Conference paper
Source URL: http://ir.ia.ac.cn/handle/173211/57249
Collection: Laboratory of Cognition and Decision for Complex Systems, Group Decision Intelligence Team
Corresponding authors: Zhang, Haifeng; Wang, Jun
Author affiliations:
1. University College London
2. Microsoft Research AI4Science
3. Institute of Automation, Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Zeng, Yongcheng, Liu, Guoqing, Ma, Weiyu, et al. Token-level Direct Preference Optimization[C]. Vienna, Austria, 2024/7/21-27.