Knowledge Transfer from Situation Evaluation to Multi-agent Reinforcement Learning
Chen M (陈敏); Pu ZQ (蒲志强); Pan Y (潘一); Yi JQ (易建强)
2023-03
Conference Dates | November 22–26, 2022 |
Conference Location | New Delhi, India |
Keywords | Multi-agent reinforcement learning; Transfer learning |
Abstract | Recently, multi-agent reinforcement learning (MARL) has achieved impressive performance on complex tasks. However, it still suffers from sparse rewards and from the contradiction between consistent cognition and policy diversity. In this paper, we propose novel methods for transferring knowledge from a situation evaluation task to MARL tasks. Specifically, we utilize offline data from a single-agent scenario to train two situation evaluation models for: (1) constructing guiding dense rewards (GDR) in multi-agent scenarios, which help agents discover the real sparse rewards faster and escape locally optimal policies without changing the globally optimal policy; and (2) transferring a situation comprehension network (SCN) to multi-agent scenarios, which balances the contradiction between consistent cognition and policy diversity among agents. Our methods can be easily combined with existing MARL methods. Empirical results show that our methods achieve state-of-the-art performance on Google Research Football, which brings together the above challenges. |
Proceedings Publisher | Springer Nature Singapore |
Proceedings Place of Publication | Singapore |
Content Type | Conference paper |
Source URL | [http://ir.ia.ac.cn/handle/173211/52202] |
Collection | Laboratory of Cognition and Decision for Complex Systems (复杂系统认知与决策实验室) |
Author Affiliations | 1. Institute of Automation, Chinese Academy of Sciences, Beijing, China; 2. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China |
Recommended Citation (GB/T 7714) | Chen M, Pu ZQ, Pan Y, et al. Knowledge Transfer from Situation Evaluation to Multi-agent Reinforcement Learning[C]. New Delhi, India, November 22–26, 2022. |
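The GDR idea described in the abstract — dense guiding rewards that accelerate exploration without changing the globally optimal policy — has the same shape as potential-based reward shaping. The sketch below is an assumed illustration of that general technique, not the paper's actual method; `situation_value` is a hypothetical stand-in for a pretrained situation evaluation model.

```python
def situation_value(state):
    """Hypothetical stand-in for a learned situation evaluation model.

    Here it is a toy potential: states closer to the goal position 10
    evaluate higher (less negative).
    """
    return -abs(10 - state)

def shaped_reward(reward, state, next_state, gamma=0.99):
    """Dense guiding reward of the potential-based form:
    r'(s, s') = r + gamma * phi(s') - phi(s).
    Shaping of this form is known to preserve the optimal policy.
    """
    return reward + gamma * situation_value(next_state) - situation_value(state)

# With a sparse environment reward (zero until the goal is reached),
# the shaping term still gives a dense signal every step.
step_toward = shaped_reward(0.0, state=3, next_state=4, gamma=1.0)  # bonus > 0
step_away = shaped_reward(0.0, state=3, next_state=2, gamma=1.0)    # bonus < 0
print(step_toward, step_away)
```

Because the shaping term telescopes along any trajectory, the ranking of policies under the shaped return matches the ranking under the original sparse return, which is the property the abstract attributes to GDR.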