Traffic Signal Control Using Offline Reinforcement Learning
Dai, Xingyuan1,2; Zhao, Chen1,2; Li, Xiaoshuang1,2; Wang, Xiao1; Wang, Fei-Yue1
2021-10
Conference date: 2021-10
Conference venue: Beijing
Abstract

The problem of traffic signal control is essential but remains unsolved. Some researchers use online reinforcement learning, including off-policy methods, to derive an optimal control policy through interaction between agents and simulated environments. However, such policies are difficult to deploy in real transportation systems due to the gap between simulated and real traffic data. In this paper, we consider an offline reinforcement learning method to tackle the problem. First, we construct a realistic traffic environment and collect offline data with a classic actuated traffic signal controller. Then, we use an offline reinforcement learning algorithm, conservative Q-learning, to learn an efficient control policy from the offline dataset. We conduct experiments on a typical road intersection and compare the conservative Q-learning policy with the actuated policy and two data-driven policies based on off-policy reinforcement learning and imitation learning. Empirical results indicate that, in the offline-learning setting, the conservative Q-learning policy performs significantly better than the other baselines, including the actuated policy, whereas the other two data-driven policies perform poorly in the test scenarios.
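For readers unfamiliar with conservative Q-learning, the sketch below shows the kind of loss such a policy is trained with on a fixed offline dataset. It is a minimal PyTorch example assuming a discrete set of signal phases and a queue-length-based reward; the network names, batch layout, and reward choice are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def cql_loss(q_net, target_q_net, batch, alpha=1.0, gamma=0.99):
    """Conservative Q-learning loss for discrete signal-phase actions.

    batch: tensors sampled from the fixed offline dataset (hypothetical layout):
        states      [B, state_dim]  e.g. per-lane queue lengths and current phase
        actions     [B]             index of the signal phase actually applied
        rewards     [B]             e.g. negative total queue length
        next_states [B, state_dim]
        dones       [B]             1.0 if the episode ended at this step
    """
    q_values = q_net(batch["states"])                      # [B, n_phases]
    q_taken = q_values.gather(1, batch["actions"].long().unsqueeze(1)).squeeze(1)

    # Standard one-step TD target from the target network.
    with torch.no_grad():
        next_q = target_q_net(batch["next_states"]).max(dim=1).values
        td_target = batch["rewards"] + gamma * (1.0 - batch["dones"]) * next_q

    bellman_loss = F.mse_loss(q_taken, td_target)

    # CQL regularizer: push down Q-values over all phases (log-sum-exp)
    # while pushing up Q-values of the phases seen in the logged data.
    conservative_gap = torch.logsumexp(q_values, dim=1).mean() - q_taken.mean()

    return bellman_loss + alpha * conservative_gap
```

The log-sum-exp term is what makes the objective conservative: it penalizes Q-values on phases the logged controller never chose in a given state, so the learned policy stays close to actions supported by the offline data instead of exploiting overestimated values, which is the failure mode of naive off-policy learning in this setting.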

Language: English
Content type: Conference paper
Source URL: [http://ir.ia.ac.cn/handle/173211/49936]
Collection: Institute of Automation, State Key Laboratory for Management and Control of Complex Systems, Advanced Control and Automation Team
Affiliations:
1. The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Dai, Xingyuan, Zhao, Chen, Li, Xiaoshuang, et al. Traffic Signal Control Using Offline Reinforcement Learning[C]. Beijing, 2021-10.