Parallel reinforcement learning-based energy efficiency improvement for a cyber-physical system
Liu, Teng3,4; Tian, Bin2,3; Ai, Yunfeng1,3; Wang, Fei-Yue2
Journal: IEEE-CAA JOURNAL OF AUTOMATICA SINICA
Publication Date: 2020-03-01
Volume: 7, Issue: 2, Pages: 617-626
Keywords: Bidirectional long short-term memory (LSTM) network; cyber-physical system (CPS); energy management; parallel system; reinforcement learning (RL)
ISSN: 2329-9266
DOI: 10.1109/JAS.2020.1003072
Corresponding Author: Tian, Bin (bin.tian@ia.ac.cn)
Abstract: As a complex and critical cyber-physical system (CPS), the hybrid electric powertrain is significant for mitigating air pollution and improving fuel economy. The energy management strategy (EMS) plays a key role in improving the energy efficiency of this CPS. This paper presents a novel bidirectional long short-term memory (LSTM) network based parallel reinforcement learning (PRL) approach to construct the EMS for a hybrid tracked vehicle (HTV). The method contains two levels. The high level first establishes a parallel system, which includes a real powertrain system and an artificial system; the synthesized data from this parallel system are then used to train a bidirectional LSTM network. The low level determines the optimal EMS using the trained action-value function within a model-free reinforcement learning (RL) framework. PRL is a fully data-driven, learning-enabled approach that does not depend on any prediction or predefined rules. Finally, real vehicle testing is implemented, and the relevant experimental data are collected and calibrated. Experimental results validate that the proposed EMS achieves a considerable energy-efficiency improvement compared with the conventional RL approach and deep RL.
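For readers who want a concrete picture of the two-level scheme outlined in the abstract, the following Python (PyTorch) sketch shows a bidirectional LSTM used as an action-value approximator together with a model-free Q-learning update on batched transitions. It is only an illustrative sketch: the state dimension, action discretization, hyperparameters, and the random stand-in data are assumptions, and the paper's actual network, reward design, and parallel real/artificial data synthesis are not reproduced here.

```python
# Minimal sketch (illustrative assumptions only; not the authors' implementation).
import torch
import torch.nn as nn

class BiLSTMQNet(nn.Module):
    """Bidirectional LSTM mapping a short state sequence to Q-values per discrete action."""
    def __init__(self, state_dim=4, hidden_dim=32, n_actions=10):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, n_actions)

    def forward(self, state_seq):          # state_seq: (batch, seq_len, state_dim)
        out, _ = self.lstm(state_seq)      # (batch, seq_len, 2*hidden_dim)
        return self.head(out[:, -1, :])    # Q-values taken from the last time step

def q_update(q_net, optimizer, batch, gamma=0.95):
    """One model-free Q-learning step on a batch of (state, action, reward, next-state) transitions."""
    s, a, r, s_next = batch
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)          # Q(s, a)
    with torch.no_grad():
        target = r + gamma * q_net(s_next).max(dim=1).values   # bootstrapped target
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random stand-in data; a real pipeline would draw transitions
# from the parallel (real + artificial) powertrain system described in the abstract.
q_net = BiLSTMQNet()
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
s = torch.randn(16, 8, 4); a = torch.randint(0, 10, (16,))
r = torch.randn(16); s_next = torch.randn(16, 8, 4)
q_update(q_net, opt, (s, a, r, s_next))
```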
Funding Projects: National Natural Science Foundation of China [61533019]; National Natural Science Foundation of China [91720000]; Beijing Municipal Science and Technology Commission [Z181100008918007]; Intel Collaborative Research Institute for Intelligent and Automated Connected Vehicles (ICRI-IACV)
WOS Keywords: HYBRID ELECTRIC VEHICLES; REAL-TIME; MANAGEMENT; STRATEGY
WOS Research Area: Automation & Control Systems
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Accession Number: WOS:000519596200028
Funding Organizations: National Natural Science Foundation of China; Beijing Municipal Science and Technology Commission; Intel Collaborative Research Institute for Intelligent and Automated Connected Vehicles (ICRI-IACV)
Content Type: Journal Article
Source URL: http://ir.ia.ac.cn/handle/173211/38913
Collection: Institute of Automation, State Key Laboratory of Management and Control for Complex Systems, Advanced Control and Automation Team
Author Affiliations:
1.Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 101408, Peoples R China
2.Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
3.Vehicle Intelligence Pioneers Inc, Qingdao 266109, Peoples R China
4.Chongqing Univ, Dept Automot Engn, Chongqing 400044, Peoples R China
Recommended Citation:
GB/T 7714: Liu, Teng, Tian, Bin, Ai, Yunfeng, et al. Parallel reinforcement learning-based energy efficiency improvement for a cyber-physical system[J]. IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2020, 7(2): 617-626.
APA: Liu, Teng, Tian, Bin, Ai, Yunfeng, & Wang, Fei-Yue. (2020). Parallel reinforcement learning-based energy efficiency improvement for a cyber-physical system. IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 7(2), 617-626.
MLA: Liu, Teng, et al. "Parallel reinforcement learning-based energy efficiency improvement for a cyber-physical system". IEEE-CAA JOURNAL OF AUTOMATICA SINICA 7.2 (2020): 617-626.