Path Planning Method With Improved Artificial Potential Field-A Reinforcement Learning Perspective
Yao QF(么庆丰)1,2,3; Zheng ZY(郑泽宇)1,2,3; Qi, Liang6; Yuan HT(苑海涛)4,7; Guo XW(郭希旺)5; Zhao M(赵明)1,2,3; Liu Z(刘智)1,2,3; Yang TJ(杨天吉)2,3
Journal: IEEE ACCESS
Year: 2020
Volume: 8, Pages: 135513-135523
Keywords: Path planning; Learning (artificial intelligence); Gravity; Potential energy; Mobile agents; Real-time systems; Reinforcement learning; neural network; potential field; path planning
ISSN: 2169-3536
Ownership Ranking: 1
Abstract

The artificial potential field approach is an efficient path planning method. However, to handle the local-stable-point problem in complex environments, the potential field must be modified, which increases the complexity of the algorithm. This study combines an improved black-hole potential field with reinforcement learning to solve local-stable-point scenarios. The black-hole potential field serves as the environment of a reinforcement learning algorithm, in which agents automatically adapt to the environment and learn how to use basic environmental information to find the target. Moreover, trained agents adapt to varying environments through curriculum learning. A visualization of the avoidance process demonstrates how agents avoid obstacles and reach the target. The method is evaluated in static and dynamic experiments, and the results show that agents automatically learn to escape local stable points without prior knowledge.
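The record does not detail the paper's improved black-hole potential field or its coupling with the reinforcement learning agent. The following minimal Python sketch only illustrates the classical artificial potential field and the local-stable-point failure mode the abstract refers to; all gains, radii, and function names are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Classical artificial potential field (APF) sketch: the goal attracts,
# obstacles repel, and where the two cancel the agent stalls at a
# local stable point. Gains and radii below are assumed values.
K_ATT = 1.0    # attractive gain (assumed)
K_REP = 100.0  # repulsive gain (assumed)
D0 = 2.0       # obstacle influence radius (assumed)


def attractive_force(q, q_goal):
    """Negative gradient of U_att = 0.5 * K_ATT * ||q - q_goal||^2."""
    return -K_ATT * (q - q_goal)


def repulsive_force(q, obstacles):
    """Negative gradient of U_rep = 0.5 * K_REP * (1/d - 1/D0)^2 for d <= D0."""
    force = np.zeros_like(q)
    for q_obs in obstacles:
        diff = q - q_obs
        d = np.linalg.norm(diff)
        if 0.0 < d <= D0:
            force += K_REP * (1.0 / d - 1.0 / D0) * (1.0 / d ** 2) * (diff / d)
    return force


def apf_step(q, q_goal, obstacles, step_size=0.05):
    """One gradient-descent step on the total potential.

    Returns the new position and the net-force magnitude; a magnitude
    near zero means attraction and repulsion cancel, i.e. the agent is
    trapped at a local stable point short of the goal.
    """
    total = attractive_force(q, q_goal) + repulsive_force(q, obstacles)
    return q + step_size * total, np.linalg.norm(total)


# Example: with the start, an obstacle, and the goal collinear, the agent
# converges to a point in front of the obstacle instead of the goal.
if __name__ == "__main__":
    q = np.array([0.0, 0.0])
    goal = np.array([10.0, 0.0])
    obstacles = [np.array([5.0, 0.0])]
    for _ in range(500):
        q, force_norm = apf_step(q, goal, obstacles)
    print(q, force_norm)  # q stops well short of x = 10; force_norm is near zero
```

In the paper's setup, escaping such stall points is handled not by reshaping the field but by letting a reinforcement learning agent, trained in the black-hole potential field environment with curriculum learning, decide how to move out of them.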

Funding Projects: National Key Research and Development Program of China [2018YFF0214704]; Liaoning Revitalization Talents Program [XLYC1907166]; Liaoning Province Department of Education Foundation of China [L2019027]; Liaoning Province Dr. Research Foundation of China [20170520135]; National Natural Science Foundation of China [61903229, 61973180, 61802015]; Natural Science Foundation of Shandong Province [ZR2019BF004, ZR2019BF041]
WOS Keywords: MOBILE; OPTIMIZATION
WOS Research Areas: Computer Science; Engineering; Telecommunications
Language: English
WOS Record Number: WOS:000554892500001
Content Type: Journal Article
Source URL: http://ir.sia.cn/handle/173321/27477
Collection: Shenyang Institute of Automation, Department of Digital Factory
Corresponding Authors: Zheng ZY (郑泽宇); Qi, Liang; Guo XW (郭希旺)
Author Affiliations:
1. University of Chinese Academy of Sciences, Beijing 100049, China
2. Department of Digital Factory, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
3. Institutes for Robotics and Intelligent Manufacturing, Shenyang 110016, China
4. Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07029, USA
5. College of Computer and Communication Engineering, Liaoning Shihua University, Fushun, Liaoning 113001, China
6. College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
7. School of Software Engineering, Beijing Jiaotong University, Beijing, China
Recommended Citation
GB/T 7714: Yao QF, Zheng ZY, Qi Liang, et al. Path Planning Method With Improved Artificial Potential Field-A Reinforcement Learning Perspective[J]. IEEE ACCESS, 2020, 8: 135513-135523.
APA: Yao QF., Zheng ZY., Qi Liang., Yuan HT., Guo XW., ... & Yang TJ. (2020). Path Planning Method With Improved Artificial Potential Field-A Reinforcement Learning Perspective. IEEE ACCESS, 8, 135513-135523.
MLA: Yao QF, et al. "Path Planning Method With Improved Artificial Potential Field-A Reinforcement Learning Perspective". IEEE ACCESS 8 (2020): 135513-135523.