Institute of Computing Technology, Chinese Academy of Sciences IR
Learning controllable elements oriented representations for reinforcement learning
Yi, Qi1,2,3; Zhang, Rui2,3; Peng, Shaohui2,3,4; Guo, Jiaming2,3,4; Hu, Xing2,3; Du, Zidong2,3; Guo, Qi2; Chen, Ruizhi5; Li, Ling4,5; Chen, Yunji2,4
2023-09-07
Journal | NEUROCOMPUTING |
ISSN | 0925-2312 |
Volume | 549 |
Pages | 13 |
Abstract | Deep Reinforcement Learning (deep RL) has been successfully applied to solve various decision-making problems in recent years. However, the observations in many real-world tasks are often high dimensional and include much task-irrelevant information, limiting the applications of RL algorithms. To tackle this problem, we propose LCER, a representation learning method that aims to provide RL algorithms with compact and sufficient descriptions of the original observations. Specifically, LCER trains representations to retain the controllable elements of the environment, which can reflect the action-related environment dynamics and thus are likely to be task-relevant. We demonstrate the strength of LCER on the DMControl Suite, proving that it can achieve state-of-the-art performance. LCER enables the pixel-based SAC to outperform state-based SAC on the DMControl 100K benchmark, showing that the obtained representations can match the oracle descriptions (i.e., the physical states) of the environment. We also carry out experiments to show that LCER can efficiently filter out various distractions, especially when those distractions are not controllable. © 2023 Elsevier B.V. All rights reserved. |
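The abstract says LCER keeps the controllable elements of the observation, i.e., the parts whose dynamics depend on the agent's actions. The record does not give the paper's actual objective; the toy sketch below (plain NumPy, all names and the linear setup are illustrative assumptions, not LCER's formulation) shows the general idea behind controllability-based criteria: when the action is regressed from consecutive observations, the fit places weight only on the action-driven dimensions and essentially none on uncontrollable distractor dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000

# Toy environment: dims 0-1 of the observation are controllable
# (x_{t+1} = x_t + a_t), dims 2-3 are i.i.d. distractor noise.
a = rng.normal(size=(T, 2))
x = np.zeros((T + 1, 2))
for t in range(T):
    x[t + 1] = x[t] + a[t]
noise = rng.normal(size=(T + 1, 2))
obs = np.concatenate([x, noise], axis=1)           # observation = [controllable, distractor]

# Inverse-dynamics regression: predict a_t from (o_t, o_{t+1}).
phi = np.concatenate([obs[:-1], obs[1:]], axis=1)  # (T, 8): cols 0-3 = o_t, cols 4-7 = o_{t+1}
W, *_ = np.linalg.lstsq(phi, a, rcond=None)        # (8, 2) weight matrix

ctrl_w = np.abs(W[[0, 1, 4, 5]]).max()  # weight on controllable dims (~1: a_t = o_{t+1} - o_t there)
dist_w = np.abs(W[[2, 3, 6, 7]]).max()  # weight on distractor dims (~0: noise carries no action info)
print(ctrl_w, dist_w)
```

Because the distractor dimensions are statistically independent of the action, any action-prediction criterion assigns them no weight, which is one plausible mechanism behind the abstract's claim that uncontrollable distractions are filtered out.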
Keywords | Reinforcement learning; Representation learning |
DOI | 10.1016/j.neucom.2023.126455 |
Indexed by | SCI |
Language | English |
Funding | National Key Research and Development Program of China [2017YFA0700900]; NSF of China [61925208]; NSF of China [62102399]; NSF of China [62002338]; NSF of China [U19B2019]; NSF of China [61732020]; Beijing Academy of Artificial Intelligence (BAAI); CAS Project for Young Scientists in Basic Research [YSBR-029]; Youth Innovation Promotion Association CAS and Xplore Prize |
WOS Research Area | Computer Science |
WOS Subject Category | Computer Science, Artificial Intelligence |
WOS Accession Number | WOS:001035238900001 |
Publisher | ELSEVIER |
Document Type | Journal article |
Identifier | http://119.78.100.204/handle/2XEOYT63/21302 |
Collection | Journal Papers of the Institute of Computing Technology, CAS (English) |
Corresponding Author | Yi, Qi |
Affiliations | 1. Univ Sci & Technol China, Hefei, Peoples R China; 2. Chinese Acad Sci, Inst Comp Technol, SKL Processors, Beijing, Peoples R China; 3. Cambricon Technol, Beijing, Peoples R China; 4. Univ Chinese Acad Sci, Beijing, Peoples R China; 5. Chinese Acad Sci, Inst Software, Beijing, Peoples R China |
Recommended Citation (GB/T 7714) | Yi, Qi, Zhang, Rui, Peng, Shaohui, et al. Learning controllable elements oriented representations for reinforcement learning[J]. NEUROCOMPUTING, 2023, 549: 13. |
APA | Yi, Qi., Zhang, Rui., Peng, Shaohui., Guo, Jiaming., Hu, Xing., ... & Chen, Yunji. (2023). Learning controllable elements oriented representations for reinforcement learning. NEUROCOMPUTING, 549, 13. |
MLA | Yi, Qi, et al. "Learning controllable elements oriented representations for reinforcement learning". NEUROCOMPUTING 549 (2023): 13. |
Files in This Item | No files associated with this item. |
Unless otherwise specified, all content in this system is protected by copyright, with all rights reserved.