Article type: Research Article
Authors: Wei, Xiaolong | Huang, Xianglin | Yang, LiFang* | Cao, Gang | Tao, Zhulin | Wang, Bing | An, Jing
Affiliations: State Key Laboratory of Media Convergence and Communication, Communication University of China
Correspondence: [*] Corresponding author. LiFang Yang, State Key Laboratory of Media Convergence and Communication, Communication University of China. E-mail: yanglifang@cuc.edu.cn.
Abstract: Attention-based structural models not only record the positional relationships between features but also measure the importance of different features through their weights. By establishing dynamically weighted parameters that distinguish relevant from irrelevant features, key information can be strengthened and irrelevant information weakened, significantly improving the efficiency of deep learning algorithms. Although Transformers have performed very well in many fields, integrating them into Reinforcement Learning (RL) poses several challenges. This is especially true for Multi-Agent Reinforcement Learning (MARL), which can be viewed as a set of independent agents, each trying to adapt and learn in its own way to reach a goal. To emphasize the relationships between the MDP decisions within a given time period, we applied a hierarchical coding method and validated its effectiveness. This paper proposes a hierarchical Transformer MADDPG based on recurrent neural networks (RNNs), which we call Hierarchical RNNs-Based Transformers MADDPG (HRTMADDPG). It consists of a lower-level encoder based on RNNs that encodes multiple steps within each time sequence, and an upper-level sequence encoder based on the Transformer that learns the correlations between multiple sequences. In this way we can capture the causal relationships between sub-time sequences and make HRTMADDPG more efficient.
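The two-level encoding described in the abstract can be illustrated with a minimal NumPy sketch: a lower-level RNN summarizes each fixed-length sub-sequence of a trajectory into an embedding, and an upper-level self-attention pass relates those sub-sequence embeddings to one another. All function names, weight shapes, and the chunking scheme here are illustrative assumptions; the paper's actual HRTMADDPG uses full Transformer blocks inside the MADDPG training loop.

```python
import numpy as np

def rnn_encode(chunk, Wx, Wh):
    """Lower level: simple tanh-RNN over one sub-sequence; returns final hidden state."""
    h = np.zeros(Wh.shape[0])
    for x in chunk:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

def self_attention(H):
    """Upper level: single-head scaled dot-product self-attention over chunk embeddings."""
    d = H.shape[1]
    scores = H @ H.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row
    return weights @ H

def hierarchical_encode(trajectory, chunk_len, Wx, Wh):
    """Split a trajectory into sub-sequences, encode each with the RNN,
    then let attention capture relationships between the sub-sequences."""
    chunks = [trajectory[i:i + chunk_len]
              for i in range(0, len(trajectory), chunk_len)]
    H = np.stack([rnn_encode(c, Wx, Wh) for c in chunks])
    return self_attention(H)

rng = np.random.default_rng(0)
obs_dim, hid = 4, 8
Wx = rng.normal(scale=0.1, size=(hid, obs_dim))
Wh = rng.normal(scale=0.1, size=(hid, hid))
traj = rng.normal(size=(12, obs_dim))   # 12 steps -> 3 sub-sequences of length 4
Z = hierarchical_encode(traj, 4, Wx, Wh)
print(Z.shape)  # one context-aware embedding per sub-sequence: (3, 8)
```

In the paper's setting these context-aware sub-sequence embeddings would feed the MADDPG actor and critic networks, so each decision can attend to what happened in earlier sub-time sequences rather than only to the current step.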
Keywords: MADDPG, Attention, RNN
DOI: 10.3233/JIFS-212795
Journal: Journal of Intelligent & Fuzzy Systems, vol. 43, no. 1, pp. 1011-1022, 2022