Institute of Computing Technology, Chinese Academy of Sciences: Institutional Repository (IR)
Scheduling of Real-Time Wireless Flows: A Comparative Study of Centralized and Decentralized Reinforcement Learning Approaches
Wang, Qi (1,2); Huang, Jianhui (1,2); Xu, Yongjun (1,2)
2024-06-04
Journal | IEEE-ACM TRANSACTIONS ON NETWORKING
ISSN | 1063-6692
Pages | 16
Abstract | This paper addresses the problem of scheduling real-time wireless flows with general traffic patterns under dynamic network conditions. The main goal is to maximize the fraction of packets delivered within their deadlines, referred to as the timely-throughput. While scheduling algorithms for frame-based traffic models and greedy maximal scheduling methods such as LDF have been studied thoroughly, algorithms that provide deadline guarantees on packet delivery for general traffic under dynamic network conditions remain insufficient. To address this issue, we present a comparative study of two deep reinforcement learning-based scheduling algorithms, RL-Centralized and RL-Decentralized, designed to optimize timely-throughput for real-time wireless flows with general traffic patterns in dynamic wireless networks. The RL-Centralized algorithm formulates the centralized scheduling problem as a Markov Decision Process (MDP) and leverages a Multi-Environments Dueling Double Deep Q-Network (ME-D3QN) structure to adapt to dynamic network conditions. The RL-Decentralized algorithm formulates the scheduling problem as a Multi-Agent Markov Decision Process (MMDP) and employs the Node State Consensus Protocol (NSCP) together with a Lifelong Reinforcement Learning Decentralized Training and Decentralized Execution (LRL-DTDE) structure to accelerate training. Our experimental results indicate that both proposed algorithms converge quickly and adapt efficiently to dynamic network conditions, outperforming their baseline policies. Finally, test-bed experiments validate the simulation results and confirm that the proposed algorithms are practical on resource-limited platforms.
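The abstract names a Dueling Double Deep Q-Network (D3QN) as the core of the RL-Centralized scheduler. The sketch below is a minimal, illustrative D3QN in PyTorch, not the paper's ME-D3QN: the state layout (per-flow time-to-deadline and queue length), the action space (which flow to schedule in the current slot), the reward convention (1 if a packet meets its deadline), and all dimensions are assumptions made only for this example.

```python
# Minimal sketch of a Dueling Double DQN (D3QN) for deadline-aware scheduling.
# All names and dimensions below (state_dim, num_flows, reward convention) are
# illustrative assumptions, not the paper's ME-D3QN specification.
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    """Q-network with separate value and advantage streams (dueling architecture)."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)                 # V(s)
        self.advantage = nn.Linear(hidden, num_actions)   # A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps Q(s, a) = V(s) + A(s, a) identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)

def double_dqn_target(online: DuelingQNetwork, target: DuelingQNetwork,
                      reward, next_state, done, gamma: float = 0.99):
    """Double-DQN target: the online net selects the action, the target net evaluates it."""
    with torch.no_grad():
        best_action = online(next_state).argmax(dim=-1, keepdim=True)
        next_q = target(next_state).gather(-1, best_action).squeeze(-1)
        return reward + gamma * (1.0 - done) * next_q

if __name__ == "__main__":
    # Toy usage: 3 flows, state = (time-to-deadline, queue length) per flow.
    net, tgt = DuelingQNetwork(6, 3), DuelingQNetwork(6, 3)
    tgt.load_state_dict(net.state_dict())
    s = torch.randn(4, 6)                        # batch of 4 states
    r = torch.tensor([1.0, 0.0, 1.0, 0.0])       # 1 if a packet met its deadline
    d = torch.zeros(4)                           # episode-termination flags
    print(double_dqn_target(net, tgt, r, s, d))  # bootstrapped training targets
```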
Keywords | Scheduling; timely-throughput; deep reinforcement learning; real-time wireless networks; distributed system
DOI | 10.1109/TNET.2024.3405950
Indexed By | SCI
Language | English
WOS Research Areas | Computer Science; Engineering; Telecommunications
WOS Categories | Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Engineering, Electrical & Electronic; Telecommunications
WOS Record Number | WOS:001242925100001
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Document Type | Journal Article
Item Identifier | http://119.78.100.204/handle/2XEOYT63/40041
Collection | Institute of Computing Technology, CAS: Journal Papers (English)
Corresponding Author | Wang, Qi
Author Affiliations | 1. Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China; 2. Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100049, Peoples R China
Recommended Citation (GB/T 7714) | Wang, Qi, Huang, Jianhui, Xu, Yongjun. Scheduling of Real-Time Wireless Flows: A Comparative Study of Centralized and Decentralized Reinforcement Learning Approaches[J]. IEEE-ACM TRANSACTIONS ON NETWORKING, 2024: 16.
APA | Wang, Qi, Huang, Jianhui, & Xu, Yongjun. (2024). Scheduling of Real-Time Wireless Flows: A Comparative Study of Centralized and Decentralized Reinforcement Learning Approaches. IEEE-ACM TRANSACTIONS ON NETWORKING, 16.
MLA | Wang, Qi, et al. "Scheduling of Real-Time Wireless Flows: A Comparative Study of Centralized and Decentralized Reinforcement Learning Approaches". IEEE-ACM TRANSACTIONS ON NETWORKING (2024): 16.
Files in This Item | There are no files associated with this item.