Institute of Computing Technology, Chinese Academy of Sciences IR
MJOA-MU: End-to-edge collaborative computation for DNN inference based on model uploading
Yang, Huan1,2; Sun, Sheng1; Liu, Min1,2,3; Zhang, Qiuping1,2; Wang, Yuwei1
2023-07-01
Journal | COMPUTER NETWORKS
ISSN | 1389-1286 |
Volume | 231
Pages | 17
Abstract | As an emerging computing paradigm, edge computing can assist user equipments (UEs) in executing computation-intensive deep neural network (DNN) inference tasks, thereby satisfying stringent QoS requirements and relieving the burden on UEs. Due to the customizability of DNN models and the limited capacity of the edge server, it is more realistic to upload DNN models on demand during end-to-edge co-inference than to deploy all DNN models at the edge server in advance. Existing works adopt a serial model uploading manner, in which subsequent DNN layers are uploaded only after antecedent DNN layers finish execution, inevitably prolonging DNN execution latency. To this end, we design a parallel-efficient model uploading mechanism that allows subsequent DNN layers to be uploaded while antecedent DNN layers are executing, so as to mitigate the performance drop caused by model uploading. On this basis, we propose a Multi-UE Joint Optimization Algorithm based on Model Uploading (MJOA-MU) to optimize DNN partitioning and resource allocation for heterogeneous UEs. Specifically, MJOA-MU includes a Pruned Binary Tree based DNN Partitioning (PBT-DP) sub-algorithm, which efficiently makes near-optimal partitioning decisions for chain and non-chain models based on the long-term influence between DNN layers, and an Asynchronous Resource Allocation (ARA) sub-algorithm, which allocates computation and communication resources to UEs by quantifying inner- and inter-associations so as to match individual demands and resource budgets. Extensive simulation results demonstrate that MJOA-MU outperforms the state-of-the-art in terms of DNN execution latency, achieving up to a 64.5% reduction.
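The pipelining idea in the abstract can be illustrated with a toy latency model. The sketch below compares the serial manner (a layer's weights are uploaded only after the previous layer finishes) against the parallel-efficient manner (weights stream to the edge while earlier layers execute). This is a minimal sketch under assumed per-layer timings; the function names, recurrences, and all numbers are illustrative assumptions, not the paper's actual MJOA-MU formulation or the PBT-DP search.

```python
"""Toy end-to-edge co-inference latency model for a chain DNN.

Layers 0..k run on the UE; layers k+1.. run on the edge server.
All timings and recurrences are illustrative assumptions only.
"""

def serial_upload_latency(local_exec, edge_exec, upload, feat_tx, k):
    """Serial manner: layer j's weights are uploaded only after layer
    j-1 finishes, so uploading never overlaps with execution."""
    t = sum(local_exec[:k + 1]) + feat_tx        # UE layers + feature transfer
    for j in range(k + 1, len(edge_exec)):
        t += upload[j] + edge_exec[j]            # upload weights, then execute
    return t

def pipelined_upload_latency(local_exec, edge_exec, upload, feat_tx, k):
    """Parallel-efficient manner: weight uploads start immediately and
    proceed back to back; edge layer j starts once its weights have
    arrived AND layer j-1 is done."""
    ready = 0.0                                  # arrival time of layer j's weights
    done = sum(local_exec[:k + 1]) + feat_tx     # features reach the edge
    for j in range(k + 1, len(edge_exec)):
        ready += upload[j]
        done = max(done, ready) + edge_exec[j]
    return done

if __name__ == "__main__":
    local = [4.0, 3.0, 5.0, 6.0, 2.0]   # hypothetical per-layer UE times (ms)
    edge  = [1.0, 0.8, 1.2, 1.5, 0.5]   # hypothetical per-layer edge times (ms)
    up    = [2.0, 2.5, 3.0, 4.0, 1.0]   # hypothetical weight upload times (ms)
    for k in range(len(local) - 1):      # edge executes at least one layer
        s = serial_upload_latency(local, edge, up, 0.6, k)
        p = pipelined_upload_latency(local, edge, up, 0.6, k)
        print(f"partition after layer {k}: serial {s:.1f} ms, pipelined {p:.1f} ms")
```

Under these assumptions the pipelined latency is never worse than the serial one, since max(done, ready) only stalls when weight uploads are the bottleneck; sweeping k as above is the brute-force analogue of the partition-point search that PBT-DP prunes.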
Keywords | DNN inference; Model uploading; DNN partitioning; Resource allocation
DOI | 10.1016/j.comnet.2023.109801 |
Indexed By | SCI
Language | English
Funding Project | National Key Research and Development Program of China [2021YFB2900102]; National Natural Science Foundation of China [62072436]; National Natural Science Foundation of China [62202449]
WOS Research Area | Computer Science; Engineering; Telecommunications
WOS Subject | Computer Science, Hardware & Architecture; Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications
WOS Accession Number | WOS:001001503300001
Publisher | ELSEVIER
Document Type | Journal article
Identifier | http://119.78.100.204/handle/2XEOYT63/21476
Collection | Journal Papers of the Institute of Computing Technology, Chinese Academy of Sciences (English)
Corresponding Author | Liu, Min
Affiliations | 1. Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China; 2. Univ Chinese Acad Sci, Beijing, Peoples R China; 3. Zhongguancun Lab, Beijing, Peoples R China
Recommended Citation (GB/T 7714) | Yang, Huan, Sun, Sheng, Liu, Min, et al. MJOA-MU: End-to-edge collaborative computation for DNN inference based on model uploading[J]. COMPUTER NETWORKS, 2023, 231: 17.
APA | Yang, Huan, Sun, Sheng, Liu, Min, Zhang, Qiuping, & Wang, Yuwei. (2023). MJOA-MU: End-to-edge collaborative computation for DNN inference based on model uploading. COMPUTER NETWORKS, 231, 17.
MLA | Yang, Huan, et al. "MJOA-MU: End-to-edge collaborative computation for DNN inference based on model uploading". COMPUTER NETWORKS 231 (2023): 17.
Files in This Item | No files associated with this item.