Institute of Computing Technology, Chinese Academy of Sciences IR
| Title | DSparse: A Distributed Training Method for Edge Clusters Based on Sparse Update |
| Authors | Peng, Xiao-Hui (1,2); Sun, Yi-Xuan (2); Zhang, Zheng-Hui (1); Wang, Yi-Fan (1,2) |
| Date Issued | 2025-05-01 |
| Journal | JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY |
| ISSN | 1000-9000 |
| Volume | 40 |
| Issue | 3 |
| Pages | 637-653 |
| Abstract | Edge machine learning creates a new computational paradigm by enabling the deployment of intelligent applications at the network edge. It enhances application efficiency and responsiveness by performing inference and training tasks closer to data sources. In practice, however, it faces several challenges. Variance in hardware specifications and performance across devices is a major obstacle for training and inference tasks, and edge devices typically have limited network bandwidth and computing resources compared with data centers. Moreover, existing distributed training architectures often fail to account for the resource and communication-efficiency constraints of edge environments. In this paper, we propose DSparse, a distributed training method based on sparse update for edge clusters with heterogeneous memory capacities. It aims to maximize the utilization of memory resources across all devices in a cluster. To reduce memory consumption during training, we adopt sparse update to restrict updates to selected layers on each device, which not only lowers memory usage but also reduces the volume of parameter data and the time required for parameter aggregation. Furthermore, DSparse employs a parameter aggregation mechanism based on multi-process groups, subdividing aggregation tasks into AllReduce and Broadcast types and thereby further reducing the communication frequency of parameter aggregation. Experimental results with the MobileNetV2 model on the CIFAR-10 dataset show that DSparse reduces memory consumption by an average of 59.6% across seven devices and parameter aggregation time by 75.4%, while maintaining model precision. |
| Keywords | distributed training ; edge computing ; edge machine learning ; sparse update ; edge cluster |
| DOI | 10.1007/s11390-025-4821-5 |
| Indexed By | SCI |
| Language | English |
| Funding Project | National Natural Science Foundation of China[62072434] ; National Natural Science Foundation of China[U23B2004] ; Innovation Funding of Institute of Computing Technology, Chinese Academy of Sciences[E361050] ; Innovation Funding of Institute of Computing Technology, Chinese Academy of Sciences[E361030] |
| WOS Research Area | Computer Science |
| WOS Subject | Computer Science, Hardware & Architecture ; Computer Science, Software Engineering |
| WOS ID | WOS:001529691300017 |
| Publisher | SPRINGER SINGAPORE PTE LTD |
| Document Type | Journal Article |
| Identifier | http://119.78.100.204/handle/2XEOYT63/42069 |
| Collection | Institute of Computing Technology, Chinese Academy of Sciences: Journal Articles (English) |
| Corresponding Author | Wang, Yi-Fan |
| Affiliation | 1. Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China; 2. Univ Chinese Acad Sci, Beijing 101408, Peoples R China |
| Recommended Citation (GB/T 7714) | Peng, Xiao-Hui, Sun, Yi-Xuan, Zhang, Zheng-Hui, et al. DSparse: A Distributed Training Method for Edge Clusters Based on Sparse Update[J]. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2025, 40(3): 637-653. |
| APA | Peng, Xiao-Hui, Sun, Yi-Xuan, Zhang, Zheng-Hui, & Wang, Yi-Fan. (2025). DSparse: A Distributed Training Method for Edge Clusters Based on Sparse Update. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 40(3), 637-653. |
| MLA | Peng, Xiao-Hui, et al. "DSparse: A Distributed Training Method for Edge Clusters Based on Sparse Update". JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 40.3 (2025): 637-653. |
| Files in This Item | There are no files associated with this item. |
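
The abstract outlines two mechanisms: sparse update, which trains only a selected subset of layers on each device, and a multi-process-group aggregation scheme that splits synchronization into AllReduce and Broadcast tasks. The snippet below is a minimal PyTorch sketch of those two ideas, assuming a standard torch.distributed setup; the layer-selection policy, the group wiring, and all identifiers (`apply_sparse_update`, `shared_prefixes`, `owner_rank`) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the two mechanisms named in the abstract, assuming a
# torch.distributed setup. Names and selection policy are illustrative;
# this is not the authors' implementation.
import torch.distributed as dist
from torchvision.models import mobilenet_v2


def apply_sparse_update(model, updated_prefixes):
    """Freeze every parameter outside the selected layers.

    Frozen layers need no gradients and no optimizer state (the memory
    saving), and they drop out of per-round aggregation (the smaller
    parameter volume the abstract mentions).
    """
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(p) for p in updated_prefixes)


def aggregate(model, shared_prefixes, shared_group, owner_rank=0):
    """Split aggregation into AllReduce and Broadcast tasks.

    Layers updated on every device are gradient-averaged with AllReduce
    inside their process group; the remaining layers, updated only on a
    (hypothetical) owner rank, are broadcast from it. Branching on the
    prefix list keeps the collective calls identical on every rank.
    """
    n = dist.get_world_size(group=shared_group)
    for name, param in model.named_parameters():
        if any(name.startswith(p) for p in shared_prefixes):
            # Assumes backward() has produced gradients on every rank.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM, group=shared_group)
            param.grad /= n
        else:
            dist.broadcast(param.data, src=owner_rank)


# Example wiring (all assumptions): after dist.init_process_group(...),
# every device updates the tail of MobileNetV2, and rank 0 alone also
# trains an earlier block that it later broadcasts to the others.
#   model = mobilenet_v2(num_classes=10)
#   extra = ["features.17"] if dist.get_rank() == 0 else []
#   apply_sparse_update(model, ["features.18", "classifier"] + extra)
#   shared_group = dist.new_group(list(range(dist.get_world_size())))
#   ... loss.backward() ...
#   aggregate(model, ["features.18", "classifier"], shared_group)
#   optimizer.step()
# A real schedule would broadcast the owner's weights after its optimizer
# step, and less often than every round, matching the abstract's point
# about reducing communication frequency.
```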