CSpace

Browse/Search Results: 7 records in total, showing records 1-7

DeFT: Relaxing data dependencies for efficient communication scheduling in distributed training [Journal Article]
FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2026, Vol. 175, 15 pp.
Authors: Meng, Lin; Sun, Yuzhong; Zhu, Jie
Views/Downloads: 1/0 | Submitted: 2025/12/03
Keywords: Distributed deep learning; Communication scheduling; Data parallelism
Learning Critically: Selective Self-Distillation in Federated Learning on Non-IID Data [Journal Article]
IEEE TRANSACTIONS ON BIG DATA, 2024, Vol. 10, No. 6, pp. 789-800
Authors: He, Yuting; Chen, Yiqiang; Yang, XiaoDong; Yu, Hanchao; Huang, Yi-Hua; Gu, Yang
Views/Downloads: 30/0 | Submitted: 2024/12/06
Keywords: Data models; Training; Servers; Collaborative work; Adaptation models; Convergence; Feature extraction; Federated learning; knowledge distillation; non-identically distributed; deep learning; catastrophic forgetting
FastTuning: Enabling Fast and Efficient Hyper-Parameter Tuning With Partitioning and Parallelism of Search Space [Journal Article]
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2024, Vol. 35, No. 7, pp. 1174-1188
Authors: Li, Xiaqing; Guo, Qi; Zhang, Guangyan; Ye, Siwei; He, Guanhua; Yao, Yiheng; Zhang, Rui; Hao, Yifan; Du, Zidong; Zheng, Weimin
Views/Downloads: 57/0 | Submitted: 2024/12/06
Keywords: Deep learning; distributed hyper-parameter tuning (HPT) system; parallel computing
Scheduling of Real-Time Wireless Flows: A Comparative Study of Centralized and Decentralized Reinforcement Learning Approaches [Journal Article]
IEEE-ACM TRANSACTIONS ON NETWORKING, 2024, 16 pp.
Authors: Wang, Qi; Huang, Jianhui; Xu, Yongjun
Views/Downloads: 27/0 | Submitted: 2024/12/06
Keywords: Scheduling; timely-throughput; deep reinforcement learning; real-time wireless networks; distributed system
Sketch-fusion: A gradient compression method with multi-layer fusion for communication-efficient distributed training [Journal Article]
JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2024, Vol. 185, 10 pp.
Authors: Dai, Lingfei; Gong, Luqi; An, Zhulin; Xu, Yongjun; Diao, Boyu
Views/Downloads: 37/0 | Submitted: 2024/05/20
Keywords: Gradient compression; Multi-layer fusion; Distributed stochastic gradient descent; Deep learning training
Fast and accurate variable batch size convolution neural network training on large scale distributed systems [Journal Article]
CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2022, 26 pp.
Authors: Hu, Zhongzhe; Xiao, Junmin; Sun, Ninghui; Tan, Guangming
Views/Downloads: 80/0 | Submitted: 2022/12/07
Keywords: deep learning; distributed computing; ImageNet-1K; large-batch training; synchronous SGD
TransGPerf: Exploiting Transfer Learning for Modeling Distributed Graph Computation Performance [Journal Article]
JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2021, Vol. 36, No. 4, pp. 778-791
Authors: Niu, Songjie; Chen, Shimin
Views/Downloads: 62/0 | Submitted: 2021/12/01
Keywords: performance modeling; distributed graph computation; deep learning; transfer learning