moDNN: Memory Optimal Deep Neural Network Training on Graphics Processing Units
Chen, Xiaoming1; Chen, Danny Ziyi2; Han, Yinhe1; Hu, Xiaobo Sharon2
Publication Date: 2019-03-01
Journal: IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
ISSN: 1045-9219
Volume: 30, Issue: 3, Pages: 646-661
Abstract: Graphics processing units (GPUs) have been widely adopted to accelerate the training of deep neural networks (DNNs). Although the computational performance of GPUs keeps improving, the memory capacity of modern GPUs is still quite limited, which restricts the sizes of the DNNs that can be trained on them and hence poses serious challenges. This paper introduces moDNN (memory optimal DNN training on GPUs), a framework that optimizes memory usage in DNN training. moDNN automatically tunes DNN training code to match any given memory budget no smaller than the theoretical lower bound. By taking full advantage of overlapping computations and data transfers, we develop new heuristics that judiciously schedule data offloading and prefetching transfers, together with convolution algorithm selection, to minimize memory usage. We further devise a new sub-batch size selection method that also greatly reduces memory usage. moDNN reduces memory usage by up to 59x compared with an ideal baseline in which the GPU memory is assumed large enough to hold all data. When running on a GPU with 12 GB of memory, moDNN increases the training time by only 3 percent, a much smaller overhead than that incurred by vDNN, the best previously known approach. Furthermore, we propose an optimization strategy for moDNN on multiple GPUs, again by overlapping data transfers with GPU computations; the results show that a 3.7x speedup is attained on four GPUs.
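The mechanism underlying the offloading and prefetching described in the abstract is the overlap of host-device transfers with kernel execution on separate CUDA streams. The following minimal sketch (not from the paper; the dummy_layer kernel and buffer names are hypothetical placeholders for real layer computations) illustrates how an activation buffer produced by one layer can be offloaded to host memory while the next layer computes:

#include <cuda_runtime.h>
#include <cstdio>

// Stand-in for a layer's forward computation (hypothetical).
__global__ void dummy_layer(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;
}

int main() {
    const int n = 1 << 24;
    const size_t bytes = n * sizeof(float);

    // Two device buffers: layer 1 writes d_a, layer 2 works on d_b, so the
    // offload of d_a can proceed concurrently with layer 2's computation.
    float *d_a, *d_b, *h_a;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMallocHost(&h_a, bytes);  // pinned host memory enables async copies
    cudaMemset(d_a, 0, bytes);
    cudaMemset(d_b, 0, bytes);

    cudaStream_t compute, transfer;
    cudaStreamCreate(&compute);
    cudaStreamCreate(&transfer);

    cudaEvent_t a_ready;
    cudaEventCreate(&a_ready);

    // Layer 1 produces activations in d_a on the compute stream.
    dummy_layer<<<(n + 255) / 256, 256, 0, compute>>>(d_a, n);
    cudaEventRecord(a_ready, compute);

    // Offload d_a to host memory as soon as it is ready; the transfer stream
    // runs concurrently with whatever the compute stream does next.
    cudaStreamWaitEvent(transfer, a_ready, 0);
    cudaMemcpyAsync(h_a, d_a, bytes, cudaMemcpyDeviceToHost, transfer);

    // Layer 2 computes on d_b without waiting for the offload to finish.
    dummy_layer<<<(n + 255) / 256, 256, 0, compute>>>(d_b, n);

    cudaDeviceSynchronize();
    printf("offloaded h_a[0] = %f\n", h_a[0]);

    cudaEventDestroy(a_ready);
    cudaStreamDestroy(compute);
    cudaStreamDestroy(transfer);
    cudaFreeHost(h_a);
    cudaFree(d_b);
    cudaFree(d_a);
    return 0;
}

Prefetching works symmetrically: a host-to-device cudaMemcpyAsync is issued on the transfer stream ahead of the layer that will consume the data, so the copy completes behind ongoing computation. How moDNN actually schedules these transfers against its memory budget is the subject of the paper's heuristics.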
Keywords: Deep neural networks; graphics processing units; memory usage
DOI: 10.1109/TPDS.2018.2866582
Indexed By: SCI
Language: English
Funding Project: National Science Foundation (NSF) [CCF-1217906]; National Science Foundation (NSF) [CNS-1629914]; National Science Foundation (NSF) [CCF-1617735]; National Science Foundation (NSF) [CCF-1640081]; Nanoelectronics Research Corporation (NERC) of the Semiconductor Research Corporation (SRC), through Extremely Energy Efficient Collective Electronics (EXCEL), an SRC-NRI Nanoelectronics Research Initiative [2698.004]; Nanoelectronics Research Corporation (NERC) of the Semiconductor Research Corporation (SRC), through Extremely Energy Efficient Collective Electronics (EXCEL), an SRC-NRI Nanoelectronics Research Initiative [2698.005]
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS ID: WOS:000458820700012
Publisher: IEEE COMPUTER SOC
Citation Statistics: Cited 8 times [WOS]
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/3413
Collection: Journal Papers of the Institute of Computing Technology, Chinese Academy of Sciences (English)
Corresponding Author: Chen, Xiaoming
Affiliation:
1. Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100190, Peoples R China
2. Univ Notre Dame, Dept Comp Sci & Engn, Notre Dame, IN 46556 USA
Recommended Citation:
GB/T 7714: Chen, Xiaoming, Chen, Danny Ziyi, Han, Yinhe, et al. moDNN: Memory Optimal Deep Neural Network Training on Graphics Processing Units[J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2019, 30(3): 646-661.
APA: Chen, Xiaoming, Chen, Danny Ziyi, Han, Yinhe, & Hu, Xiaobo Sharon. (2019). moDNN: Memory Optimal Deep Neural Network Training on Graphics Processing Units. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 30(3), 646-661.
MLA: Chen, Xiaoming, et al. "moDNN: Memory Optimal Deep Neural Network Training on Graphics Processing Units". IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS 30.3 (2019): 646-661.
Files in This Item: There are no files associated with this item.