Evaluating and analyzing the energy efficiency of CNN inference on high-performance GPU
Yao, Chunrong1; Liu, Wantao2; Tang, Weiqing1,3; Guo, Jinrong2; Hu, Songlin2; Lu, Yijun4; Jiang, Wei5
2020-10-21
Journal: CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE
ISSN: 1532-0626
Pages: 26
Abstract: Convolutional neural network (CNN) inference usually runs on high-performance graphics processing units (GPUs). Because GPUs are high-power devices, deep learning tasks cause energy consumption to rise sharply. The energy efficiency of CNN inference depends not only on the software and hardware configuration but also on the application requirements of the inference tasks. However, these relationships are not yet well understood on GPUs. In this paper, we conduct a comprehensive study of the model-level and layer-level energy efficiency of popular CNN models. The results point out several opportunities for further optimization. We also analyze the parameter settings (i.e., batch size, dynamic voltage and frequency scaling) and propose a revenue model that allows an optimal trade-off between energy efficiency and latency. Compared with the default settings, the optimal settings improve revenue by up to 15.31x. We obtain the following main findings: (i) GPUs do not exploit the parallelism from model depth and small convolution kernels, resulting in low energy efficiency. (ii) Convolutional layers are the most energy-consuming CNN layers; however, due to the cache, the power consumption of all layers is relatively balanced. (iii) The energy efficiency of TensorRT is 1.53x that of TensorFlow.
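As a minimal, hypothetical sketch (not the authors' measurement harness), GPU power can be sampled with the pynvml bindings while an inference loop runs, and an energy-efficiency figure such as images per joule can be derived from the samples. The function run_inference_batch and all parameter values below are placeholders.

```python
# Sketch: sample GPU power via NVML while an inference loop runs, then
# report latency, throughput, energy, and energy efficiency (images/joule).
# Assumes the pynvml package; run_inference_batch is a placeholder.
import threading
import time

import pynvml


def sample_power(handle, samples, stop_event, interval_s=0.05):
    """Append (timestamp, watts) tuples until stop_event is set."""
    while not stop_event.is_set():
        mw = pynvml.nvmlDeviceGetPowerUsage(handle)  # reported in milliwatts
        samples.append((time.time(), mw / 1000.0))
        time.sleep(interval_s)


def run_inference_batch(batch_size):
    """Placeholder for a real CNN inference call (e.g., TensorRT or TensorFlow)."""
    time.sleep(0.01)  # stand-in for GPU work


def measure(batch_size=32, num_batches=100, gpu_index=0):
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)

    samples, stop_event = [], threading.Event()
    sampler = threading.Thread(target=sample_power, args=(handle, samples, stop_event))
    sampler.start()

    start = time.time()
    for _ in range(num_batches):
        run_inference_batch(batch_size)
    elapsed = time.time() - start

    stop_event.set()
    sampler.join()
    pynvml.nvmlShutdown()

    avg_watts = sum(w for _, w in samples) / max(len(samples), 1)
    energy_j = avg_watts * elapsed          # E ~= P_avg * t
    images = batch_size * num_batches
    return {
        "latency_per_batch_s": elapsed / num_batches,
        "throughput_img_per_s": images / elapsed,
        "energy_j": energy_j,
        "efficiency_img_per_j": images / energy_j if energy_j else float("inf"),
    }


if __name__ == "__main__":
    print(measure())
```

Sweeping batch_size (and, where supported, GPU clock settings via DVFS) over such measurements is one way to explore the latency versus energy-efficiency trade-off that the abstract describes.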
Keywords: CNNs; energy efficiency; high-performance GPU; inference
DOI: 10.1002/cpe.6064
Indexed by: SCI
Language: English
Funding Project: National Key Research and Development Program of China [2017YFB1010000]
WOS Research Area: Computer Science
WOS Categories: Computer Science, Software Engineering; Computer Science, Theory & Methods
WOS Record ID: WOS:000580529000001
Publisher: WILEY
Citation Statistics
Times Cited (WOS): 11
Document Type: Journal Article
Identifier: http://119.78.100.204/handle/2XEOYT63/15726
Collection: Institute of Computing Technology, Chinese Academy of Sciences - Journal Papers (English)
Corresponding Author: Liu, Wantao
Affiliations: 1. Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing, Peoples R China
2.Chinese Acad Sci, Inst Informat Engn, Beijing 100093, Peoples R China
3.Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China
4.Alibaba Cloud Comp Co Ltd, Hangzhou, Peoples R China
5.State Grid Corp China, Dept Energy Internet, Beijing, Peoples R China
Recommended Citation:
GB/T 7714: Yao, Chunrong, Liu, Wantao, Tang, Weiqing, et al. Evaluating and analyzing the energy efficiency of CNN inference on high-performance GPU[J]. CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2020: 26.
APA: Yao, Chunrong, Liu, Wantao, Tang, Weiqing, Guo, Jinrong, Hu, Songlin, ... & Jiang, Wei. (2020). Evaluating and analyzing the energy efficiency of CNN inference on high-performance GPU. CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 26.
MLA: Yao, Chunrong, et al. "Evaluating and analyzing the energy efficiency of CNN inference on high-performance GPU". CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE (2020): 26.
Files in This Item:
There are no files associated with this item.