Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks
Lu, Wenyan1,2; Yan, Guihai1,2; Li, Jiajun1,2; Gong, Shijun1,2; Jiang, Shuhao1,2; Wu, Jingya1,2; Li, Xiaowei1,2
2019-06-01
Journal: IEEE TRANSACTIONS ON COMPUTERS
ISSN: 0018-9340
Volume: 68, Issue: 6, Pages: 867-881
Abstract: There are two approaches to improving the performance of Convolutional Neural Networks (CNNs): 1) accelerating computation and 2) reducing the amount of computation. Acceleration approaches take advantage of the computing regularity of CNNs, which enables abundant fine-grained parallelism across feature maps, neurons, and synapses. Alternatively, reducing computation leverages the intrinsic sparsity of CNN neurons and synapses. This sparsity manifests as computing "bubbles", i.e., zero- or tiny-valued neurons and synapses, which can be removed to reduce the volume of computation. Although distinctly different in principle, we find that the two types of approaches are not orthogonal to each other. Even worse, they may conflict with each other when working together: the conditional branches that some bubble-removing mechanisms introduce into the original computations destroy the regularity of the deeply nested loops, thereby impairing the intrinsic parallelism. Enabling synergy between the two types of approaches is therefore critical to achieving superior performance. This paper proposes a relaxed synchronous computing architecture, FlexFlow-Pro, to fulfill this purpose. Compared with state-of-the-art accelerators, FlexFlow-Pro gains more than 2.5x performance on average and 2x energy efficiency.
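The tension the abstract describes can be illustrated with a minimal sketch (not the paper's implementation; names and data are hypothetical): a dense inner product has a fixed trip count and no data-dependent control flow, while a zero-skipping variant adds a per-element branch, the kind of irregularity that stalls rigidly synchronized parallel datapaths.

```python
def dense_dot(neurons, synapses):
    # Regular computation: fixed trip count, no data-dependent branches,
    # so every lane/PE performs identical work in lockstep.
    acc = 0.0
    for n, s in zip(neurons, synapses):
        acc += n * s
    return acc

def sparse_dot(neurons, synapses):
    # Bubble-removing variant: skips zero operands to save multiplies,
    # but the data-dependent branch makes per-element work irregular.
    acc = 0.0
    for n, s in zip(neurons, synapses):
        if n != 0.0 and s != 0.0:   # conditional "bubble" check
            acc += n * s
    return acc

# Hypothetical activations and weights with zero "bubbles".
neurons  = [0.0, 1.5, 0.0, 2.0]
synapses = [3.0, 2.0, 0.0, 0.5]
assert dense_dot(neurons, synapses) == sparse_dot(neurons, synapses)  # both 4.0
```

Both functions compute the same result; the paper's contribution lies in reclaiming the skipped work without letting the branch destroy lockstep parallelism.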
Keywords: Convolutional neural networks; accelerator architecture; parallelism; sparsity
DOI: 10.1109/TC.2018.2890258
Indexed by: SCI
Language: English
Funding: National Natural Science Foundation of China [61572470, 61532017, 61522406, 61872336, 61432017, 61376043, 61521092]; Youth Innovation Promotion Association, CAS [404441000]
WOS Research Areas: Computer Science; Engineering
WOS Categories: Computer Science, Hardware & Architecture; Engineering, Electrical & Electronic
WOS Accession Number: WOS:000467523100005
Publisher: IEEE COMPUTER SOC
Citation statistics: Times cited: 3 (WOS)
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/4256
Collection: Institute of Computing Technology, Chinese Academy of Sciences: Journal Papers (English)
Corresponding Authors: Yan, Guihai; Li, Xiaowei
Affiliations:
1. Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100864, Peoples R China
2. Univ Chinese Acad Sci, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714: Lu, Wenyan, Yan, Guihai, Li, Jiajun, et al. Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks[J]. IEEE TRANSACTIONS ON COMPUTERS, 2019, 68(6): 867-881.
APA: Lu, Wenyan, Yan, Guihai, Li, Jiajun, Gong, Shijun, Jiang, Shuhao, ... & Li, Xiaowei. (2019). Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks. IEEE TRANSACTIONS ON COMPUTERS, 68(6), 867-881.
MLA: Lu, Wenyan, et al. "Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks". IEEE TRANSACTIONS ON COMPUTERS 68.6 (2019): 867-881.
Files in This Item:
There are no files associated with this item.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.