Institute of Computing Technology, Chinese Academy of Sciences IR
Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks
Lu, Wenyan1,2; Yan, Guihai1,2; Li, Jiajun1,2; Gong, Shijun1,2; Jiang, Shuhao1,2; Wu, Jingya1,2; Li, Xiaowei1,2
2019-06-01
Journal | IEEE TRANSACTIONS ON COMPUTERS |
ISSN | 0018-9340 |
Volume | 68 |
Issue | 6 |
Pages | 867-881 |
Abstract | There are two approaches to improving the performance of Convolutional Neural Networks (CNNs): 1) accelerating computation and 2) reducing the amount of computation. The acceleration approaches take advantage of the computing regularity of CNNs, which enables abundant fine-grained parallelism across feature maps, neurons, and synapses. Alternatively, reducing computation leverages the intrinsic sparsity of CNN neurons and synapses. The sparsity manifests as computing "bubbles", i.e., zero- or tiny-valued neurons and synapses. These bubbles can be removed to reduce the volume of computation. Although distinctly different in principle, we find that the two types of approaches are not orthogonal to each other. Even worse, they may conflict with each other when working together. The conditional branches that some bubble-removing mechanisms introduce into the original computation destroy the regularity of the deeply nested loops, thereby impairing the intrinsic parallelism. Therefore, enabling synergy between the two types of approaches is critical to achieving superior performance. This paper proposes a relaxed synchronous computing architecture, FlexFlow-Pro, to fulfill this purpose. Compared with state-of-the-art accelerators, FlexFlow-Pro gains more than 2.5x performance on average and 2x energy efficiency. |
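The conflict the abstract describes can be illustrated with a toy sketch. This is a hypothetical example, not the paper's FlexFlow-Pro design: it contrasts a regular multiply-accumulate loop, whose fixed trip count and branch-free body are easy to unroll and parallelize, with a "bubble-removing" variant whose data-dependent branch skips zero operands at the cost of irregular per-iteration work.

```python
def dense_mac(neurons, synapses):
    # Regular loop: fixed trip count, no data-dependent branches,
    # so hardware can unroll and parallelize it freely.
    acc = 0.0
    for n, s in zip(neurons, synapses):
        acc += n * s
    return acc

def sparse_mac(neurons, synapses):
    # Bubble removal: the conditional skips zero-valued operands,
    # saving multiplies, but the data-dependent branch makes the
    # work per iteration irregular and impairs loop parallelism.
    acc = 0.0
    for n, s in zip(neurons, synapses):
        if n != 0.0 and s != 0.0:  # skip computing "bubbles"
            acc += n * s
    return acc
```

Both functions compute the same result; the difference is purely in control-flow regularity, which is exactly the property the acceleration approaches rely on.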
Keywords | Convolutional neural networks; accelerator architecture; parallelism; sparsity |
DOI | 10.1109/TC.2018.2890258 |
Indexed By | SCI |
Language | English |
Funding Project | National Natural Science Foundation of China [61572470, 61532017, 61522406, 61872336, 61432017, 61376043, 61521092]; Youth Innovation Promotion Association, CAS [404441000] |
WOS Research Area | Computer Science; Engineering |
WOS Subject | Computer Science, Hardware & Architecture; Engineering, Electrical & Electronic |
WOS ID | WOS:000467523100005 |
Publisher | IEEE COMPUTER SOC |
Citation Statistics | |
Document Type | Journal article |
Identifier | http://119.78.100.204/handle/2XEOYT63/4256 |
Collection | Journal Papers (English), Institute of Computing Technology, Chinese Academy of Sciences |
Corresponding Author | Yan, Guihai; Li, Xiaowei |
Affiliation | 1. State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100864, China; 2. University of Chinese Academy of Sciences, Beijing 100190, China |
Recommended Citation (GB/T 7714) | Lu, Wenyan, Yan, Guihai, Li, Jiajun, et al. Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks[J]. IEEE TRANSACTIONS ON COMPUTERS, 2019, 68(6): 867-881. |
APA | Lu, Wenyan, Yan, Guihai, Li, Jiajun, Gong, Shijun, Jiang, Shuhao, ... & Li, Xiaowei. (2019). Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks. IEEE TRANSACTIONS ON COMPUTERS, 68(6), 867-881. |
MLA | Lu, Wenyan, et al. "Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks". IEEE TRANSACTIONS ON COMPUTERS 68.6 (2019): 867-881. |
Files in This Item | There are no files associated with this item. |
Unless otherwise indicated, all content in this system is protected by copyright, with all rights reserved.