Accelerating Convolutional Neural Networks by Exploiting the Sparsity of Output Activation
Fan, Zhihua1,2; Li, Wenming1,2; Wang, Zhen1,2; Liu, Tianyu1,2; Wu, Haibin1,2; Liu, Yanhuan1,2; Wu, Meng1,2; Wu, Xinxin1; Ye, Xiaochun1; Fan, Dongrui1,2; Sun, Ninghui1,2; An, Xuejun1,2
2023-12-01
Journal: IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
ISSN: 1045-9219
Volume: 34; Issue: 12; Pages: 3253-3265
Abstract: Deep Convolutional Neural Networks (CNNs) are the most widely used family of machine learning methods and have had a transformative effect on a wide range of applications. Previous studies have made great breakthroughs in accelerating CNNs, but they target only the input sparsity of activations and weights, and therefore cannot eliminate the unnecessary computations whose zero-valued outputs are not directly caused by zero-valued positions in the input data. In this paper, we take advantage of the sparsity of output activations to reduce the execution time and energy consumption of CNNs. First, we propose an effective prediction method that leverages output activation sparsity: it predicts the polarity of the output activations of convolutional layers using a singular value decomposition (SVD) based approach and then uses the predicted negative values to skip invalid computations. Second, we design an accelerator that exploits this sparsity to accelerate CNN inference. Each PE is equipped with a prediction unit and a non-zero value detection unit to remove invalid computation blocks, and an instruction bypass technique is proposed that further exploits the sparsity of the weights. An efficient dataflow graph mapping approach and pipelined execution ensure high utilization of the computational resources. Experiments show that our approach achieves up to 1.63x speedup and 55.30% energy reduction compared with dense networks, with a slight loss of accuracy. Compared with Eyeriss, our accelerator achieves on average 1.31x performance improvement and 54% energy reduction. Our accelerator also achieves performance similar to SnaPEA, but with better energy efficiency.
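The abstract describes predicting the polarity of output activations with a low-rank (SVD-based) approximation of the weights and skipping the full computation wherever the prediction is negative, since ReLU would zero those outputs anyway. The following is a minimal sketch of that idea, not the authors' accelerator or implementation; it assumes a convolution lowered to a matrix product via im2col, a ReLU activation, and a hypothetical `rank` parameter that trades prediction cost against accuracy.

```python
# Minimal sketch of SVD-based output-polarity prediction for a conv layer
# lowered to a matrix product (assumptions: im2col lowering, ReLU activation).
import numpy as np

def build_predictor(W, rank):
    """Low-rank factors A, B with A @ B ~ W, used to cheaply estimate output sign."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * S[:rank], Vt[:rank, :]

def conv_with_polarity_skip(W, X, rank=4):
    """W: (out_channels, k*k*in_channels) weights; X: (k*k*in_channels, positions) patches."""
    A, B = build_predictor(W, rank)
    approx = A @ (B @ X)                     # cheap low-rank estimate of W @ X
    out = np.zeros((W.shape[0], X.shape[1]))
    pos = approx > 0                         # positions predicted to survive ReLU
    for j in range(X.shape[1]):
        rows = np.nonzero(pos[:, j])[0]
        if rows.size:
            # full multiply-accumulate only where the predictor expects a positive output
            out[rows, j] = W[rows] @ X[:, j]
    return np.maximum(out, 0)                # ReLU
```

In this sketch the skipped outputs are simply left at zero, which matches what ReLU would produce when the prediction is correct; mispredictions are the source of the slight accuracy loss mentioned in the abstract.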
Keywords: Accelerator; output activation prediction; sparse convolutional neural network
DOI: 10.1109/TPDS.2023.3324934
Indexed by: SCI
Language: English
Funding: National Key R&D Program of China [2022YFB4501404]; Beijing Nova Program [2022079]; CAS Project for Young Scientists in Basic Research [YSBR-029]; CAS Project for Youth Innovation Promotion Association and Open Research Projects of Zhejiang Lab [2022PB0AB01]
WOS Research Areas: Computer Science; Engineering
WOS Categories: Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS Accession Number: WOS:001097049800002
Publisher: IEEE COMPUTER SOC
Citation statistics: cited 1 time (WOS)
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/38094
Collection: Journal Papers of the Institute of Computing Technology, Chinese Academy of Sciences (English)
Corresponding Author: Li, Wenming
Affiliations:
1. Chinese Acad Sci, Inst Comp Technol, State Key Lab Processor, Beijing 100045, Peoples R China
2. Univ Chinese Acad Sci, Sch Comp & Control Engn, Beijing 101408, Peoples R China
Recommended Citation:
GB/T 7714
Fan, Zhihua, Li, Wenming, Wang, Zhen, et al. Accelerating Convolutional Neural Networks by Exploiting the Sparsity of Output Activation[J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2023, 34(12): 3253-3265.
APA: Fan, Zhihua, Li, Wenming, Wang, Zhen, Liu, Tianyu, Wu, Haibin, ... & An, Xuejun. (2023). Accelerating Convolutional Neural Networks by Exploiting the Sparsity of Output Activation. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 34(12), 3253-3265.
MLA: Fan, Zhihua, et al. "Accelerating Convolutional Neural Networks by Exploiting the Sparsity of Output Activation." IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS 34.12 (2023): 3253-3265.