Institute of Computing Technology, Chinese Academy of Sciences Institutional Repository (IR)
EasiEdge: A Novel Global Deep Neural Networks Pruning Method for Efficient Edge Computing
Yu, Fang [1]; Cui, Li [1,2]; Wang, Pengcheng [1]; Han, Chuanqi [1]; Huang, Ruoran [1]; Huang, Xi [1,2]
Publication Date | 2021-02-01
Journal | IEEE INTERNET OF THINGS JOURNAL
ISSN | 2327-4662 |
Volume | 8
Issue | 3
Pages | 1259-1271
Abstract | Deep neural networks (DNNs) have shown tremendous success in many areas, such as signal processing, computer vision, and artificial intelligence. However, DNNs require intensive computation resources, hindering their practical application on edge devices with limited storage and computation resources. Filter pruning has been recognized as a useful technique to compress and accelerate DNNs, but most existing works tend to prune filters in a layerwise manner, which faces significant drawbacks. First, layerwise pruning methods require prohibitive computation for per-layer sensitivity analysis. Second, layerwise pruning suffers from the accumulation of pruning errors, leading to performance degradation of the pruned networks. To address these challenges, we propose a novel global pruning method, namely EasiEdge, to compress and accelerate DNNs for efficient edge computing. More specifically, we introduce the alternating direction method of multipliers (ADMM) to formulate the pruning problem as a performance-improving subproblem and a global pruning subproblem. In the global pruning subproblem, we propose to use information gain (IG) to quantify the impact of filter removal on the class probability distributions of the network output. Besides, we propose a Taylor-based approximate algorithm (TBAA) to efficiently calculate the IG of filters. Extensive experiments on three data sets and two edge computing platforms verify that our proposed EasiEdge can efficiently accelerate DNNs on edge computing platforms with nearly negligible accuracy loss. For example, when EasiEdge prunes 80% of the filters in VGG-16, the accuracy drops by only 0.22%, but inference latency on the CPU of a Jetson TX2 decreases from 76.85 to 8.01 ms.
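The abstract describes ranking filters globally with a Taylor-based approximation instead of analyzing layer sensitivity one layer at a time. The paper's exact TBAA and information-gain criterion are not reproduced in this record, so the sketch below only illustrates the generic idea they build on: a first-order Taylor estimate of each filter's importance (|activation × gradient of the loss|, in the spirit of Molchanov et al.'s criterion) pooled into a single cross-layer ranking. All names here (SmallNet, taylor_filter_importance, prune_ratio) are hypothetical, not from the paper.

```python
# Minimal sketch of Taylor-based filter importance with GLOBAL ranking.
# This is NOT EasiEdge's TBAA/IG method, only the generic idea it refines.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Tiny stand-in network; any CNN with Conv2d layers works the same way."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32, 10)

    def forward(self, x):
        a1 = F.relu(self.conv1(x))
        a2 = F.relu(self.conv2(a1))
        return self.fc(F.adaptive_avg_pool2d(a2, 1).flatten(1))

def taylor_filter_importance(model, x, y):
    """First-order Taylor score per filter: |activation * d(loss)/d(activation)|,
    summed over the batch and spatial positions. One forward/backward pass
    scores every filter in every layer at once."""
    acts, handles = {}, []
    def make_hook(name):
        def hook(module, inp, out):
            out.retain_grad()        # keep gradients of non-leaf activations
            acts[name] = out
        return hook
    for name, m in model.named_modules():
        if isinstance(m, nn.Conv2d):
            handles.append(m.register_forward_hook(make_hook(name)))
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    for h in handles:
        h.remove()
    scores = {}
    for name, a in acts.items():
        # One importance value per output channel (filter) of this layer.
        scores[name] = (a * a.grad).abs().sum(dim=(0, 2, 3))
    return scores

# Global pruning: pool scores from ALL layers into one ranking, then take
# the globally least important filters -- no per-layer sensitivity analysis.
model = SmallNet()
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
scores = taylor_filter_importance(model, x, y)
ranked = sorted(
    (s.item(), layer, i) for layer, v in scores.items() for i, s in enumerate(v)
)
prune_ratio = 0.5  # hypothetical global pruning rate
to_prune = ranked[: int(len(ranked) * prune_ratio)]
print("Globally least-important filters:", to_prune[:5])
```

Pooling all filters into one ranking is what makes such a method global: a single threshold on the combined ranking decides how many filters each layer loses, rather than tuning a separate pruning rate per layer, which is the per-layer sensitivity analysis the abstract calls prohibitive.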
Keywords | Acceleration; Internet of Things; Computational modeling; Edge computing; Sensitivity analysis; Optimization; Convex functions; Deep neural networks (DNNs); edge computing; filter pruning; model compression and acceleration
DOI | 10.1109/JIOT.2020.3034925 |
Indexed By | SCI
Language | English
Funding Project | National Natural Science Foundation of China [61672498]; National Key Research and Development Program of China [2016YFC0302300]
WOS Research Area | Computer Science; Engineering; Telecommunications
WOS Subject | Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications
WOS Record Number | WOS:000612146000001
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Document Type | Journal article
Identifier | http://119.78.100.204/handle/2XEOYT63/16276
Collection | Institute of Computing Technology, CAS: Journal Papers (English)
Corresponding Author | Cui, Li
Affiliations | 1. Chinese Acad Sci, Inst Comp Technol, Wireless Sensor Network Lab, Beijing 100190, Peoples R China; 2. Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100190, Peoples R China
Recommended Citation (GB/T 7714) | Yu, Fang, Cui, Li, Wang, Pengcheng, et al. EasiEdge: A Novel Global Deep Neural Networks Pruning Method for Efficient Edge Computing[J]. IEEE INTERNET OF THINGS JOURNAL, 2021, 8(3): 1259-1271.
APA | Yu, Fang, Cui, Li, Wang, Pengcheng, Han, Chuanqi, Huang, Ruoran, & Huang, Xi. (2021). EasiEdge: A Novel Global Deep Neural Networks Pruning Method for Efficient Edge Computing. IEEE INTERNET OF THINGS JOURNAL, 8(3), 1259-1271.
MLA | Yu, Fang, et al. "EasiEdge: A Novel Global Deep Neural Networks Pruning Method for Efficient Edge Computing". IEEE INTERNET OF THINGS JOURNAL 8.3 (2021): 1259-1271.
Files in This Item | There are no files associated with this item.