EasiEdge: A Novel Global Deep Neural Networks Pruning Method for Efficient Edge Computing
Yu, Fang1; Cui, Li1,2; Wang, Pengcheng1; Han, Chuanqi1; Huang, Ruoran1; Huang, Xi1,2
2021-02-01
Journal: IEEE INTERNET OF THINGS JOURNAL
ISSN: 2327-4662
Volume: 8, Issue: 3, Pages: 1259-1271
Abstract: Deep neural networks (DNNs) have shown tremendous success in many areas, such as signal processing, computer vision, and artificial intelligence. However, DNNs require intensive computation resources, hindering their practical deployment on edge devices with limited storage and computation resources. Filter pruning has been recognized as a useful technique to compress and accelerate DNNs, but most existing works prune filters in a layerwise manner, which has significant drawbacks. First, layerwise pruning methods require prohibitive computation for per-layer sensitivity analysis. Second, layerwise pruning suffers from the accumulation of pruning errors, leading to performance degradation of the pruned networks. To address these challenges, we propose a novel global pruning method, namely, EasiEdge, to compress and accelerate DNNs for efficient edge computing. More specifically, we introduce the alternating direction method of multipliers (ADMM) to decompose the pruning problem into a performance-improving subproblem and a global pruning subproblem. In the global pruning subproblem, we propose to use information gain (IG) to quantify the impact of filter removal on the class probability distributions of the network output. Besides, we propose a Taylor-based approximate algorithm (TBAA) to efficiently calculate the IG of filters. Extensive experiments on three data sets and two edge computing platforms verify that our proposed EasiEdge can efficiently accelerate DNNs on edge computing platforms with nearly negligible accuracy loss. For example, when EasiEdge prunes 80% of the filters in VGG-16, the accuracy drops by only 0.22%, while inference latency on the CPU of a Jetson TX2 decreases from 76.85 to 8.01 ms.
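The abstract describes scoring each filter's importance with a Taylor-based approximation and then pruning globally across all layers rather than layer by layer. The sketch below is an illustrative stand-in, not the paper's exact TBAA/IG criterion: it uses the common first-order Taylor importance (|activation x gradient|, as in Molchanov et al.) and a global threshold over all layers' scores; the function names and shapes are assumptions for illustration.

```python
import numpy as np

def taylor_filter_importance(activations, gradients):
    """First-order Taylor importance per filter (illustrative sketch).

    The loss change from zeroing a filter is approximated by the absolute
    value of the summed elementwise product of its activations and the
    gradients of the loss w.r.t. those activations.

    activations, gradients: arrays of shape (batch, filters, H, W).
    Returns: one non-negative importance score per filter, shape (filters,).
    """
    contrib = activations * gradients
    # Sum over spatial positions, take |.|, then average over the batch.
    per_sample = np.abs(contrib.sum(axis=(2, 3)))
    return per_sample.mean(axis=0)

def global_prune_mask(importance_by_layer, prune_ratio):
    """Rank filters of ALL layers together and mask out the weakest fraction.

    This is the global (cross-layer) step: one threshold over the pooled
    scores, avoiding per-layer sensitivity analysis.
    Returns one boolean keep-mask per layer.
    """
    all_scores = np.concatenate(importance_by_layer)
    k = int(len(all_scores) * prune_ratio)
    threshold = np.partition(all_scores, k)[k] if k > 0 else -np.inf
    return [scores >= threshold for scores in importance_by_layer]
```

With a 50% prune ratio, roughly half of all filters (pooled across layers) fall below the shared threshold, so low-importance layers naturally lose more filters than high-importance ones.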
Keywords: Acceleration; Internet of Things; Computational modeling; Edge computing; Sensitivity analysis; Optimization; Convex functions; Deep neural networks (DNNs); filter pruning; model compression and acceleration
DOI: 10.1109/JIOT.2020.3034925
Indexed by: SCI
Language: English
Funding: National Natural Science Foundation of China [61672498]; National Key Research and Development Program of China [2016YFC0302300]
WOS Research Areas: Computer Science; Engineering; Telecommunications
WOS Categories: Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications
WOS Accession Number: WOS:000612146000001
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Citation Statistics: cited 23 times [WOS]
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/16276
Collection: Journal Papers (English), Institute of Computing Technology, Chinese Academy of Sciences
Corresponding Author: Cui, Li
Affiliations:
1. Chinese Acad Sci, Inst Comp Technol, Wireless Sensor Network Lab, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714
Yu, Fang, Cui, Li, Wang, Pengcheng, et al. EasiEdge: A Novel Global Deep Neural Networks Pruning Method for Efficient Edge Computing[J]. IEEE INTERNET OF THINGS JOURNAL, 2021, 8(3): 1259-1271.
APA: Yu, Fang, Cui, Li, Wang, Pengcheng, Han, Chuanqi, Huang, Ruoran, & Huang, Xi. (2021). EasiEdge: A Novel Global Deep Neural Networks Pruning Method for Efficient Edge Computing. IEEE INTERNET OF THINGS JOURNAL, 8(3), 1259-1271.
MLA: Yu, Fang, et al. "EasiEdge: A Novel Global Deep Neural Networks Pruning Method for Efficient Edge Computing". IEEE INTERNET OF THINGS JOURNAL 8.3 (2021): 1259-1271.
Files in This Item:
There are no files associated with this item.
Unless otherwise specified, all content in this system is protected by copyright, with all rights reserved.