Institute of Computing Technology, Chinese Academy of Sciences — Institutional Repository (IR)
BitXpro: Regularity-Aware Hardware Runtime Pruning for Deep Neural Networks
Li, Hongyan1; Lu, Hang2,3; Wang, Haoxuan1; Deng, Shengji4; Li, Xiaowei1
2023
Journal | IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS |
ISSN | 1063-8210 |
Volume | 31 |
Issue | 1 |
Pages | 90-103 |
Abstract | Classic deep neural network (DNN) pruning mostly leverages software-based methodologies to tackle the accuracy/speed tradeoff, which involves complicated procedures such as critical parameter searching, fine-tuning, and sparse training to find the best plan. In this article, we explore the opportunities of hardware runtime pruning and propose a regularity-aware hardware runtime pruning methodology, termed "BitXpro," to empower versatile DNN inference. The method targets the bit-level sparsity and the sparsity irregularity in the parameters, and pinpoints and prunes the useless bits on-the-fly in the proposed BitXpro accelerator. The versatility of BitXpro lies in: 1) software effortlessness; 2) orthogonality to software-based pruning; and 3) multiprecision support (including both floating point and fixed point). Empirical studies on various domain-specific artificial intelligence (AI) tasks highlight the following results: 1) up to 8.27x speedup over the original nonpruned DNN and 10.81x speedup in collaboration with the software-pruned DNN; 2) up to 0.3% and 0.04% higher accuracy for the floating- and fixed-point DNNs, respectively; and 3) 6.01x and 8.20x performance improvement over the state-of-the-art accelerators, with 0.068 mm2 area and 74.82 mW (32-bit floating point) and 40.44 mW (16-bit fixed point) power consumption under the TSMC 28-nm technology library. |
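The abstract's core idea, bit-level sparsity, can be illustrated with a small sketch. In a fixed-point multiplication, only the weight's nonzero ("essential") bits produce partial products, so hardware that detects and skips zero bits on-the-fly does proportionally less work. The code below is a hypothetical illustration of this general principle (similar in spirit to bit-serial accelerators), not the actual BitXpro datapath or pruning policy described in the paper:

```python
def essential_bits(w: int):
    """Decompose |w| into its set-bit positions (power-of-two terms).
    Zero bits contribute no partial product, so they can be skipped."""
    sign = -1 if w < 0 else 1
    w = abs(w)
    positions = []
    pos = 0
    while w:
        if w & 1:
            positions.append(pos)
        w >>= 1
        pos += 1
    return sign, positions

def bit_serial_mul(activation: int, weight: int) -> int:
    """Multiply by summing shifted copies of the activation,
    one shift-add per essential bit of the weight."""
    sign, positions = essential_bits(weight)
    return sign * sum(activation << p for p in positions)

# A weight like 0b01000100 (68) has only 2 essential bits out of 8,
# so 2 shift-adds replace 8 partial products.
assert bit_serial_mul(5, 68) == 5 * 68
```

The runtime-pruning angle in the paper goes further: beyond skipping bits that are exactly zero, the accelerator prunes bits judged useless to the final result, and handles the irregularity of where those bits fall across parameters.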
Keywords | Deep learning accelerator; deep neural network (DNN); hardware runtime pruning |
DOI | 10.1109/TVLSI.2022.3221732 |
Indexed By | SCI |
Language | English |
WOS Research Areas | Computer Science ; Engineering |
WOS Categories | Computer Science, Hardware & Architecture ; Engineering, Electrical & Electronic |
WOS Accession No. | WOS:000911286400009 |
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC |
Document Type | Journal article |
Identifier | http://119.78.100.204/handle/2XEOYT63/20054 |
Collection | Journal Papers, Institute of Computing Technology, Chinese Academy of Sciences |
Corresponding Author | Lu, Hang |
Affiliations | 1. Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100190, Peoples R China; 2. Chinese Acad Sci, Zhongguancun Lab, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100190, Peoples R China; 3. Chinese Acad Sci, Shanghai Innovat Ctr Processor Technol SHIC, Beijing 100190, Peoples R China; 4. Civil Aviat Adm China CAAC, Res Inst 2, Beijing 101318, Peoples R China |
Recommended Citation (GB/T 7714) | Li, Hongyan, Lu, Hang, Wang, Haoxuan, et al. BitXpro: Regularity-Aware Hardware Runtime Pruning for Deep Neural Networks[J]. IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2023, 31(1): 90-103. |
APA | Li, Hongyan, Lu, Hang, Wang, Haoxuan, Deng, Shengji, & Li, Xiaowei. (2023). BitXpro: Regularity-Aware Hardware Runtime Pruning for Deep Neural Networks. IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 31(1), 90-103. |
MLA | Li, Hongyan, et al. "BitXpro: Regularity-Aware Hardware Runtime Pruning for Deep Neural Networks." IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS 31.1 (2023): 90-103. |
Files in This Item | No files associated with this item. |