AutoQNN: An End-to-End Framework for Automatically Quantizing Neural Networks
Gong, Cheng1; Lu, Ye2,3; Dai, Su-Rong2; Deng, Qian2; Du, Cheng-Kun2; Li, Tao2,3
2024-03-01
Journal: JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY
ISSN: 1000-9000
Volume: 39, Issue: 2, Pages: 401-420
Abstract: Exploring a suitable quantizing scheme together with a mixed-precision policy is key to compressing deep neural networks (DNNs) with high efficiency and accuracy. This exploration imposes a heavy workload on domain experts, so an automatic compression method is needed; however, the huge search space of automatic methods incurs a large computing budget, which makes them challenging to apply in real scenarios. In this paper, we propose an end-to-end framework named AutoQNN for automatically quantizing different layers with different schemes and bitwidths, without any human labor. AutoQNN efficiently seeks desirable quantizing schemes and mixed-precision policies for mainstream DNN models through three techniques: quantizing scheme search (QSS), quantizing precision learning (QPL), and quantized architecture generation (QAG). QSS introduces five quantizing schemes, defines three new schemes to form a candidate set for scheme search, and then uses the differentiable neural architecture search (DNAS) algorithm to find the layer- or model-desired scheme from the set. To the best of our knowledge, QPL is the first method to learn mixed-precision policies by reparameterizing the bitwidths of quantizing schemes. It efficiently optimizes both the classification loss and the precision loss of DNNs, obtaining a relatively optimal mixed-precision model within a limited model size and memory footprint. QAG is designed to convert arbitrary architectures into corresponding quantized ones without manual intervention, enabling end-to-end neural network quantization. We have implemented AutoQNN and integrated it into Keras. Extensive experiments demonstrate that AutoQNN consistently outperforms state-of-the-art quantization methods. For 2-bit weights and activations of AlexNet and ResNet18, AutoQNN achieves accuracies of 59.75% and 68.86%, respectively, improving on state-of-the-art methods by up to 1.65% and 1.74%. Notably, compared with the full-precision AlexNet and ResNet18, the 2-bit models incur only slight accuracy degradations of 0.26% and 0.76%, respectively, which fulfills practical application demands.
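The abstract describes QPL as learning mixed-precision policies by reparameterizing bitwidths so that both a classification loss and a precision loss can be optimized. The paper's exact formulation is not reproduced on this record page; the following is a minimal NumPy sketch of one common way to realize such an idea, a DNAS-style softmax relaxation over a hypothetical candidate-bitwidth set, in which learnable logits mix the outputs of fixed-bitwidth quantizers (all function names here are illustrative, not the authors' API).

```python
import numpy as np

def uniform_quantize(w, bits):
    """Symmetric uniform quantizer to `bits` bits (hypothetical helper)."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

def soft_precision_forward(w, logits, candidate_bits=(2, 4, 8)):
    """DNAS-style relaxation: mix fixed-bitwidth quantizers with softmax
    weights over learnable `logits`. Gradients w.r.t. `logits` can then
    steer each layer toward a single preferred bitwidth during training."""
    p = np.exp(logits - np.max(logits))   # numerically stable softmax
    p = p / p.sum()
    w_q = sum(pi * uniform_quantize(w, b) for pi, b in zip(p, candidate_bits))
    # Expected bitwidth: a differentiable proxy for the precision loss term.
    expected_bits = float(np.dot(p, candidate_bits))
    return w_q, expected_bits
```

In a setting like the paper's, the expected-bitwidth term would be added to the classification loss so that training trades accuracy against model size; this relaxation is only a sketch of the general technique, not the authors' exact method.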
Keywords: automatic quantization; mixed precision; quantizing scheme search; quantizing precision learning; quantized architecture generation
DOI: 10.1007/s11390-022-1632-9
Indexed by: SCI
Language: English
Funding: China Postdoctoral Science Foundation [2022M721707]; National Natural Science Foundation of China [62002175]; National Natural Science Foundation of China [62272248]; Special Funding for Excellent Enterprise Technology Correspondent of Tianjin [21YDTPJC00380]; Open Project Foundation of Information Security Evaluation Center of Civil Aviation, Civil Aviation University of China [ISECCA-202102]
WOS Research Area: Computer Science
WOS Categories: Computer Science, Hardware & Architecture; Computer Science, Software Engineering
WOS Record: WOS:001244495800005
Publisher: SPRINGER SINGAPORE PTE LTD
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/39917
Collection: Journal Papers (English), Institute of Computing Technology, Chinese Academy of Sciences
Corresponding Author: Li, Tao
Affiliations:
1. Nankai Univ, Coll Software, Tianjin 300350, Peoples R China
2. Nankai Univ, Coll Comp Sci, Tianjin 300350, Peoples R China
3. Chinese Acad Sci, Inst Comp Technol, State Key Lab Processors, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714
Gong, Cheng, Lu, Ye, Dai, Su-Rong, et al. AutoQNN: An End-to-End Framework for Automatically Quantizing Neural Networks[J]. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2024, 39(2): 401-420.
APA: Gong, Cheng, Lu, Ye, Dai, Su-Rong, Deng, Qian, Du, Cheng-Kun, & Li, Tao. (2024). AutoQNN: An End-to-End Framework for Automatically Quantizing Neural Networks. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 39(2), 401-420.
MLA: Gong, Cheng, et al. "AutoQNN: An End-to-End Framework for Automatically Quantizing Neural Networks". JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 39.2 (2024): 401-420.
Files in This Item:
No related files.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.