An Automatic Neural Network Architecture-and-Quantization Joint Optimization Framework for Efficient Model Inference
Liu, Lian1,2; Wang, Ying1,2; Zhao, Xiandong3; Chen, Weiwei; Li, Huawei1,2; Li, Xiaowei1,2; Han, Yinhe1,2
2024-05-01
Journal: IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS
ISSN: 0278-0070
Volume: 43, Issue: 5, Pages: 1497-1510
Abstract: Efficient deep learning models, especially those optimized for edge devices, benefit from both low inference latency and efficient energy consumption. Two classical techniques for efficient model inference are lightweight neural architecture search (NAS), which automatically designs compact network models, and quantization, which reduces the bit precision of neural network models. Consequently, joint design of the neural architecture and the quantization precision settings is becoming increasingly popular. Three main aspects affect the performance of the joint optimization between neural architecture and quantization: 1) quantization precision selection (QPS); 2) quantization-aware training (QAT); and 3) NAS. However, existing works address at most two of these aspects, resulting in suboptimal performance. To this end, we propose a novel automatic optimization framework, DAQU, that jointly searches for Pareto-optimal combinations of neural architecture and quantization precision among more than $10^{47}$ quantized subnet models. To overcome the instability of the conventional automatic optimization framework, DAQU incorporates a warm-up strategy to reduce the accuracy gap among different neural architectures, and a precision-transfer training approach to maintain flexibility across different quantization precision settings. Our experiments show that the quantized lightweight neural networks generated by DAQU consistently outperform state-of-the-art NAS and quantization joint optimization methods.
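The quantization precision settings searched over in the abstract determine how coarsely each subnet's weights are rounded. As a generic illustration of that primitive (a textbook k-bit symmetric uniform "fake" quantization, not DAQU's actual algorithm; the function name and per-tensor scheme are assumptions for this sketch):

```python
def fake_quantize(weights, bits):
    """Quantize a list of floats to `bits` precision and dequantize back.

    Symmetric uniform quantization: values are mapped onto the integer
    grid [-(2**(bits-1) - 1), 2**(bits-1) - 1] and scaled back, so the
    forward pass "sees" the rounding error that low precision causes.
    """
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit
    max_abs = max(abs(w) for w in weights)
    if max_abs == 0.0:
        return list(weights)              # all-zero tensor: nothing to scale
    scale = max_abs / qmax                # one scale for the whole tensor
    return [round(w / scale) * scale for w in weights]

weights = [0.9, -0.45, 0.1, -0.02]
print(fake_quantize(weights, 8))   # near-lossless at 8 bits
print(fake_quantize(weights, 2))   # heavy distortion at 2 bits
```

In quantization-aware training, this rounding is inserted into the forward pass while gradients bypass the non-differentiable `round` (the straight-through estimator), which is what lets accuracy be recovered at low precision.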
Keywords: Optimization; Quantization (signal); Computer architecture; Training; Computational modeling; Integrated circuit modeling; Convergence; Automatic joint optimization; efficient model inference; network quantization; neural architecture search (NAS)
DOI: 10.1109/TCAD.2023.3339438
Indexed in: SCI
Language: English
Funding: National Natural Science Foundation of China (NSFC)
WOS Research Areas: Computer Science; Engineering
WOS Categories: Computer Science, Hardware & Architecture; Computer Science, Interdisciplinary Applications; Engineering, Electrical & Electronic
WOS Accession Number: WOS:001225897600014
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/40075
Collection: Institute of Computing Technology, CAS Journal Papers (English)
Corresponding Authors: Wang, Ying; Li, Huawei
Affiliations: 1. Chinese Acad Sci, Inst Comp Technol, State Key Lab Processors, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Dept Comp Sci, Beijing 100190, Peoples R China
3. Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
Recommended Citation
GB/T 7714
Liu, Lian, Wang, Ying, Zhao, Xiandong, et al. An Automatic Neural Network Architecture-and-Quantization Joint Optimization Framework for Efficient Model Inference[J]. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2024, 43(5): 1497-1510.
APA Liu, Lian, Wang, Ying, Zhao, Xiandong, Chen, Weiwei, Li, Huawei, ... & Han, Yinhe. (2024). An Automatic Neural Network Architecture-and-Quantization Joint Optimization Framework for Efficient Model Inference. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 43(5), 1497-1510.
MLA Liu, Lian, et al. "An Automatic Neural Network Architecture-and-Quantization Joint Optimization Framework for Efficient Model Inference". IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS 43.5 (2024): 1497-1510.
Files in This Item:
No files associated with this item.
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.