A Framework for Neural Network Architecture and Compile Co-optimization
Chen, Weiwei1,2; Wang, Ying3; Xu, Ying1,2; Gao, Chengsi1,2; Liu, Cheng1; Zhang, Lei1
2023
Journal: ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS
ISSN: 1539-9087
Volume: 22; Issue: 1; Pages: 24
Abstract: The efficiency of deep neural network (DNN) solutions on real hardware is determined mainly by the DNN architecture and the compiler-level scheduling strategy used on that hardware. In trying to fully exploit the underlying hardware and obtain the optimal trade-off between DNN accuracy and runtime performance, we find that the two optimization goals of DNN architecture and scheduling policy are intimately related. However, current hardware-aware Neural Architecture Search (NAS) methods focus primarily on the DNN architecture search process, ignoring the effects of different compiler-level scheduling strategies (e.g., graph-level optimization, loop transformations, and parallelization) on the network candidates evaluated during the search. As a result, they may overlook the truly optimal DNN implementations on hardware, which can only be discovered by trying out different combinations of scheduling strategies and DNN architectures. This work proposes a NAS framework (CHaNAS) that searches not only for the network architecture but also for the dedicated compiler-level scheduling policy, yielding the optimal co-design solution on the target hardware. We propose a block-based pre-scheduling methodology to reduce the co-design search space and enable automatic generation of the optimal co-design, including the network architecture and the tensor programs that implement the scheduling policy. Further, we introduce a new search objective function based on the generalization gap to avoid selecting architectures that are prone to overfitting. We evaluate CHaNAS on ImageNet on different hardware back-ends against the state-of-the-art hardware-aware search method based on the MobileNet-v3 search space.
Experimental results show that the co-design solutions obtained by CHaNAS deliver up to 1.6x, 1.9x, and 1.7x performance boosts on an NVIDIA P100 GPU, an Intel Xeon 8163 CPU, and a Samsung Note 10 mobile device, respectively, over baselines of the same accuracy level.
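The core idea in the abstract, jointly scoring (architecture, schedule) pairs under a hardware latency budget while penalizing the generalization gap, can be sketched as follows. This is a minimal illustrative toy, not CHaNAS's actual search space, cost model, or algorithm: all names, accuracy numbers, speedup factors, and the exhaustive-enumeration strategy are invented placeholders.

```python
import itertools

# Hypothetical per-block architecture choices and compiler-level schedules.
ARCHS = ["mbv3_small", "mbv3_medium", "mbv3_large"]
SCHEDULES = ["loop_tiling", "op_fusion", "parallel_unroll"]

def latency_ms(arch, schedule):
    """Toy latency model: larger nets are slower; each scheduling
    strategy yields a different (made-up) speedup factor."""
    base = {"mbv3_small": 8.0, "mbv3_medium": 14.0, "mbv3_large": 22.0}[arch]
    speedup = {"loop_tiling": 1.2, "op_fusion": 1.6, "parallel_unroll": 1.4}[schedule]
    return base / speedup

def accuracy(arch):
    """Placeholder (validation, training) accuracies: larger nets fit
    the training set better but widen the generalization gap."""
    return {"mbv3_small": (0.72, 0.74),
            "mbv3_medium": (0.75, 0.79),
            "mbv3_large": (0.76, 0.84)}[arch]

def objective(arch, schedule, latency_budget_ms=10.0, gap_weight=0.5):
    """Reject candidates over the latency budget; otherwise score by
    validation accuracy minus a generalization-gap penalty."""
    if latency_ms(arch, schedule) > latency_budget_ms:
        return float("-inf")
    val, train = accuracy(arch)
    return val - gap_weight * (train - val)

# Exhaustively score the tiny joint space (a real search would sample it).
best = max(itertools.product(ARCHS, SCHEDULES), key=lambda p: objective(*p))
print(best)  # → ('mbv3_medium', 'op_fusion')
```

Note how the same architecture can pass or fail the latency budget depending on the schedule paired with it, which is why searching the two spaces separately can miss the true optimum, and how the gap penalty steers the search away from the largest (most overfitting-prone) network even when its raw validation accuracy is highest.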
Keywords: DNN-scheduling co-design; hardware-aware neural architecture search; compiler optimization
DOI: 10.1145/3533251
Indexed by: SCI
Language: English
Funding: National Natural Science Foundation of China [61874124]; National Natural Science Foundation of China [61876173]; Strategic Priority Research Program of the Chinese Academy of Sciences [XDC05030201]; 2025 Key Technology Innovation Program of Ningbo City [2018B10035]
WOS Research Area: Computer Science
WOS Categories: Computer Science, Hardware & Architecture; Computer Science, Software Engineering
WOS ID: WOS:000908419900005
Publisher: ASSOC COMPUTING MACHINERY
Citation Statistics
Times Cited (WOS): 1
Document Type: Journal Article
Identifier: http://119.78.100.204/handle/2XEOYT63/20027
Collection: Journal Papers, Institute of Computing Technology, Chinese Academy of Sciences
Corresponding Author: Wang, Ying
作者单位1.Chinese Acad Sci, Inst Comp Technol, 6 Ke Xue Yuan South Rd, Beijing 100190, Peoples R China
2.Univ Chinese Acad Sci, Beijing 100190, Peoples R China
3.Chinese Acad Sci, Inst Comp Technol, Zhejiang Lab, 6 Ke Xue Yuan South Rd, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714
Chen, Weiwei, Wang, Ying, Xu, Ying, et al. A Framework for Neural Network Architecture and Compile Co-optimization[J]. ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2023, 22(1): 24.
APA Chen, Weiwei, Wang, Ying, Xu, Ying, Gao, Chengsi, Liu, Cheng, & Zhang, Lei. (2023). A Framework for Neural Network Architecture and Compile Co-optimization. ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 22(1), 24.
MLA Chen, Weiwei, et al. "A Framework for Neural Network Architecture and Compile Co-optimization". ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS 22.1 (2023): 24.
Files in This Item:
There are no files associated with this item.
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.