Institute of Computing Technology, Chinese Academy of Sciences IR
Title | A Framework for Neural Network Architecture and Compile Co-optimization
Authors | Chen, Weiwei [1,2]; Wang, Ying [3]; Xu, Ying [1,2]; Gao, Chengsi [1,2]; Liu, Cheng [1]; Zhang, Lei [1]
Year | 2023
Journal | ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS
ISSN | 1539-9087
Volume | 22
Issue | 1
Pages | 24
Abstract | The efficiency of a deep neural network (DNN) solution on real hardware devices is mainly decided by the DNN architecture and the compiler-level scheduling strategy on the hardware. When we try to fully exploit the underlying hardware and obtain the optimal trade-off between DNN accuracy and runtime performance, we discover that the two optimization goals of DNN architecture and scheduling policy are intimately related to each other. However, current hardware-aware Neural Architecture Search (NAS) methods primarily focus on the DNN architecture search process, ignoring the effects of various compiler-level scheduling strategies (e.g., graph-level optimization, loop transformations, parallelization, etc.) on the network candidates being evaluated during the search. As a result, they may overlook the true-optimal DNN implementations on hardware, which can only be discovered by trying out different combinations of scheduling strategies and DNN architectures. This work proposes a NAS framework (CHaNAS) that searches not only for the network architecture but also for the dedicated compiler-level scheduling policy, as the optimal co-design solution on the target hardware. We propose a block-based pre-scheduling methodology to reduce the co-design search space and enable the automatic generation of the optimal co-design, including the network architecture and the tensor programs that practice the scheduling policy. Further, we introduce a new search objective function based on the generalization gap to prevent the selection of architectures that are prone to overfitting. We evaluate CHaNAS on ImageNet on different hardware back-ends against the state-of-the-art hardware-aware search method based on the MobileNet-v3 search space. Experimental results show that the co-design solutions obtained by CHaNAS yield up to 1.6x, 1.9x, and 1.7x performance boosts on an NVIDIA P100 GPU, an Intel Xeon 8163 CPU, and a Samsung Note 10 mobile device, respectively, over baselines of the same-level accuracy.
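The abstract describes a joint search over DNN architectures and compiler-level scheduling policies, scored by an objective that accounts for the generalization gap and the target hardware's runtime behavior. The Python sketch below illustrates the general shape of such a co-design loop under stated assumptions; the candidate representation, the objective weighting (alpha, latency penalty), and the user-supplied evaluate function are illustrative placeholders, not the actual CHaNAS implementation.

import random
from dataclasses import dataclass

# Hypothetical stand-ins for the two halves of the co-design space:
# a network candidate drawn from a block-based search space, and a
# compiler-level scheduling policy (graph passes, loop tiling, parallelism).
@dataclass
class Candidate:
    arch: dict          # e.g., per-block kernel size / expansion ratio choices
    schedule: dict      # e.g., tiling factors and fusion decisions per block

def sample_candidate(arch_space, schedule_space):
    """Sample one (architecture, schedule) pair from the joint space."""
    arch = {k: random.choice(v) for k, v in arch_space.items()}
    schedule = {k: random.choice(v) for k, v in schedule_space.items()}
    return Candidate(arch, schedule)

def objective(train_acc, val_acc, latency_ms, latency_budget_ms, alpha=1.0):
    """Score a candidate: reward validation accuracy, penalize the
    generalization gap (train minus validation accuracy) and any
    latency-budget violation. The exact form used by CHaNAS may differ."""
    gen_gap = max(0.0, train_acc - val_acc)
    latency_penalty = max(0.0, latency_ms - latency_budget_ms)
    return val_acc - alpha * gen_gap - 0.01 * latency_penalty

def co_design_search(arch_space, schedule_space, evaluate, budget=100,
                     latency_budget_ms=30.0):
    """Joint architecture/schedule search: each candidate is measured on the
    target back-end together with its own scheduling policy, instead of
    fixing one half of the design and searching only the other."""
    best, best_score = None, float("-inf")
    for _ in range(budget):
        cand = sample_candidate(arch_space, schedule_space)
        train_acc, val_acc, latency_ms = evaluate(cand)  # user-supplied evaluator
        score = objective(train_acc, val_acc, latency_ms, latency_budget_ms)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

The point the sketch makes is that every architecture is evaluated under its own scheduling policy on the target back-end, which is what separates a co-design search from a conventional hardware-aware NAS that fixes the compiler configuration up front.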
Keywords | DNN-scheduling co-design; hardware-aware neural architecture search; compiler optimization
DOI | 10.1145/3533251
Indexed By | SCI
Language | English
Funding Project | National Natural Science Foundation of China [61874124]; National Natural Science Foundation of China [61876173]; Strategic Priority Research Program of Chinese Academy of Sciences [XDC05030201]; 2025 Key Technology Innovation Program of Ningbo City [2018B10035]
WOS Research Area | Computer Science
WOS Subject | Computer Science, Hardware & Architecture; Computer Science, Software Engineering
WOS Record Number | WOS:000908419900005
Publisher | ASSOC COMPUTING MACHINERY
Document Type | Journal article
Identifier | http://119.78.100.204/handle/2XEOYT63/20027
Collection | Institute of Computing Technology, Chinese Academy of Sciences: Journal Papers
Corresponding Author | Wang, Ying
Affiliations | 1. Chinese Acad Sci, Inst Comp Technol, 6 Ke Xue Yuan South Rd, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Beijing 100190, Peoples R China
3. Chinese Acad Sci, Inst Comp Technol, Zhejiang Lab, 6 Ke Xue Yuan South Rd, Beijing 100190, Peoples R China
Recommended Citation (GB/T 7714) | Chen, Weiwei, Wang, Ying, Xu, Ying, et al. A Framework for Neural Network Architecture and Compile Co-optimization[J]. ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2023, 22(1): 24.
APA | Chen, Weiwei, Wang, Ying, Xu, Ying, Gao, Chengsi, Liu, Cheng, & Zhang, Lei. (2023). A Framework for Neural Network Architecture and Compile Co-optimization. ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 22(1), 24.
MLA | Chen, Weiwei, et al. "A Framework for Neural Network Architecture and Compile Co-optimization". ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS 22.1 (2023): 24.
Files in This Item | No files associated with this item.