CUTE: A scalable CPU-centric and Ultra-utilized Tensor Engine for convolutions
Li, Wenqing1,2; Ye, Jinpeng1,2; Zhang, Fuxin1,2; Liu, Tianyi3; Zhang, Tingting1,4; Wang, Jian1,2
2024-04-01
Journal: JOURNAL OF SYSTEMS ARCHITECTURE
ISSN: 1383-7621
Volume: 149, Pages: 15
Abstract: Convolution is a fundamental and computationally expensive primitive that is ubiquitous in deep neural networks (DNNs). The evolution of DNNs has spurred the emergence of numerous accelerators, which successfully achieve high throughput. However, for DNN inference with small batch sizes, the computational resources of accelerators are often under-utilized, and the overhead of offloading is significant. Compared to accelerators, the CPU can better meet the fast response requirements of inference, can flexibly handle various models, and is suitable for a wide range of scenarios (from edge to data center). Therefore, the CPU remains an attractive platform for DNN inference, despite its sub-optimal performance and resource efficiency. In this paper, we propose CUTE, a scalable CPU-centric and ultra-utilized tensor engine for convolutions. It co-designs the data flow and hardware architecture to exploit the data reuse and parallelism of convolutions. CUTE is composed of several small tensor elements (TEs) and two-level buffers. It employs a decoupled access-execution architecture and a greedy strategy to feed data to the TEs, enabling it to achieve ultra-high utilization and great scalability. CUTE is tightly coupled with the CPU to minimize offloading latency, thereby providing efficient convolution computing capabilities for the system. Experimental results show that, under the same bandwidth, CUTE achieves an average performance improvement of 3.8x over the CPU AVX512 unit and 1.6x over the CPU AMX unit. Moreover, CUTE achieves speedups of 7.0x and 3.9x over the Nvidia V100 GPU and the Eyeriss accelerator, respectively, due to higher utilization of its computing units.
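The abstract sketches how CUTE partitions convolution work across several tensor elements (TEs) and uses a greedy strategy to keep them fed. As a rough illustration only (not taken from the paper), the following Python sketch greedily dispatches convolution output tiles to whichever TE becomes free first; the tile sizes, TE count, and uniform per-tile cost are hypothetical assumptions.

from collections import namedtuple

# One output tile, identified by its starting output-channel / row / column block.
Tile = namedtuple("Tile", ["oc", "oh", "ow"])

def make_tiles(out_c, out_h, out_w, tc=16, th=4, tw=4):
    """Partition the convolution output into independent tiles (sizes are illustrative)."""
    return [Tile(c, h, w)
            for c in range(0, out_c, tc)
            for h in range(0, out_h, th)
            for w in range(0, out_w, tw)]

def greedy_dispatch(tiles, num_tes=8):
    """Assign each tile to the TE that becomes free earliest, mimicking a
    front end that keeps feeding whichever TE is idle (hypothetical cost model)."""
    te_busy_until = [0] * num_tes      # per-TE timeline, in tile-latency units
    cost = 1                           # assume uniform latency per tile
    for _ in tiles:
        te = min(range(num_tes), key=lambda i: te_busy_until[i])
        te_busy_until[te] += cost
    makespan = max(te_busy_until)
    utilization = len(tiles) * cost / (makespan * num_tes)
    return makespan, utilization

if __name__ == "__main__":
    tiles = make_tiles(out_c=64, out_h=56, out_w=56)   # e.g. one ResNet-like layer shape
    makespan, util = greedy_dispatch(tiles, num_tes=8)
    print(f"tiles={len(tiles)} makespan={makespan} utilization={util:.2%}")

With every TE finishing a tile in the same time, this greedy policy degenerates to round-robin and utilization stays near 100%; the harder case, which motivates the decoupled access-execution front end described in the abstract, is when data delivery rather than compute limits TE utilization.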
Keywords: Tensor engine; Convolution; Scalable architecture; CPU-centric; Utilization
DOI: 10.1016/j.sysarc.2024.103106
Indexed by: SCI
Language: English
Funding project: Strategic Priority Research Program of the Chinese Academy of Sciences [XDC05020100]
WOS research area: Computer Science
WOS categories: Computer Science, Hardware & Architecture; Computer Science, Software Engineering
WOS accession number: WOS:001207560600001
Publisher: ELSEVIER
Document type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/38703
Collection: Journal Papers (English), Institute of Computing Technology, Chinese Academy of Sciences
Corresponding author: Wang, Jian
Affiliations:
1.Chinese Acad Sci, Inst Comp Technol, State Key Lab Processors, Beijing, Peoples R China
2.Univ Chinese Acad Sci, Beijing, Peoples R China
3.Univ Texas San Antonio, San Antonio, TX USA
4.Loongson Technol Corp Ltd, Beijing, Peoples R China
Recommended citation:
GB/T 7714
Li, Wenqing, Ye, Jinpeng, Zhang, Fuxin, et al. CUTE: A scalable CPU-centric and Ultra-utilized Tensor Engine for convolutions[J]. JOURNAL OF SYSTEMS ARCHITECTURE, 2024, 149: 15.
APA: Li, Wenqing, Ye, Jinpeng, Zhang, Fuxin, Liu, Tianyi, Zhang, Tingting, & Wang, Jian. (2024). CUTE: A scalable CPU-centric and Ultra-utilized Tensor Engine for convolutions. JOURNAL OF SYSTEMS ARCHITECTURE, 149, 15.
MLA: Li, Wenqing, et al. "CUTE: A scalable CPU-centric and Ultra-utilized Tensor Engine for convolutions". JOURNAL OF SYSTEMS ARCHITECTURE 149 (2024): 15.
Files in this item:
No files are associated with this item.