Cambricon-G: A Polyvalent Energy-Efficient Accelerator for Dynamic Graph Neural Networks
Authors: Song, Xinkai (1,2,3); Zhi, Tian (1,3); Fan, Zhe (1,2,3); Zhang, Zhenxing (1,2,3); Zeng, Xi (1,3); Li, Wei (1,3); Hu, Xing (1); Du, Zidong (1,3); Guo, Qi (1); Chen, Yunji (1,2,4)
Year: 2022
Journal: IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS
ISSN: 0278-0070
Volume: 41, Issue: 1, Pages: 116-128
Abstract: Graph neural networks (GNNs), which extend traditional neural networks to process graph-structured data, have been widely used in many fields. The GNN computation mainly consists of edge processing, which generates messages by combining edge/vertex features, and vertex processing, which updates vertex features with the aggregated messages. In addition to the nontrivial vector operations in edge processing and the massive random accesses and neural network operations in vertex processing, the graph topology of GNNs may also vary during computation (i.e., dynamic GNNs). These characteristics pose significant challenges to existing architectures. In this article, we propose a novel accelerator named CAMBRICON-G for efficient processing of both dynamic and static GNNs. The key idea of CAMBRICON-G is to abstract the irregular computation of a broad range of GNN variants into the processing of a regularly tiled adjacent cuboid (which extends the traditional adjacency matrix of a graph with the dimension of vertex features). The intuition is that the adjacent cuboid facilitates the exploitation of both data locality and parallelism by offering multidimensional, multilevel tiling (including spatial and temporal tiling) opportunities. To perform the multidimensional spatial tiling, the CAMBRICON-G architecture mainly consists of the cuboid engine (CE) and a hybrid on-chip memory. The CE has multiple vertex processing units (VPUs) working in a coordinated manner to efficiently process sparse data and dynamically update the graph topology with dedicated instructions. The hybrid on-chip memory contains a topology-aware cache and multiple scratchpad memories to reduce off-chip memory accesses. To perform the multidimensional temporal tiling, an easy-to-use programming model is provided to flexibly explore different tiling options for large graphs. Experimental results show that, compared against an Nvidia P100 GPU, performance and energy efficiency are improved by 7.14x and 20.18x, respectively, on various GNNs, which validates both the versatility and the energy efficiency of CAMBRICON-G.
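To make the edge-processing/vertex-processing split described in the abstract concrete, below is a minimal NumPy sketch of one message-passing GNN layer: each edge combines its source-vertex and edge features into a message, messages are aggregated per destination vertex, and a small dense layer updates the vertex features. This is an illustrative sketch only, not the Cambricon-G implementation; the function name gnn_layer, the sum-based message combination, and the single dense-layer ReLU update are assumptions for demonstration.

import numpy as np

# Minimal sketch of one message-passing GNN layer (illustrative only,
# not the Cambricon-G implementation).
def gnn_layer(src, dst, vert_feat, edge_feat, weight):
    # Edge processing: combine source-vertex and edge features into a
    # per-edge message (sum combination assumed here).
    messages = vert_feat[src] + edge_feat              # (E, F)

    # Vertex processing, step 1: aggregate messages per destination
    # vertex; these scattered writes are the irregular random accesses
    # the abstract refers to.
    agg = np.zeros_like(vert_feat)                     # (V, F)
    np.add.at(agg, dst, messages)

    # Vertex processing, step 2: update vertex features with a small
    # neural-network operation (one dense layer + ReLU, assumed).
    return np.maximum(agg @ weight, 0.0)               # (V, F_out)

# Toy graph: 4 vertices, 5 directed edges, 8-dimensional features.
rng = np.random.default_rng(0)
src = np.array([0, 1, 2, 3, 0])
dst = np.array([1, 2, 3, 0, 2])
out = gnn_layer(src, dst,
                rng.standard_normal((4, 8)),   # vertex features
                rng.standard_normal((5, 8)),   # edge features
                rng.standard_normal((8, 8)))   # update weights
print(out.shape)                               # (4, 8)

In this picture, the adjacent cuboid of the abstract corresponds to the (destination vertex, source vertex, feature) index space traversed by such a layer; spatial and temporal tiling then amounts to processing blocks of src/dst indices and feature columns at a time so that the scattered accesses stay within on-chip memory.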
Keywords: Accelerator architecture; graph neural networks (GNNs)
DOI: 10.1109/TCAD.2021.3052138
Indexed by: SCI
Language: English
Funding Projects: National Key Research and Development Program of China[2017YFA0700900] ; National Key Research and Development Program of China[2017YFA0700902] ; National Key Research and Development Program of China[2017YFA0700901] ; NSF of China[61925208] ; NSF of China[61732007] ; NSF of China[61732002] ; NSF of China[61702478] ; NSF of China[61906179] ; NSF of China[62002338] ; NSF of China[61702459] ; NSF of China[U19B2019] ; NSF of China[U20A20227] ; Beijing Natural Science Foundation[JQ18013] ; Key Research Projects in Frontier Science of Chinese Academy of Sciences[QYZDB-SSW-JSC001] ; Strategic Priority Research Program of Chinese Academy of Science[XDB32050200] ; Strategic Priority Research Program of Chinese Academy of Science[XDC05010300] ; Strategic Priority Research Program of Chinese Academy of Science[XDC08040102] ; Beijing Academy of Artificial Intelligence (BAAI)[Z191100001119093] ; Beijing Nova Program of Science and Technology[Z191100001119093] ; Guangdong Science and Technology Program[2019B090909005] ; Youth Innovation Promotion Association CAS ; Xplore Prize
WOS Research Areas: Computer Science ; Engineering
WOS Categories: Computer Science, Hardware & Architecture ; Computer Science, Interdisciplinary Applications ; Engineering, Electrical & Electronic
WOS Accession Number: WOS:000732986400013
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Citation Statistics
Times Cited (WOS): 11
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/17965
Collection: Journal Papers of the Institute of Computing Technology, Chinese Academy of Sciences (English)
Corresponding Author: Chen, Yunji
作者单位1.Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100864, Peoples R China
2.Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100864, Peoples R China
3.Cambricon Technol, Beijing 100191, Peoples R China
4.Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Beijing 100864, Peoples R China
Recommended Citation:
GB/T 7714
Song, Xinkai, Zhi, Tian, Fan, Zhe, et al. Cambricon-G: A Polyvalent Energy-Efficient Accelerator for Dynamic Graph Neural Networks[J]. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, 41(1): 116-128.
APA: Song, Xinkai, Zhi, Tian, Fan, Zhe, Zhang, Zhenxing, Zeng, Xi, ... & Chen, Yunji. (2022). Cambricon-G: A Polyvalent Energy-Efficient Accelerator for Dynamic Graph Neural Networks. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 41(1), 116-128.
MLA: Song, Xinkai, et al. "Cambricon-G: A Polyvalent Energy-Efficient Accelerator for Dynamic Graph Neural Networks." IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS 41.1 (2022): 116-128.
Files in This Item:
No files associated with this item.