DianNao Family: Energy-Efficient Hardware Accelerators for Machine Learning
Chen, Yunji1; Chen, Tianshi1; Xu, Zhiwei1; Sun, Ninghui1; Temam, Olivier2
2016-11-01
Journal: COMMUNICATIONS OF THE ACM
ISSN: 0001-0782
Volume: 59, Issue: 11, Pages: 105-112
Abstract: Machine Learning (ML) tasks are becoming pervasive in a broad range of applications and in a broad range of systems (from embedded systems to data centers). As computer architectures evolve toward heterogeneous multi-cores composed of a mix of cores and hardware accelerators, designing hardware accelerators for ML techniques can simultaneously achieve high efficiency and broad application scope. While efficient computational primitives are important for a hardware accelerator, inefficient memory transfers can potentially void the throughput, energy, or cost advantages of accelerators (an Amdahl's law effect); thus, memory transfers should become a first-order concern, just as in processors, rather than an element factored into accelerator design as a second step. In this article, we introduce a series of hardware accelerators (i.e., the DianNao family) designed for ML (especially neural networks), with a special emphasis on the impact of memory on accelerator design, performance, and energy. We show that, on a number of representative neural network layers, it is possible to achieve a speedup of 450.65x over a GPU and reduce energy by 150.31x on average for a 64-chip DaDianNao system (a member of the DianNao family).
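The abstract's Amdahl's-law point, that un-accelerated memory transfers bound the end-to-end gain no matter how fast the arithmetic becomes, can be made concrete with a short calculation. The sketch below is illustrative only; the time fractions and compute speedups are hypothetical assumptions, not figures from the paper.

# Minimal sketch (not from the paper): Amdahl's-law-style estimate of how
# un-accelerated memory transfers limit the end-to-end gain of an accelerator.
# The fractions and speedups below are illustrative assumptions, not measured values.

def effective_speedup(compute_fraction: float, compute_speedup: float) -> float:
    """Overall speedup when only the compute fraction of runtime is accelerated;
    the remaining (memory-transfer) fraction runs at its original speed."""
    memory_fraction = 1.0 - compute_fraction
    return 1.0 / (memory_fraction + compute_fraction / compute_speedup)

if __name__ == "__main__":
    # Hypothetical layer where 80% of time is arithmetic and 20% is off-chip traffic:
    # even an "infinite" compute speedup is capped at 1 / 0.2 = 5x overall.
    for s in (10, 100, 1000):
        print(f"compute accelerated {s:>4}x -> overall {effective_speedup(0.8, s):.2f}x")

Under these assumed numbers, a 1000x compute speedup still yields less than 5x overall, which is why the article places such emphasis on treating memory as a first-order concern in accelerator design.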
DOI: 10.1145/2996864
Indexed by: SCI
Language: English
Funding: NSF of China [61133004]; NSF of China [61303158]; NSF of China [61432016]; NSF of China [61472396]; NSF of China [61473275]; NSF of China [61522211]; NSF of China [61532016]; NSF of China [61521092]; 973 Program of China [2015CB358800]; Strategic Priority Research Program of the CAS [XDA06010403]; Strategic Priority Research Program of the CAS [XDB02040009]; International Collaboration Key Program of the CAS [171111KYS-B20130002]; 10,000 Talent Program, a Google Faculty Research Award; Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI)
WOS Research Area: Computer Science
WOS Categories: Computer Science, Hardware & Architecture; Computer Science, Software Engineering; Computer Science, Theory & Methods
WOS Accession Number: WOS:000387897700028
Publisher: ASSOC COMPUTING MACHINERY
Citation Statistics
Times Cited (WOS): 122
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/7912
Collection: Journal Papers of the Institute of Computing Technology, Chinese Academy of Sciences (English)
Corresponding Author: Chen, Yunji
Affiliations:
1. Chinese Acad Sci, ICT, Beijing, Peoples R China
2. Inria Saclay, Palaiseau, France
Recommended Citation
GB/T 7714
Chen, Yunji, Chen, Tianshi, Xu, Zhiwei, et al. DianNao Family: Energy-Efficient Hardware Accelerators for Machine Learning[J]. COMMUNICATIONS OF THE ACM, 2016, 59(11): 105-112.
APA: Chen, Yunji, Chen, Tianshi, Xu, Zhiwei, Sun, Ninghui, & Temam, Olivier. (2016). DianNao Family: Energy-Efficient Hardware Accelerators for Machine Learning. COMMUNICATIONS OF THE ACM, 59(11), 105-112.
MLA: Chen, Yunji, et al. "DianNao Family: Energy-Efficient Hardware Accelerators for Machine Learning". COMMUNICATIONS OF THE ACM 59.11 (2016): 105-112.
Files in This Item:
There are no files associated with this item.