An Accelerator for High Efficient Vision Processing
Du, Zidong1,2; Liu, Shaoli1; Fasthuber, Robert3; Chen, Tianshi1; Ienne, Paolo3; Li, Ling1; Luo, Tao1,2; Guo, Qi1; Feng, Xiaobing1; Chen, Yunji1,4; Temam, Olivier5
2017-02-01
Journal: IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS
ISSN: 0278-0070
Volume: 36, Issue: 2, Pages: 227-240
Abstract: In recent years, neural network accelerators have been shown to achieve both high energy efficiency and high performance for a broad application scope within the important category of recognition and mining applications. Still, both the energy efficiency and the performance of such accelerators remain limited by memory accesses. In this paper, we focus on image applications, arguably the most important category among recognition and mining applications. The state-of-the-art neural networks for these applications are convolutional neural networks (CNNs), and they have an important property: weights are shared among many neurons, considerably reducing the neural network's memory footprint. This property allows an entire CNN to be mapped within an SRAM, eliminating all DRAM accesses for weights. By further hoisting this accelerator next to the image sensor, it is possible to eliminate all remaining DRAM accesses, i.e., those for inputs and outputs. In this paper, we propose such a CNN accelerator, placed next to a CMOS or CCD sensor. The absence of DRAM accesses, combined with a careful exploitation of the specific data access patterns within CNNs, allows us to design a highly energy-efficient accelerator. We present a single-core implementation down to the layout at 65 nm, with a modest footprint of 5.94 mm² and consuming only 336 mW, yet still about 30x faster than high-end GPUs. For visual processing with higher resolution and frame-rate requirements, we further present a multicore implementation with elevated performance.
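The abstract's key observation is that convolutional kernels are shared across every spatial position of a feature map, so a CNN's total weight count is set by kernel sizes and the number of feature maps rather than by image resolution, which is what lets all weights fit in on-chip SRAM. The short Python sketch below illustrates that arithmetic with hypothetical layer dimensions (not taken from the paper), contrasting a convolutional layer with a fully connected layer covering the same input.

# Minimal sketch with hypothetical layer sizes (not taken from the paper): why
# weight sharing lets a CNN's weights fit in on-chip SRAM, while a fully
# connected layer over the same input would need gigabytes of DRAM storage.

BYTES_PER_WEIGHT = 2  # assume 16-bit fixed-point weights

def conv_weights(in_maps, out_maps, kernel_h, kernel_w):
    # Each (input map, output map) pair shares one kernel across every
    # spatial position, so the count is independent of image resolution.
    return in_maps * out_maps * kernel_h * kernel_w

def fc_weights(in_h, in_w, in_maps, out_neurons):
    # Every output neuron keeps a private weight for every input value.
    return in_h * in_w * in_maps * out_neurons

if __name__ == "__main__":
    # Hypothetical layer: 8 input maps of 64x64 pixels, 16 output maps, 5x5 kernels.
    conv_bytes = conv_weights(8, 16, 5, 5) * BYTES_PER_WEIGHT
    # Fully connected equivalent producing 16 maps of 60x60 output neurons.
    fc_bytes = fc_weights(64, 64, 8, 16 * 60 * 60) * BYTES_PER_WEIGHT
    print(f"convolutional layer: {conv_bytes / 1024:.1f} KiB of weights")  # ~6 KiB, fits in SRAM
    print(f"fully connected    : {fc_bytes / 2**30:.1f} GiB of weights")   # ~3.5 GiB, needs DRAM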
Keywords: Accelerator architectures; convolutional neural network; vision sensor
DOI: 10.1109/TCAD.2016.2584062
Indexed by: SCI
Language: English
Funding: NSF of China [61133004]; NSF of China [61303158]; NSF of China [61432016]; NSF of China [61472396]; NSF of China [61473275]; NSF of China [61522211]; NSF of China [61532016]; NSF of China [61521092]; 973 Program of China [XDA06010403]; 973 Program of China [XDB02040009]; International Collaboration Key Program of the CAS [171111KYSB20130002]; 10000 talent program
WOS Research Areas: Computer Science; Engineering
WOS Categories: Computer Science, Hardware & Architecture; Computer Science, Interdisciplinary Applications; Engineering, Electrical & Electronic
WOS Accession Number: WOS:000394682200003
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Times Cited (WOS): 5
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/7508
Collection: Institute of Computing Technology, Chinese Academy of Sciences - Journal Papers (English)
Corresponding Author: Du, Zidong
Author Affiliations:
1. Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Beijing 100049, Peoples R China
3. Ecole Polytech Fed Lausanne, CH-1015 Lausanne, Switzerland
4. Chinese Acad Sci, CAS Ctr Excellence Brain Sci, Beijing 100190, Peoples R China
5. INRIA, F-91120 Palaiseau, France
Recommended Citation:
GB/T 7714: Du, Zidong, Liu, Shaoli, Fasthuber, Robert, et al. An Accelerator for High Efficient Vision Processing[J]. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2017, 36(2): 227-240.
APA: Du, Zidong, Liu, Shaoli, Fasthuber, Robert, Chen, Tianshi, Ienne, Paolo, ... & Temam, Olivier. (2017). An Accelerator for High Efficient Vision Processing. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 36(2), 227-240.
MLA: Du, Zidong, et al. "An Accelerator for High Efficient Vision Processing". IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS 36.2 (2017): 227-240.
Files in This Item:
No files are associated with this item.