SynergyFlow: An Elastic Accelerator Architecture Supporting Batch Processing of Large-Scale Deep Neural Networks
Li, Jiajun1,2; Yan, Guihai1,2; Lu, Wenyan1,2; Gong, Shijun1,2; Jiang, Shuhao1,2; Wu, Jingya1,2; Li, Xiaowei1,2
2019
Journal: ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS
ISSN: 1084-4309
Volume: 24; Issue: 1; Pages: 27
Abstract: Neural networks (NNs) have achieved great success in a broad range of applications. As NN-based methods are often both computation and memory intensive, accelerator solutions have proved highly promising in terms of both performance and energy efficiency. Although prior solutions can deliver high computational throughput for convolutional layers, they can incur severe performance degradation when accommodating the entire network model, because computing and memory bandwidth requirements differ widely between convolutional layers and fully connected layers and, furthermore, among different NN models. To overcome this problem, we propose an elastic accelerator architecture, called SynergyFlow, which intrinsically supports layer-level and model-level parallelism for large-scale deep neural networks. SynergyFlow boosts resource utilization by exploiting the complementary resource demands of different layers and different NN models. SynergyFlow can dynamically reconfigure itself according to workload characteristics, maintaining high performance and high resource utilization across various models. As a case study, we implement SynergyFlow on a P395-AB FPGA board. At a 100 MHz working frequency, our implementation improves performance by 33.8% on average (up to 67.2% on AlexNet) compared to comparably provisioned previous architectures.
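The complementary effect the abstract exploits can be illustrated with a toy roofline-style model: convolutional layers are compute-bound while fully connected layers are bandwidth-bound, so overlapping them across batched inputs keeps both resources busy. This is only a sketch of the general idea; the peak rates and per-layer demands below are invented for illustration and are not taken from the paper.

```python
# Toy model of the "complementary effect": conv layers are compute-bound,
# fully connected (FC) layers are bandwidth-bound, so overlapping them
# across batched inputs raises utilization of both resources.
# All numbers are illustrative, not measurements from SynergyFlow.

PEAK_OPS = 100.0   # hypothetical peak compute (Gop/s)
PEAK_BW = 10.0     # hypothetical peak memory bandwidth (GB/s)

# (compute in Gop, memory traffic in GB) per layer -- made-up figures
conv = (50.0, 1.0)  # compute-heavy, little memory traffic
fc = (2.0, 4.0)     # little compute, memory-heavy

def layer_time(ops, gb):
    """Time for one layer run in isolation: bound by the scarcer resource."""
    return max(ops / PEAK_OPS, gb / PEAK_BW)

# Serial execution: run conv then fc; each phase leaves one resource idle.
serial = layer_time(*conv) + layer_time(*fc)

# Overlapped execution: the conv of one input runs alongside the fc of
# another, so only the aggregate demand on each resource limits throughput.
overlap = max((conv[0] + fc[0]) / PEAK_OPS, (conv[1] + fc[1]) / PEAK_BW)

print(f"serial: {serial:.2f} s, overlapped: {overlap:.2f} s, "
      f"speedup: {serial / overlap:.2f}x")
# -> serial: 0.90 s, overlapped: 0.52 s, speedup: 1.73x
```

With these numbers the serial schedule wastes memory bandwidth during the conv phase and compute during the FC phase, while the overlapped schedule runs both near their limits, which is the intuition behind batching complementary layers on one accelerator.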
Keywords: Deep neural networks; convolutional neural networks; accelerator architecture; resource utilization; complementary effect
DOI: 10.1145/3275243
Indexed by: SCI
Language: English
Funding: National Natural Science Foundation of China [61572470, 61872336, 61532017, 61432017, 61521092, 61376043]; Youth Innovation Promotion Association, CAS [Y404441000]
WOS Research Area: Computer Science
WOS Categories: Computer Science, Hardware & Architecture; Computer Science, Software Engineering
WOS ID: WOS:000455951700008
Publisher: ASSOC COMPUTING MACHINERY
Citation Statistics: cited 1 time [WOS]
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/3471
Collection: Journal Articles of the Institute of Computing Technology, CAS (English)
Corresponding Authors: Yan, Guihai; Li, Xiaowei
Affiliations: 1. Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, 6 Kexueyuan South Rd, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Beijing, Peoples R China
Recommended Citation:
GB/T 7714: Li, Jiajun, Yan, Guihai, Lu, Wenyan, et al. SynergyFlow: An Elastic Accelerator Architecture Supporting Batch Processing of Large-Scale Deep Neural Networks[J]. ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS, 2019, 24(1): 27.
APA: Li, Jiajun, Yan, Guihai, Lu, Wenyan, Gong, Shijun, Jiang, Shuhao, ... & Li, Xiaowei. (2019). SynergyFlow: An Elastic Accelerator Architecture Supporting Batch Processing of Large-Scale Deep Neural Networks. ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS, 24(1), 27.
MLA: Li, Jiajun, et al. "SynergyFlow: An Elastic Accelerator Architecture Supporting Batch Processing of Large-Scale Deep Neural Networks". ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS 24.1 (2019): 27.
Files in This Item: No files associated with this item.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.