CSpace

Browse/Search results: 5 records in total, showing 1-5

Multi-Node Acceleration for Large-Scale GCNs (journal article)
IEEE TRANSACTIONS ON COMPUTERS, 2022, Vol. 71, No. 12, pp. 3140-3152
Authors: Sun, Gongjian; Yan, Mingyu; Wang, Duo; Li, Han; Li, Wenming; Ye, Xiaochun; Fan, Dongrui; Xie, Yuan
Views/Downloads: 26/0 | Submitted: 2023/07/12
Keywords: Deep learning; graph neural network; hardware accelerator; multi-node system; communication optimization
JBNN: A Hardware Design for Binarized Neural Networks Using Single-Flux-Quantum Circuits (journal article)
IEEE TRANSACTIONS ON COMPUTERS, 2022, Vol. 71, No. 12, pp. 3203-3214
Authors: Fu, Rongliang; Huang, Junying; Wu, Haibin; Ye, Xiaochun; Fan, Dongrui; Ho, Tsung-Yi
Views/Downloads: 15/0 | Submitted: 2023/07/12
Keywords: Superconducting; single-flux-quantum; accelerator; binarized neural network
A synergistic reinforcement learning-based framework design in driving automation (journal article)
COMPUTERS & ELECTRICAL ENGINEERING, 2022, Vol. 101, Pages: 15
Authors: Qi, Yuqiong; Hu, Yang; Wu, Haibin; Li, Shen; Ye, Xiaochun; Fan, Dongrui
Views/Downloads: 27/0 | Submitted: 2022/12/07
Keywords: Autonomous Driving; Heterogeneous Multicore AI Accelerator; Criteria; Reinforcement Learning; Scheduling
Hardware Acceleration for GCNs via Bidirectional Fusion (journal article)
IEEE COMPUTER ARCHITECTURE LETTERS, 2021, Vol. 20, No. 1, Pages: 4
Authors: Li, Han; Yan, Mingyu; Yang, Xiaocheng; Deng, Lei; Li, Wenming; Ye, Xiaochun; Fan, Dongrui; Xie, Yuan
Views/Downloads: 36/0 | Submitted: 2021/12/01
Keywords: Random access memory; Computational modeling; Analytical models; Hardware; Engines; Computer architecture; Transforms; Graph convolutional neural networks; hardware accelerator; bidirectional execution; inter-phase fusion
An efficient dataflow accelerator for scientific applications (journal article)
FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2020, Vol. 112, pp. 580-588
Authors: Ye, Xiaochun; Tan, Xu; Wu, Meng; Feng, Yujing; Wang, Da; Zhang, Hao; Pei, Songwen; Fan, Dongrui
Views/Downloads: 219/0 | Submitted: 2020/12/10
Keywords: Dataflow architecture; Scientific computing; Instruction level parallelism