CSpace

Browse/Search Results: 13 items in total, showing items 1-10

IVP: An Intelligent Video Processing Architecture for Video Streaming (Journal Article)
IEEE TRANSACTIONS ON COMPUTERS, 2023, Vol. 72, No. 1, pp. 264-277
Authors: Gao, Chengsi; Wang, Ying; Han, Yinhe; Chen, Weiwei; Zhang, Lei
Views/Downloads: 13/0 | Submitted: 2023/07/12
Keywords: Video enhancement; compressed video; DNN; approximate computing; optical flow; accelerator
Dadu-SV: Accelerate Stereo Vision Processing on NPU (Journal Article)
IEEE EMBEDDED SYSTEMS LETTERS, 2022, Vol. 14, No. 4, pp. 191-194
Authors: Min, Feng; Wang, Ying; Xu, Haobo; Huang, Junpei; Wang, Yujie; Zou, Xingqi; Lu, Meixuan; Han, Yinhe
Views/Downloads: 14/0 | Submitted: 2023/07/12
Keywords: Hardware acceleration; neural computing; neural processing unit (NPU); semiglobal matching (SGM); stereo vision
Amphis: Managing Reconfigurable Processor Architectures With Generative Adversarial Learning (Journal Article)
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, Vol. 41, No. 11, pp. 3993-4003
Authors: Chen, Weiwei; Wang, Ying; Xu, Ying; Gao, Chengsi; Han, Yinhe; Zhang, Lei
Views/Downloads: 14/0 | Submitted: 2023/07/12
Keywords: Resource management; Predictive models; Runtime; Generators; Generative adversarial networks; Computational modeling; Training; Design space exploration; generative adversarial network (GAN); reconfigurable processor
LINAC: A Spatially Linear Accelerator for Convolutional Neural Networks (Journal Article)
IEEE COMPUTER ARCHITECTURE LETTERS, 2022, Vol. 21, No. 1, pp. 29-32
Authors: Xiao, Hang; Xu, Haobo; Wang, Ying; Wang, Yujie; Han, Yinhe
Views/Downloads: 21/0 | Submitted: 2022/12/07
Keywords: Linear particle accelerator; Correlation; Kernel; Convolution; Linear regression; System-on-chip; Quantization (signal); Neural network; acceleration; convolution; linear regression; bit-sparsity
Thread: Towards fine-grained precision reconfiguration in variable-precision neural network accelerator (Journal Article)
IEICE ELECTRONICS EXPRESS, 2019, Vol. 16, No. 14, pp. 6
Authors: Zhang, Shichang; Wang, Ying; Chen, Xiaoming; Han, Yinhe; Wang, Yujie; Li, Xiaowei
Views/Downloads: 77/0 | Submitted: 2019/12/10
Keywords: DNN accelerator; variable bit-precision; bit-serial; bit-parallel; fine-grained precision
PIMSim: A Flexible and Detailed Processing-in-Memory Simulator (Journal Article)
IEEE COMPUTER ARCHITECTURE LETTERS, 2019, Vol. 18, No. 1, pp. 6-9
Authors: Xu, Sheng; Chen, Xiaoming; Wang, Ying; Han, Yinhe; Qian, Xuehai; Li, Xiaowei
Views/Downloads: 80/0 | Submitted: 2019/04/03
Keywords: Processing-in-memory; simulator; heterogeneous computing; memory system
A Low Overhead In-Network Data Compressor for the Memory Hierarchy of Chip Multiprocessors (Journal Article)
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2018, Vol. 37, No. 6, pp. 1265-1277
Authors: Wang, Ying; Li, Huawei; Han, Yinhe; Li, Xiaowei
Views/Downloads: 67/0 | Submitted: 2019/12/10
Keywords: Cache; chip multiprocessor (CMP); compression; memory hierarchy; network-on-chip (NoC)
STT-RAM Buffer Design for Precision-Tunable General-Purpose Neural Network Accelerator (Journal Article)
IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2017, Vol. 25, No. 4, pp. 1285-1296
Authors: Song, Lili; Wang, Ying; Han, Yinhe; Li, Huawei; Cheng, Yuanqing; Li, Xiaowei
Views/Downloads: 70/0 | Submitted: 2019/12/12
Keywords: Approximate computing; machine learning; neural network; spin torque transfer RAM (STT-RAM)
PSI Conscious Write Scheduling: Architectural Support for Reliable Power Delivery in 3-D Die-Stacked PCM (Journal Article)
IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2016, Vol. 24, No. 5, pp. 1613-1625
Authors: Wang, Ying; Han, Yinhe; Li, Huawei; Zhang, Lei; Cheng, Yuanqing; Li, Xiaowei
Views/Downloads: 53/0 | Submitted: 2019/12/13
Keywords: 3-D integration; IR-drop; phase-change memory (PCM); through-silicon-via (TSV); write throughput
VANUCA: Enabling Near-Threshold Voltage Operation in Large-Capacity Cache (Journal Article)
IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2016, Vol. 24, No. 3, pp. 858-870
Authors: Wang, Ying; Han, Yinhe; Li, Huawei; Li, Xiaowei
Views/Downloads: 35/0 | Submitted: 2019/12/13
Keywords: Cache design; fault tolerant; multi-V-dd; near-threshold voltage (NTV); nonuniform cache access (NUCA)