CSpace

Browse/Search Results: 4 items in total, showing 1-4

MJOA-MU: End-to-edge collaborative computation for DNN inference based on model uploading [Journal article]
COMPUTER NETWORKS, 2023, Volume: 231, Pages: 17
Authors: Yang, Huan; Sun, Sheng; Liu, Min; Zhang, Qiuping; Wang, Yuwei
Views/Downloads: 7/0 | Submitted: 2023/12/04
Keywords: DNN inference; Model uploading; DNN partitioning; Resource allocation
A Coordinated Model Pruning and Mapping Framework for RRAM-Based DNN Accelerators [Journal article]
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2023, Volume: 42, Issue: 7, Pages: 2364-2376
Authors: Qu, Songyun; Li, Bing; Zhao, Shixin; Zhang, Lei; Wang, Ying
Views/Downloads: 7/0 | Submitted: 2023/12/04
Keywords: AutoML; bit-pruning; deep neural networks (DNNs); resistive random access memory (RRAM)
Network Pruning for Bit-Serial Accelerators [Journal article]
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2023, Volume: 42, Issue: 5, Pages: 1597-1609
Authors: Zhao, Xiandong; Wang, Ying; Liu, Cheng; Shi, Cong; Tu, Kaijie; Zhang, Lei
Views/Downloads: 7/0 | Submitted: 2023/12/04
Keywords: AI accelerators; neural networks (NNs); NN compression
BitXpro: Regularity-Aware Hardware Runtime Pruning for Deep Neural Networks [Journal article]
IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2023, Volume: 31, Issue: 1, Pages: 90-103
Authors: Li, Hongyan; Lu, Hang; Wang, Haoxuan; Deng, Shengji; Li, Xiaowei
Views/Downloads: 13/0 | Submitted: 2023/07/12
Keywords: Deep learning accelerator; deep neural network (DNN); hardware runtime pruning