CSpace

Browse/Search Results: 17 items in total, showing 1-10

PIMCOMP: An End-to-End DNN Compiler for Processing-In-Memory Accelerators (Journal article)
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2025, Vol. 44, No. 5, pp. 1745-1759
Authors: Sun, Xiaotian; Wang, Xinyu; Li, Wanqian; Han, Yinhe; Chen, Xiaoming
Submitted: 2025/06/25
Keywords: Hardware; Optimization; Artificial neural networks; Pipelines; Parallel processing; Biological system modeling; Resource management; Adaptation models; Scheduling; Memory management; Deep neural network (DNN); end-to-end compiler; processing-in-memory (PIM) accelerator; system-level optimization
VastPipe: A High-Throughput Inference System via Adaptive Space-Division Multiplexing for Diverse Accelerators (Journal article)
JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2025, Vol. 40, No. 2, pp. 444-463
Authors: Ma, Li-Xian; Wang, Le-Ping; Shao, En; Cao, Rong-Yu; Tan, Guang-Ming
Submitted: 2025/06/25
Keywords: cluster scheduling; resource management; reinforcement learning; DNN accelerator
Collaborative non-chain DNN inference with multi-device based on layer parallel (Journal article)
DIGITAL COMMUNICATIONS AND NETWORKS, 2024, Vol. 10, No. 6, pp. 1748-1759
Authors: Zhang, Qiuping; Sun, Sheng; Luo, Junjie; Liu, Min; Li, Zhongcheng; Yang, Huan; Wang, Yuwei
Submitted: 2025/06/25
Keywords: Collaborative DNN inference; Multi-device collaboration; Non-chain DNN model
Advancements in Accelerating Deep Neural Network Inference on AIoT Devices: A Survey (Journal article)
IEEE TRANSACTIONS ON SUSTAINABLE COMPUTING, 2024, Vol. 9, No. 6, pp. 830-847
Authors: Cheng, Long; Gu, Yan; Liu, Qingzhi; Yang, Lei; Liu, Cheng; Wang, Ying
Submitted: 2025/06/25
Keywords: Computational modeling; Hardware; Artificial neural networks; Optimization; Internet of Things; Adaptation models; Data models; AIoT devices; DNN inference; model compression; parallel computing; performance optimization; survey
General Purpose Deep Learning Accelerator Based on Bit Interleaving (Journal article)
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2024, Vol. 43, No. 5, pp. 1470-1483
Authors: Chang, Liang; Lu, Hang; Li, Chenglong; Zhao, Xin; Hu, Zhicheng; Zhou, Jun; Li, Xiaowei
Submitted: 2024/12/06
Keywords: Synchronization; Parallel processing; Computational modeling; Training; Pragmatics; Power demand; Hardware acceleration; Accelerator; bit-level sparsity; deep neural network (DNN)
Mortar-FP8: Morphing the Existing FP32 Infrastructure for High-Performance Deep Learning Acceleration (Journal article)
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2024, Vol. 43, No. 3, pp. 878-891
Authors: Li, Hongyan; Lu, Hang; Li, Xiaowei
Submitted: 2024/05/20
Keywords: Deep learning accelerator; deep neural network (DNN); fp8 format
AKGF: Automatic Kernel Generation for DNN on CPU-FPGA (Journal article)
COMPUTER JOURNAL, 2023, Pages: 9
Authors: Dong, Dong; Jiang, Hongxu; Diao, Boyu
Submitted: 2023/12/04
Keywords: DNN accelerated compilers; polyhedral model; heterogeneous computing; CPU-FPGA
MJOA-MU: End-to-edge collaborative computation for DNN inference based on model uploading (Journal article)
COMPUTER NETWORKS, 2023, Vol. 231, Pages: 17
Authors: Yang, Huan; Sun, Sheng; Liu, Min; Zhang, Qiuping; Wang, Yuwei
Submitted: 2023/12/04
Keywords: DNN inference; Model uploading; DNN partitioning; Resource allocation
DyPipe: A Holistic Approach to Accelerating Dynamic Neural Networks with Dynamic Pipelining (Journal article)
JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2023, Vol. 38, No. 4, pp. 899-910
Authors: Zhuang, Yi-Min; Hu, Xing; Chen, Xiao-Bing; Zhi, Tian
Submitted: 2024/05/20
Keywords: dynamic neural network (NN); deep neural network (DNN) accelerator; dynamic pipelining
BitXpro: Regularity-Aware Hardware Runtime Pruning for Deep Neural Networks (Journal article)
IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2023, Vol. 31, No. 1, pp. 90-103
Authors: Li, Hongyan; Lu, Hang; Wang, Haoxuan; Deng, Shengji; Li, Xiaowei
Submitted: 2023/07/12
Keywords: Deep learning accelerator; deep neural network (DNN); hardware runtime pruning