CSpace

Browse/Search results: 13 items in total, showing items 1-10

SceneHGN: Hierarchical Graph Networks for 3D Indoor Scene Generation With Fine-Grained Geometry (journal article)
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, Volume: 45, Issue: 7, Pages: 8902-8919
Authors:  Gao, Lin;  Sun, Jia-Mu;  Mo, Kaichun;  Lai, Yu-Kun;  Guibas, Leonidas J.;  Yang, Jie
Views/Downloads: 7/0  |  Submitted: 2023/12/04
3D indoor scene synthesis  deep generative model  fine-grained mesh generation  graph neural network  recursive neural network  relationship graphs  variational autoencoder
DyPipe: A Holistic Approach to Accelerating Dynamic Neural Networks with Dynamic Pipelining (journal article)
JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2023, Volume: 38, Issue: 4, Pages: 899-910
Authors:  Zhuang, Yi-Min;  Hu, Xing;  Chen, Xiao-Bing;  Zhi, Tian
Views/Downloads: 2/0  |  Submitted: 2024/05/20
dynamic neural network (NN)  deep neural network (DNN) accelerator  dynamic pipelining  
JBNN: A Hardware Design for Binarized Neural Networks Using Single-Flux-Quantum Circuits (journal article)
IEEE TRANSACTIONS ON COMPUTERS, 2022, Volume: 71, Issue: 12, Pages: 3203-3214
Authors:  Fu, Rongliang;  Huang, Junying;  Wu, Haibin;  Ye, Xiaochun;  Fan, Dongrui;  Ho, Tsung-Yi
Views/Downloads: 15/0  |  Submitted: 2023/07/12
Superconducting  single-flux-quantum  accelerator  binarized neural network  
Search-Free Inference Acceleration for Sparse Convolutional Neural Networks (journal article)
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, Volume: 41, Issue: 7, Pages: 2156-2169
Authors:  Liu, Bosheng;  Chen, Xiaoming;  Han, Yinhe;  Wu, Jigang;  Chang, Liang;  Liu, Peng;  Xu, Haobo
Views/Downloads: 24/0  |  Submitted: 2022/12/07
Internal interconnection  memory bandwidth  sparse accelerators  sparse convolution neural networks (CNNs)  
Synthesizing Mesh Deformation Sequences With Bidirectional LSTM (journal article)
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2022, Volume: 28, Issue: 4, Pages: 1906-1916
Authors:  Qiao, Yi-Ling;  Lai, Yu-Kun;  Fu, Hongbo;  Gao, Lin
Views/Downloads: 18/0  |  Submitted: 2022/12/07
Strain  Shape  Three-dimensional displays  Animation  Feature extraction  Machine learning  Computer architecture  Mesh deformation  mesh sequences  LSTM  deep learning  shape generation  
Rubik: A Hierarchical Architecture for Efficient Graph Neural Network Training (journal article)
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, Volume: 41, Issue: 4, Pages: 936-949
Authors:  Chen, Xiaobing;  Wang, Yuke;  Xie, Xinfeng;  Hu, Xing;  Basak, Abanti;  Liang, Ling;  Yan, Mingyu;  Deng, Lei;  Ding, Yufei;  Du, Zidong;  Xie, Yuan
Views/Downloads: 21/0  |  Submitted: 2022/12/07
Deep learning accelerator  graph neural network (GNN)  
Cambricon-G: A Polyvalent Energy-Efficient Accelerator for Dynamic Graph Neural Networks (journal article)
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, Volume: 41, Issue: 1, Pages: 116-128
Authors:  Song, Xinkai;  Zhi, Tian;  Fan, Zhe;  Zhang, Zhenxing;  Zeng, Xi;  Li, Wei;  Hu, Xing;  Du, Zidong;  Guo, Qi;  Chen, Yunji
Views/Downloads: 29/0  |  Submitted: 2022/06/21
Accelerator  architecture  graph neural networks (GNNs)  
A Decomposable Winograd Method for N-D Convolution Acceleration in Video Analysis (journal article)
INTERNATIONAL JOURNAL OF COMPUTER VISION, 2021, Pages: 21
Authors:  Huang, Di;  Zhang, Rui;  Zhang, Xishan;  Wu, Fan;  Wang, Xianzhuo;  Jin, Pengwei;  Liu, Shaoli;  Li, Ling;  Chen, Yunji
Views/Downloads: 35/0  |  Submitted: 2021/12/01
Convolution neural networks  Model acceleration  Winograd algorithm  Video analysis  
Neural Collaborative Preference Learning With Pairwise Comparisons (journal article)
IEEE TRANSACTIONS ON MULTIMEDIA, 2021, Volume: 23, Pages: 1977-1989
Authors:  Li, Zhaopeng;  Xu, Qianqian;  Jiang, Yangbangyan;  Ma, Ke;  Cao, Xiaochun;  Huang, Qingming
Views/Downloads: 26/0  |  Submitted: 2022/06/21
Recommender system  collaborative ranking  neural networks  preference ranking  
Swallow: A Versatile Accelerator for Sparse Neural Networks (journal article)
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2020, Volume: 39, Issue: 12, Pages: 4881-4893
Authors:  Liu, Bosheng;  Chen, Xiaoming;  Han, Yinhe;  Xu, Haobo
Views/Downloads: 28/0  |  Submitted: 2021/12/01
Accelerator  convolutional (Conv) layers  fully connected (FC) layers  sparse neural networks (SNNs)