CSpace

Browse/Search Results: 11 records in total, showing 1-10

SqueezeFlow: A Sparse CNN Accelerator Exploiting Concise Convolution Rules (Journal Article)
IEEE TRANSACTIONS ON COMPUTERS, 2019, Volume: 68, Issue: 11, Pages: 1663-1677
Authors: Li, Jiajun; Jiang, Shuhao; Gong, Shijun; Wu, Jingya; Yan, Junchao; Yan, Guihai; Li, Xiaowei
Views/Downloads: 41/0  |  Submitted: 2020/12/10
Keywords: Convolutional neural networks; accelerator architecture; hardware acceleration
Deep Hashing Based on VAE-GAN for Efficient Similarity Retrieval (Journal Article)
CHINESE JOURNAL OF ELECTRONICS, 2019, Volume: 28, Issue: 6, Pages: 1191-1197
Authors: Jin, Guoqing; Zhang, Yongdong; Lu, Ke
Views/Downloads: 42/0  |  Submitted: 2020/12/10
Keywords: file organisation; image retrieval; learning (artificial intelligence); neural nets; pairwise hashing learning; semantic preserving feature mapping model; adversarial generative process; image feature vector; hash codes; pairwise ranking loss; generative networks; VAE-GAN based hashing framework; content preserving images; similarity retrieval; variational autoencoder; generative adversarial network; Image retrieval; Learning to hash; Variational autoencoder (VAE); Generative adversarial network (GAN)
Addressing Sparsity in Deep Neural Networks (Journal Article)
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2019, Volume: 38, Issue: 10, Pages: 1858-1871
Authors: Zhou, Xuda; Du, Zidong; Zhang, Shijin; Zhang, Lei; Lan, Huiying; Liu, Shaoli; Li, Ling; Guo, Qi; Chen, Tianshi; Chen, Yunji
Views/Downloads: 257/0  |  Submitted: 2019/12/10
Keywords: Accelerator; architecture; deep neural networks (DNNs); sparsity
BSHIFT: A Low Cost Deep Neural Networks Accelerator (Journal Article)
INTERNATIONAL JOURNAL OF PARALLEL PROGRAMMING, 2019, Volume: 47, Issue: 3, Pages: 360-372
Authors: Yu, Yong; Zhi, Tian; Zhou, Xuda; Liu, Shaoli; Chen, Yunji; Cheng, Shuyao
Views/Downloads: 85/0  |  Submitted: 2019/08/16
Keywords: Deep neural networks; Low power; Lossless; Accelerator
Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks (Journal Article)
IEEE TRANSACTIONS ON COMPUTERS, 2019, Volume: 68, Issue: 6, Pages: 867-881
Authors: Lu, Wenyan; Yan, Guihai; Li, Jiajun; Gong, Shijun; Jiang, Shuhao; Wu, Jingya; Li, Xiaowei
Views/Downloads: 249/0  |  Submitted: 2019/08/16
Keywords: Convolutional neural networks; accelerator; architecture; parallelism; sparsity
Deep Representation Learning With Part Loss for Person Re-Identification (Journal Article)
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, Volume: 28, Issue: 6, Pages: 2860-2871
Authors: Yao, Hantao; Zhang, Shiliang; Hong, Richang; Zhang, Yongdong; Xu, Changsheng; Tian, Qi
Views/Downloads: 83/0  |  Submitted: 2019/08/16
Keywords: Person re-identification; representation learning; part loss networks; convolutional neural networks
moDNN: Memory Optimal Deep Neural Network Training on Graphics Processing Units (Journal Article)
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2019, Volume: 30, Issue: 3, Pages: 646-661
Authors: Chen, Xiaoming; Chen, Danny Ziyi; Han, Yinhe; Hu, Xiaobo Sharon
Views/Downloads: 303/0  |  Submitted: 2019/04/03
Keywords: Deep neural networks; graphics processing units; memory usage
DeepUbi: a deep learning framework for prediction of ubiquitination sites in proteins (Journal Article)
BMC BIOINFORMATICS, 2019, Volume: 20, Pages: 10
Authors: Fu, Hongli; Yang, Yingxi; Wang, Xiaobo; Wang, Hui; Xu, Yan
Views/Downloads: 108/0  |  Submitted: 2019/04/03
Keywords: Ubiquitination; Deep learning; Convolutional neural networks
CSCC: Convolution Split Compression Calculation Algorithm for Deep Neural Network (Journal Article)
IEEE ACCESS, 2019, Volume: 7, Pages: 71607-71615
Authors: Fan, Shengyu; Yu, Hui; Lu, Dianjie; Jiao, Shuai; Xu, Weizhi; Liu, Fangai; Liu, Zhiyong
Views/Downloads: 235/0  |  Submitted: 2019/08/16
Keywords: Convolutional neural network; sparse matrix vector multiplication; neural networks; convolution; sparse matrices
SynergyFlow: An Elastic Accelerator Architecture Supporting Batch Processing of Large-Scale Deep Neural Networks (Journal Article)
ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS, 2019, Volume: 24, Issue: 1, Pages: 27
Authors: Li, Jiajun; Yan, Guihai; Lu, Wenyan; Gong, Shijun; Jiang, Shuhao; Wu, Jingya; Li, Xiaowei
Views/Downloads: 70/0  |  Submitted: 2019/04/03
Keywords: Deep neural networks; convolutional neural networks; accelerator; architecture; resource utilization; complementary effect