Institute of Computing Technology, Chinese Academy of Sciences Institutional Repository (IR)
Learning deep representations via extreme learning machines
Yu, Wenchao (1,2); Zhuang, Fuzhen (1,2); He, Qing (1); Shi, Zhongzhi (1)
2015-02-03
Journal | NEUROCOMPUTING
ISSN | 0925-2312 |
Volume | 149
Pages | 308-315
Abstract | The extreme learning machine (ELM) is an emerging technology that has achieved exceptional performance in large-scale settings and is well suited to binary and multi-class classification as well as regression tasks. However, existing ELMs and their variants predominantly employ single-hidden-layer feedforward networks, leaving the popular and potentially powerful stacked generalization principle unexploited for learning predictive deep representations of input data. Deep architectures can find higher-level representations and thus can potentially capture relevant higher-level abstractions, but most current deep learning methods require solving a difficult, non-convex optimization problem. In this paper, we propose a stacked model, DrELM, which learns deep representations via extreme learning machines following the stacked generalization philosophy. The proposed model uses ELM as the base building block and incorporates random shift and kernelization as stacking elements. Specifically, in each layer, DrELM integrates a random projection of the predictions obtained by ELM into the original features, and then applies kernel functions to generate the resultant features. To verify the classification and regression performance of DrELM, we conduct experiments on both synthetic and real-world data sets. The experimental results show that DrELM outperforms ELM and kernel ELMs, which appears to demonstrate that DrELM yields predictive features that are suitable for prediction tasks. The performance of deep models (i.e., stacked auto-encoders) is comparable; however, owing to the use of ELM, DrELM is easier to learn and faster in testing. (C) 2014 Elsevier B.V. All rights reserved.
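The following is a minimal sketch of the layer-wise stacking idea as described in the abstract only: an ELM is trained on the current features, its predictions are randomly projected back onto the input space as a "random shift" of the original features, and a nonlinearity stands in for the paper's kernel functions. Function and parameter names (elm_train, drelm_layer, alpha, n_hidden) and the choice of tanh are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def elm_train(X, T, n_hidden=200, reg=1e-3, rng=None):
    """Basic single-hidden-layer ELM: random input weights and bias,
    ridge-regression readout. T must be 2-D (e.g., one-hot targets)."""
    rng = np.random.default_rng(rng)
    W_in = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W_in + b)  # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W_in, b, beta

def elm_predict(X, W_in, b, beta):
    return np.tanh(X @ W_in + b) @ beta

def drelm_layer(X, T, alpha=0.1, rng=None):
    """One hypothetical DrELM stacking step (assumed form): train an ELM,
    project its predictions with a random matrix, add the projection to the
    original features, and apply a nonlinearity in place of the kernel
    functions mentioned in the abstract."""
    rng = np.random.default_rng(rng)
    W_in, b, beta = elm_train(X, T, rng=rng)
    Y_hat = elm_predict(X, W_in, b, beta)            # layer-wise predictions
    W_rand = rng.standard_normal((T.shape[1], X.shape[1]))
    X_next = np.tanh(X + alpha * Y_hat @ W_rand)     # random shift + nonlinearity
    return X_next, (W_in, b, beta, W_rand)

# Usage sketch: stack several layers, then train a final ELM on the deepest
# representation for the actual prediction task.
# for _ in range(n_layers):
#     X, params = drelm_layer(X, T)
```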
Keywords | Extreme learning machine; Deep learning; Representation learning; Stacked ELMs; Stacked generalization; DrELM
DOI | 10.1016/j.neucom.2014.03.077 |
Indexed By | SCI
Language | English
Funding Project | National Natural Science Foundation of China [61175052]; National Natural Science Foundation of China [61203297]; National Natural Science Foundation of China [60933004]; National Natural Science Foundation of China [61035003]; National High-tech R&D Program of China (863 Program) [2013AA01A606]; National High-tech R&D Program of China (863 Program) [2012AA011003]; National Program on Key Basic Research Project (973 Program) [2013CB329502]
WOS Research Area | Computer Science
WOS Category | Computer Science, Artificial Intelligence
WOS ID | WOS:000360028800037
Publisher | ELSEVIER SCIENCE BV
Document Type | Journal article
Identifier | http://119.78.100.204/handle/2XEOYT63/9399
Collection | Journal Articles of the Institute of Computing Technology, CAS (English)
Corresponding Author | Yu, Wenchao
Affiliation | 1. Chinese Acad Sci, Key Lab Intelligent Informat Proc, Inst Comp Technol, Beijing 100190, Peoples R China; 2. Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Recommended Citation (GB/T 7714) | Yu, Wenchao, Zhuang, Fuzhen, He, Qing, et al. Learning deep representations via extreme learning machines[J]. NEUROCOMPUTING, 2015, 149: 308-315.
APA | Yu, Wenchao, Zhuang, Fuzhen, He, Qing, & Shi, Zhongzhi. (2015). Learning deep representations via extreme learning machines. NEUROCOMPUTING, 149, 308-315.
MLA | Yu, Wenchao, et al. "Learning deep representations via extreme learning machines". NEUROCOMPUTING 149 (2015): 308-315.
Files in This Item | There are no files associated with this item.