Institute of Computing Technology, Chinese Academy of Sciences Institutional Repository
Leveraging maximum entropy and correlation on latent factors for learning representations
He, Zhicheng1; Liu, Jie1; Dang, Kai1; Zhuang, Fuzhen2,3; Huang, Yalou4
2020-11-01
Journal | NEURAL NETWORKS
ISSN | 0893-6080 |
Volume | 131
Pages | 312-323
Abstract | Many tasks involve learning representations from matrices, and Non-negative Matrix Factorization (NMF) has been widely used due to its excellent interpretability. Through factorization, sample vectors are reconstructed as additive combinations of latent factors, which are represented as non-negative distributions over the raw input features. NMF models are significantly affected by the distribution characteristics of the latent factors and the correlations among them, and they face the challenge of learning robust latent factors. To this end, we propose to learn representations with an awareness of semantic quality, evaluated from the aspects of intra- and inter-factor properties. On the one hand, a Maximum Entropy-based function is devised for intra-factor semantic quality. On the other hand, semantic uniqueness is evaluated via inter-factor correlation, which reinforces the aim of semantic compactness. Moreover, we present a novel non-linear NMF framework. The learning algorithm is presented and its convergence is theoretically analyzed and proved. Extensive experimental results on multiple datasets demonstrate that our method can be successfully applied to representative NMF models and boosts performance over state-of-the-art models. (C) 2020 Elsevier Ltd. All rights reserved.
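As a rough illustration of the ideas described in the abstract (not the paper's actual formulation), the sketch below adds two regularizers to a plain linear NMF objective: a maximum-entropy term on each latent factor's distribution over features (intra-factor quality) and a penalty on inter-factor correlation (semantic uniqueness). The function name entropy_corr_nmf, the weights lambda_ent and lambda_corr, and the projected-gradient updates are illustrative assumptions; the paper's non-linear framework and its convergence-guaranteed updates are not reproduced here.

```python
# Minimal sketch: NMF with maximum-entropy and inter-factor correlation
# regularizers, optimized by projected gradient descent (an assumption for
# illustration, not the authors' algorithm).
import numpy as np

def entropy_corr_nmf(V, k, lambda_ent=0.1, lambda_corr=0.1,
                     lr=1e-3, n_iter=2000, eps=1e-12, seed=0):
    """Factorize V (n_samples x n_features) ~= H @ W with H >= 0, W >= 0."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    H = rng.random((n, k))          # sample representations (n x k)
    W = rng.random((k, m))          # latent factors over features (k x m)

    for _ in range(n_iter):
        R = H @ W - V               # residual of 0.5 * ||V - H W||_F^2

        grad_H = R @ W.T
        grad_W = H.T @ R

        # Maximum-entropy term: for each factor k, minimize the negative
        # entropy sum_j p_kj * log p_kj of its normalized distribution p_k
        # over features, so no factor collapses onto a few features.
        row_sum = W.sum(axis=1, keepdims=True) + eps
        P = W / row_sum
        neg_ent = np.sum(P * np.log(P + eps), axis=1, keepdims=True)
        grad_W += lambda_ent * (np.log(P + eps) - neg_ent) / row_sum

        # Inter-factor correlation penalty: gradient of
        # (lambda_corr / 2) * sum_{i != j} (w_i . w_j)^2,
        # which discourages redundant (highly correlated) factors.
        G = W @ W.T
        np.fill_diagonal(G, 0.0)
        grad_W += 2.0 * lambda_corr * (G @ W)

        # Projected gradient step keeps both matrices non-negative.
        H = np.maximum(H - lr * grad_H, 0.0)
        W = np.maximum(W - lr * grad_W, 0.0)
    return H, W

if __name__ == "__main__":
    V = np.random.default_rng(1).random((100, 50))   # non-negative toy data
    H, W = entropy_corr_nmf(V, k=8)
    print("reconstruction error:", np.linalg.norm(V - H @ W))
```

Both regularizers act only on W (the latent factors), mirroring the abstract's focus on intra- and inter-factor semantics; the weights lambda_ent and lambda_corr would in practice be tuned per dataset.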
Keywords | Non-negative Matrix Factorization; Maximum entropy; Correlated latent factor learning
DOI | 10.1016/j.neunet.2020.07.027 |
Indexed By | SCI
Language | English
Funding Project | National Natural Science Foundation of China [61976119]; Science and Technology Planning Project of Tianjin, China [18ZXZNGX00310]
WOS Research Area | Computer Science; Neurosciences & Neurology
WOS Subject | Computer Science, Artificial Intelligence; Neurosciences
WOS Accession Number | WOS:000581746300025
Publisher | PERGAMON-ELSEVIER SCIENCE LTD
Document Type | Journal Article
Identifier | http://119.78.100.204/handle/2XEOYT63/15475
Collection | Institute of Computing Technology, CAS: Journal Articles (English)
Corresponding Author | Liu, Jie
Affiliation | 1. Nankai Univ, Coll Artificial Intelligence, Tianjin, Peoples R China; 2. Chinese Acad Sci, Xiamen Data Intelligence Acad ICT, Xiamen, Peoples R China; 3. Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing, Peoples R China; 4. Nankai Univ, Coll Software, Tianjin, Peoples R China
Recommended Citation (GB/T 7714) | He, Zhicheng, Liu, Jie, Dang, Kai, et al. Leveraging maximum entropy and correlation on latent factors for learning representations[J]. NEURAL NETWORKS, 2020, 131: 312-323.
APA | He, Zhicheng, Liu, Jie, Dang, Kai, Zhuang, Fuzhen, & Huang, Yalou. (2020). Leveraging maximum entropy and correlation on latent factors for learning representations. NEURAL NETWORKS, 131, 312-323.
MLA | He, Zhicheng, et al. "Leveraging maximum entropy and correlation on latent factors for learning representations". NEURAL NETWORKS 131 (2020): 312-323.
Files in This Item | There are no files associated with this item.