Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding
Zhu, Yunchang1,2; Pang, Liang1; Wu, Kangxi1,2; Lan, Yanyan3; Shen, Huawei1,2; Cheng, Xueqi1,2
2024-09-01
Journal: ACM TRANSACTIONS ON INFORMATION SYSTEMS
ISSN: 1046-8188
Volume: 42, Issue: 5, Pages: 29
Abstract: Current natural language understanding (NLU) models have been continuously scaling up, both in terms of model size and input context, introducing more hidden and input neurons. While this generally improves performance on average, the extra neurons do not yield a consistent improvement for all instances. This is because some hidden neurons are redundant, and the noise mixed in input neurons tends to distract the model. Previous work mainly focuses on extrinsically reducing low-utility neurons by additional post- or pre-processing, such as network pruning and context selection, to avoid this problem. Beyond that, can we make the model reduce redundant parameters and suppress input noise by intrinsically enhancing the utility of each neuron? If a model can efficiently utilize neurons, no matter which neurons are ablated (disabled), the ablated submodel should perform no better than the original full model. Based on such a comparison principle between models, we propose a cross-model comparative loss for a broad range of tasks. Comparative loss is essentially a ranking loss on top of the task-specific losses of the full and ablated models, with the expectation that the task-specific loss of the full model is minimal. We demonstrate the universal effectiveness of comparative loss through extensive experiments on 14 datasets from three distinct NLU tasks based on five widely used pre-trained language models, and find it particularly superior for models with few parameters or long input.
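As a rough illustration of the idea described in the abstract, below is a minimal, hypothetical PyTorch-style sketch of such a comparative loss: a hinge-style ranking term that penalizes the full model whenever one of its ablated submodels attains a lower task-specific loss. The exact formulation in the paper may differ; the function and argument names here are illustrative only.

import torch

def comparative_loss(task_loss_full, task_losses_ablated, margin=0.0):
    # task_loss_full: scalar task-specific loss of the full model
    # task_losses_ablated: iterable of scalar losses from ablated submodels
    # Hinge-style ranking term: positive whenever an ablated submodel
    # achieves a lower task loss than the full model (plus a margin).
    ranking = sum(torch.relu(task_loss_full - loss_abl + margin)
                  for loss_abl in task_losses_ablated)
    # Train on the full model's task loss plus the ranking penalty.
    return task_loss_full + ranking

In such a sketch, the ablated submodels could be obtained by masking subsets of hidden neurons (e.g., via dropout-style masks) or of input tokens, and evaluated with the same task-specific loss on the same batch.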
Keywords: Natural language understanding; question answering; pseudo-relevance feedback; loss function
DOI: 10.1145/3652599
Indexed By: SCI
Language: English
Funding Projects: National Key R&D Program of China [2022YFB3103700]; National Key R&D Program of China [2022YFB3103704]; National Natural Science Foundation of China (NSFC) [62276248]; National Natural Science Foundation of China (NSFC) [U21B2046]; Youth Innovation Promotion Association CAS [2023111]
WOS Research Area: Computer Science
WOS Subject Category: Computer Science, Information Systems
WOS Accession Number: WOS:001253867000011
Publisher: ASSOC COMPUTING MACHINERY
Citation Statistics
Times Cited (WOS): 1
Document Type: Journal article
Item Identifier: http://119.78.100.204/handle/2XEOYT63/39843
Collection: Journal Papers of the Institute of Computing Technology, Chinese Academy of Sciences (English)
Corresponding Author: Pang, Liang
Author Affiliations:
1. Chinese Acad Sci, Inst Comp Technol, CAS Key Lab AI Secur, 6 Kexueyuan South Rd, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, 6 Kexueyuan South Rd, Beijing 100190, Peoples R China
3. Tsinghua Univ, Inst AI Ind Res, 30 Shuangqing Rd, Beijing 100084, Peoples R China
Recommended Citation:
GB/T 7714
Zhu, Yunchang, Pang, Liang, Wu, Kangxi, et al. Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding[J]. ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2024, 42(5): 29.
APA Zhu, Yunchang, Pang, Liang, Wu, Kangxi, Lan, Yanyan, Shen, Huawei, & Cheng, Xueqi. (2024). Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding. ACM TRANSACTIONS ON INFORMATION SYSTEMS, 42(5), 29.
MLA Zhu, Yunchang, et al. "Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding". ACM TRANSACTIONS ON INFORMATION SYSTEMS 42.5 (2024): 29.
Files in This Item:
No files associated with this item.