STP: Special token prompt for parameter-efficient tuning of pre-trained language models
Yan, Yaoyao1; Yu, Hui2; Wang, Da3,4; Ye, Jing3,4,5; Liu, Fang'ai1; Xu, Weizhi1,4
2025-07-25
Journal: EXPERT SYSTEMS WITH APPLICATIONS
ISSN: 0957-4174
Volume: 284; Pages: 10
Abstract: Fine-tuning has become the standard method for adapting large pre-trained language models to specific downstream tasks. However, full fine-tuning requires updating all model parameters, which is not only computationally expensive but also prone to catastrophic forgetting, compromising the knowledge acquired during pre-training. In this work, we propose Special Token Prompt (STP), a method that automatically generates prompts by combining task-specific information and input data using special tokens. By analyzing the attention weight distribution of the model, we introduce different special token prompts at different Transformer layers. During fine-tuning, we update only the special token prompts while keeping all other parameters of the language model frozen. Through this approach, the model can effectively propagate information to the other tokens during the forward pass. On the GLUE benchmark, we achieve performance comparable to full fine-tuning while updating only 0.009% to 0.011% of the parameters of BERT-base and 0.011% to 0.015% of RoBERTa-base.
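Note: the following is a minimal, illustrative sketch of the general idea described in the abstract (trainable prompt tokens attached to a frozen backbone), written in PyTorch against the Hugging Face transformers API. It is not the authors' implementation: the paper's key step of placing different special token prompts at different Transformer layers according to the attention weight distribution is not reproduced here, and names such as PromptTunedEncoder and num_prompt_tokens are illustrative assumptions. For scale, 0.009% to 0.011% of BERT-base's roughly 110 million parameters is on the order of 10^4 trainable values, about what a dozen or so 768-dimensional prompt vectors occupy.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class PromptTunedEncoder(nn.Module):
    """Prepends trainable prompt embeddings to a frozen pre-trained encoder."""

    def __init__(self, model_name="bert-base-uncased", num_prompt_tokens=16):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        # Freeze every pre-trained weight; only the prompt below is updated.
        for p in self.encoder.parameters():
            p.requires_grad = False
        hidden = self.encoder.config.hidden_size
        # The only trainable parameters: num_prompt_tokens x hidden values.
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        batch = input_ids.size(0)
        # Embed the real tokens with the frozen embedding table.
        tok_emb = self.encoder.get_input_embeddings()(input_ids)
        # Prepend one copy of the prompt to every sequence in the batch.
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        # Extend the mask so the prompt positions are attended to; information
        # then flows from the prompt to the other tokens in the forward pass.
        ones = torch.ones(batch, self.prompt.size(0),
                          dtype=attention_mask.dtype,
                          device=attention_mask.device)
        mask = torch.cat([ones, attention_mask], dim=1)
        out = self.encoder(inputs_embeds=inputs_embeds, attention_mask=mask)
        return out.last_hidden_state

Training would then optimize only the prompt, e.g. torch.optim.AdamW([model.prompt], lr=1e-3), leaving the backbone untouched so the knowledge acquired during pre-training cannot be overwritten.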
Keywords: Fine-tuning; Special token prompt; Transformer; Attention weight distribution
DOI: 10.1016/j.eswa.2025.127665
Indexed By: SCI
Language: English
Funding: Natural Science Foundation of Shandong Province [ZR2022MF328]; Natural Science Foundation of Shandong Province [ZR2019LZH014]; National Natural Science Foundation of China [92473203]; National Natural Science Foundation of China [61602284]; National Natural Science Foundation of China [61602285]; State Key Lab of Processors Open Fund Project [CLQ202409]; State Key Lab of Processors Open Fund Project [CLQ202402]; CCF-Ricore Education Fund [CCF-Ricore OF 2024003]
WOS Research Areas: Computer Science; Engineering; Operations Research & Management Science
WOS Categories: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic; Operations Research & Management Science
WOS ID: WOS:001488514900002
Publisher: PERGAMON-ELSEVIER SCIENCE LTD
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/42392
Collection: Journal Papers (English), Institute of Computing Technology, Chinese Academy of Sciences
Corresponding Author: Xu, Weizhi
Affiliations:
1. Shandong Normal Univ, Informat Sci & Engn Sch, Jinan, Peoples R China
2. Shandong Normal Univ, Business Sch, Jinan, Peoples R China
3. Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China
4. State Key Lab Processors, Beijing, Peoples R China
5. CASTEST Co Ltd, Beijing, Peoples R China
Recommended Citation:
GB/T 7714: Yan, Yaoyao, Yu, Hui, Wang, Da, et al. STP: Special token prompt for parameter-efficient tuning of pre-trained language models[J]. EXPERT SYSTEMS WITH APPLICATIONS, 2025, 284: 10.
APA: Yan, Yaoyao, Yu, Hui, Wang, Da, Ye, Jing, Liu, Fang'ai, & Xu, Weizhi. (2025). STP: Special token prompt for parameter-efficient tuning of pre-trained language models. EXPERT SYSTEMS WITH APPLICATIONS, 284, 10.
MLA: Yan, Yaoyao, et al. "STP: Special token prompt for parameter-efficient tuning of pre-trained language models". EXPERT SYSTEMS WITH APPLICATIONS 284 (2025): 10.
Files in This Item:
No files associated with this item.