Institute of Computing Technology, Chinese Academy of Sciences IR
| Title | STP: Special token prompt for parameter-efficient tuning of pre-trained language models |
| Authors | Yan, Yaoyao (1); Yu, Hui (2); Wang, Da (3,4); Ye, Jing (3,4,5); Liu, Fang'ai (1); Xu, Weizhi (1,4) |
| Date Issued | 2025-07-25 |
| Journal | EXPERT SYSTEMS WITH APPLICATIONS |
| ISSN | 0957-4174 |
| Volume | 284 |
| Pages | 10 |
| Abstract | Fine-tuning has become the standard method for using large pre-trained language models to accomplish specific downstream tasks. However, full fine-tuning requires updating all model parameters, which is not only computationally expensive but also prone to catastrophic forgetting, compromising the knowledge acquired during pre-training. In this work, we propose Special Token Prompt, a method that automatically generates prompts by combining specific task and input data using special tokens. By analyzing the attention weight distribution of the model, we introduce different special token prompts at various Transformer layers. During fine-tuning, we update only the special token prompts while keeping the other parameters of the language model frozen. Through this approach, the model is able to effectively propagate information to other tokens during the forward pass. On the GLUE benchmark, we achieved performance comparable to full fine-tuning by updating only 0.009% to 0.011% of parameters on the BERT-base model and 0.011% to 0.015% on the RoBERTa-base model. |
| Keywords | Fine-tuning; Special token prompt; Transformer; Attention weight distribution |
| DOI | 10.1016/j.eswa.2025.127665 |
| Indexed By | SCI |
| Language | English |
| Funding Project | Natural Science Foundation Shandong Province[ZR2022MF328] ; Natural Science Foundation Shandong Province[ZR2019LZH014] ; National Natural Science Foundation of China[92473203] ; National Natural Science Foundation of China[61602284] ; National Natural Science Foundation of China[61602285] ; State Key Lab of Processors Open Fund Project[CLQ202409] ; State Key Lab of Processors Open Fund Project[CLQ202402] ; CCF-Ricore Education Fund[CCF-Ricore OF 2024003] |
| WOS Research Area | Computer Science ; Engineering ; Operations Research & Management Science |
| WOS Subject | Computer Science, Artificial Intelligence ; Engineering, Electrical & Electronic ; Operations Research & Management Science |
| WOS ID | WOS:001488514900002 |
| Publisher | PERGAMON-ELSEVIER SCIENCE LTD |
| Document Type | Journal article |
| Identifier | http://119.78.100.204/handle/2XEOYT63/42392 |
| Collection | Institute of Computing Technology, CAS: Journal Papers (English) |
| Corresponding Author | Xu, Weizhi |
| Affiliations | 1. Shandong Normal Univ, Informat Sci & Engn Sch, Jinan, Peoples R China; 2. Shandong Normal Univ, Business Sch, Jinan, Peoples R China; 3. Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China; 4. State Key Lab Processors, Beijing, Peoples R China; 5. CASTEST Co Ltd, Beijing, Peoples R China |
| Recommended Citation (GB/T 7714) | Yan, Yaoyao, Yu, Hui, Wang, Da, et al. STP: Special token prompt for parameter-efficient tuning of pre-trained language models[J]. EXPERT SYSTEMS WITH APPLICATIONS, 2025, 284: 10. |
| APA | Yan, Yaoyao, Yu, Hui, Wang, Da, Ye, Jing, Liu, Fang'ai, & Xu, Weizhi. (2025). STP: Special token prompt for parameter-efficient tuning of pre-trained language models. EXPERT SYSTEMS WITH APPLICATIONS, 284, 10. |
| MLA | Yan, Yaoyao, et al. "STP: Special token prompt for parameter-efficient tuning of pre-trained language models." EXPERT SYSTEMS WITH APPLICATIONS 284 (2025): 10. |
| Files in This Item | There are no files associated with this item. |
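
The abstract describes the mechanism in enough detail to sketch it: trainable special-token prompt vectors are injected at selected Transformer layers of a frozen backbone, and only those vectors receive gradient updates during fine-tuning. The snippet below is a minimal illustrative sketch, not the authors' implementation: it uses a plain PyTorch `nn.TransformerEncoderLayer` stack as a stand-in for the pre-trained BERT/RoBERTa encoder, and the prompt length and insertion layers (`prompt_len`, `prompt_layers`) are hypothetical placeholders. The paper selects insertion layers by analyzing the attention weight distribution and builds the prompts from task and input tokens, neither of which is reproduced here.

```python
# Minimal sketch (assumption-laden, not the paper's code) of training only
# special-token prompt vectors on top of a frozen Transformer encoder.
import torch
import torch.nn as nn

class SpecialTokenPromptEncoder(nn.Module):
    def __init__(self, hidden_size=768, num_layers=12, num_heads=12,
                 prompt_len=2, prompt_layers=(0, 6)):
        super().__init__()
        # Frozen backbone: vanilla encoder layers standing in for a
        # pre-trained BERT/RoBERTa encoder.
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=hidden_size, nhead=num_heads,
                                       batch_first=True)
            for _ in range(num_layers)
        )
        for p in self.layers.parameters():
            p.requires_grad = False  # backbone stays frozen

        # One small set of trainable prompt embeddings per selected layer
        # (layer choice here is a placeholder; the paper derives it from
        # the attention weight distribution).
        self.prompt_layers = set(prompt_layers)
        self.prompts = nn.ParameterDict({
            str(i): nn.Parameter(torch.randn(prompt_len, hidden_size) * 0.02)
            for i in self.prompt_layers
        })

    def forward(self, hidden):  # hidden: (batch, seq_len, hidden_size)
        batch = hidden.size(0)
        for i, layer in enumerate(self.layers):
            if i in self.prompt_layers:
                # Prepend this layer's special-token prompts to the sequence;
                # the sequence grows by prompt_len at each insertion layer.
                prompt = self.prompts[str(i)].unsqueeze(0).expand(batch, -1, -1)
                hidden = torch.cat([prompt, hidden], dim=1)
            hidden = layer(hidden)
        return hidden

model = SpecialTokenPromptEncoder()
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)  # only prompts are updated
```

Because gradients flow only into the prompt parameters while the backbone stays frozen, the share of updated weights remains a tiny fraction of the full model, which is the property behind the small update percentages reported in the abstract.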