Bi-Modality Individual-Aware Prompt Tuning for Visual-Language Model
Yao, Hantao1; Zhang, Rui2; Lyu, Huaihai3,4; Zhang, Yongdong1; Xu, Changsheng3,4
Publication Date: 2025-08-01
Journal: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
ISSN: 0162-8828
Volume: 47, Issue: 8, Pages: 6352-6368
Abstract: Prompt tuning is a valuable technique for adapting visual language models (VLMs) to different downstream tasks, such as domain generalization and learning from a few examples. Previous methods have utilized Context Optimization approaches to deduce domain-shared or cross-modality prompt tokens, which enhance generalization and discriminative ability in textual or visual contexts. However, these prompt tokens, inferred from training data, cannot adapt perfectly to the distribution of the test dataset. This work introduces a novel approach called Bi-modality Individual-aware Prompt Tuning (BIP), which explicitly incorporates each individual's essential prior knowledge into the learnable prompts to enhance their discriminability and generalization. The critical insight of BIP is to apply the Textual Knowledge Embedding (TKE) and Visual Knowledge Embedding (VKE) models to project the class-aware textual essential knowledge and the instance-aware essential knowledge into the class-aware prompt and the instance-aware prompt, referred to as Textual-level Class-aware Prompt tuning (TCP) and Visual-level Instance-aware Prompt tuning (VIP). On the one hand, TCP integrates the generated class-aware prompts into the Text Encoder to produce a dynamic class-aware classifier, improving generalization on unseen domains. On the other hand, VIP uses the instance-aware prompt to generate a dynamic visual embedding for each instance, thereby enhancing the discriminative capability of the visual embedding. Comprehensive evaluations demonstrate that BIP can be used as a plug-and-play module, is easily integrated with existing methods, and achieves superior performance on 15 benchmarks across four tasks.
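The core mechanism the abstract describes is projecting a knowledge embedding (a class's text embedding for TKE, an instance's image embedding for VKE) into a set of learnable prompt tokens. The following is a minimal, hypothetical NumPy sketch of that projection idea only; the random weight matrices, dimensions, and function names are our own stand-ins for the paper's learned TKE/VKE modules, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)


def knowledge_to_prompt(knowledge, W, n_tokens, token_dim):
    """Project a knowledge embedding into n_tokens prompt tokens
    through a linear map W (random here, learned in the paper)."""
    flat = knowledge @ W                      # (d_k,) @ (d_k, n_tokens*token_dim)
    return flat.reshape(n_tokens, token_dim)  # one row per prompt token


# Assumed (illustrative) dimensions: CLIP-like 512-d text and 768-d
# visual embeddings, 4 prompt tokens of width 512.
d_text, d_vis, n_tok, d_tok = 512, 768, 4, 512

# TKE analogue: class-aware textual knowledge -> class-aware prompt (TCP).
W_tke = rng.normal(size=(d_text, n_tok * d_tok)) * 0.02
class_knowledge = rng.normal(size=d_text)   # e.g. text embedding of a class name
class_prompt = knowledge_to_prompt(class_knowledge, W_tke, n_tok, d_tok)

# VKE analogue: instance-aware visual knowledge -> instance-aware prompt (VIP).
W_vke = rng.normal(size=(d_vis, n_tok * d_tok)) * 0.02
image_feature = rng.normal(size=d_vis)      # e.g. image embedding of one test instance
instance_prompt = knowledge_to_prompt(image_feature, W_vke, n_tok, d_tok)

print(class_prompt.shape, instance_prompt.shape)  # (4, 512) (4, 512)
```

Because the prompts are functions of each class's or instance's own embedding rather than fixed vectors, they can vary with the test-time input, which is what lets the method adapt beyond the training distribution.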
Keywords: Tuning; Visualization; Training; Adaptation models; Hands; Feature extraction; Data models; Artificial intelligence; Transformers; Data mining; Prompt tuning; individual-aware prompt tuning; bi-modality prompt tuning; visual-language model
DOI: 10.1109/TPAMI.2025.3557780
Indexed by: SCI
Language: English
Funding: National Science and Technology [2021ZD0112202]; National Natural Science Foundation of China [62376268]; National Natural Science Foundation of China [U23A20387]; National Natural Science Foundation of China [U21B2044]; National Natural Science Foundation of China [62121002]
WOS Research Areas: Computer Science; Engineering
WOS Categories: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
WOS ID: WOS:001522958700031
Publisher: IEEE COMPUTER SOC
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/41767
Collection: Journal Articles of the Institute of Computing Technology, Chinese Academy of Sciences (English)
Corresponding Author: Yao, Hantao
Affiliations:
1. Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Peoples R China
2. Chinese Acad Sci, Inst Comp Technol, State Key Lab Proc, Beijing 100190, Peoples R China
3. Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence S, Beijing 100190, Peoples R China
4. Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Recommended Citation:
GB/T 7714: Yao, Hantao, Zhang, Rui, Lyu, Huaihai, et al. Bi-Modality Individual-Aware Prompt Tuning for Visual-Language Model[J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2025, 47(8): 6352-6368.
APA: Yao, Hantao, Zhang, Rui, Lyu, Huaihai, Zhang, Yongdong, & Xu, Changsheng. (2025). Bi-Modality Individual-Aware Prompt Tuning for Visual-Language Model. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 47(8), 6352-6368.
MLA: Yao, Hantao, et al. "Bi-Modality Individual-Aware Prompt Tuning for Visual-Language Model". IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 47.8 (2025): 6352-6368.
Files in This Item:
There are no files associated with this item.