Institute of Computing Technology, Chinese Academy of Sciences — Institutional Repository
| Title | Bi-Modality Individual-Aware Prompt Tuning for Visual-Language Model |
| Authors | Yao, Hantao1; Zhang, Rui2; Lyu, Huaihai3,4; Zhang, Yongdong1; Xu, Changsheng3,4 |
| Date | 2025-08-01 |
| Journal | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE |
| ISSN | 0162-8828 |
| Volume | 47 |
| Issue | 8 |
| Pages | 6352-6368 |
| Abstract | Prompt tuning is a valuable technique for adapting visual-language models (VLMs) to different downstream tasks, such as domain generalization and learning from a few examples. Previous methods have utilized Context Optimization approaches to deduce domain-shared or cross-modality prompt tokens, which enhance generalization and discriminative ability in textual or visual contexts. However, these prompt tokens, inferred from training data, cannot adapt perfectly to the distribution of the test dataset. This work introduces a novel approach called Bi-modality Individual-aware Prompt Tuning (BIP) that explicitly incorporates the individual's essential prior knowledge into the learnable prompt to enhance its discriminability and generalization. The critical insight of BIP is to apply the Textual Knowledge Embedding (TKE) and Visual Knowledge Embedding (VKE) models to project the class-aware textual essential knowledge and the instance-aware essential knowledge into the class-aware prompt and instance-aware prompt, referred to as Textual-level Class-aware Prompt tuning (TCP) and Visual-level Instance-aware Prompt tuning (VIP). On the one hand, TCP integrates the generated class-aware prompts into the Text Encoder to produce a dynamic class-aware classifier, improving generalization to unseen domains. On the other hand, VIP uses the instance-aware prompt to generate a dynamic visual embedding for each instance, thereby enhancing the discriminative capability of the visual embedding. Comprehensive evaluations demonstrate that BIP can be used as a plug-and-play module easily integrated with existing methods and achieves superior performance on 15 benchmarks across four tasks. |
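The knowledge-embedding idea in the abstract — projecting frozen class-level knowledge into learnable prompt tokens — can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the dimensions, the stand-in random "class knowledge," and the single linear projection `W` are all assumptions; the actual TKE/VKE modules in the paper may use a different architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): CLIP-style embedding width,
# number of learnable prompt tokens, and number of classes.
d_embed, n_tokens, n_classes = 512, 4, 10

# Stand-in for frozen class-name embeddings produced by a text encoder.
class_knowledge = rng.standard_normal((n_classes, d_embed))

# TKE sketched as one learnable linear map from a class embedding
# to a sequence of n_tokens prompt tokens.
W = rng.standard_normal((d_embed, n_tokens * d_embed)) * 0.01

def textual_knowledge_embedding(knowledge, W):
    """Project class-aware knowledge into class-aware prompt tokens."""
    tokens = knowledge @ W                  # (n_classes, n_tokens * d_embed)
    return tokens.reshape(-1, n_tokens, d_embed)

prompts = textual_knowledge_embedding(class_knowledge, W)
print(prompts.shape)  # one prompt-token sequence per class
```

The class-aware prompts produced this way would then be prepended to the text encoder's input to build a dynamic classifier; VKE would play the analogous role on the visual side, conditioning prompt tokens on each image instance.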
| Keywords | Tuning; Visualization; Training; Adaptation models; Hands; Feature extraction; Data models; Artificial intelligence; Transformers; Data mining; Prompt tuning; individual-aware prompt tuning; bi-modality prompt tuning; visual-language model |
| DOI | 10.1109/TPAMI.2025.3557780 |
| Indexing | SCI |
| Language | English |
| Funding | National Science and Technology [2021ZD0112202]; National Natural Science Foundation of China [62376268]; National Natural Science Foundation of China [U23A20387]; National Natural Science Foundation of China [U21B2044]; National Natural Science Foundation of China [62121002] |
| WOS Research Area | Computer Science; Engineering |
| WOS Category | Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic |
| WOS Record ID | WOS:001522958700031 |
| Publisher | IEEE COMPUTER SOC |
| Document Type | Journal article |
| Identifier | http://119.78.100.204/handle/2XEOYT63/41767 |
| Collection | Journal Papers of the Institute of Computing Technology, CAS (English) |
| Corresponding Author | Yao, Hantao |
| Author Affiliations | 1. Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Peoples R China; 2. Chinese Acad Sci, Inst Comp Technol, State Key Lab Proc, Beijing 100190, Peoples R China; 3. Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence S, Beijing 100190, Peoples R China; 4. Univ Chinese Acad Sci, Beijing 100049, Peoples R China |
| Recommended Citation (GB/T 7714) | Yao, Hantao, Zhang, Rui, Lyu, Huaihai, et al. Bi-Modality Individual-Aware Prompt Tuning for Visual-Language Model[J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2025, 47(8): 6352-6368. |
| APA | Yao, Hantao, Zhang, Rui, Lyu, Huaihai, Zhang, Yongdong, & Xu, Changsheng. (2025). Bi-Modality Individual-Aware Prompt Tuning for Visual-Language Model. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 47(8), 6352-6368. |
| MLA | Yao, Hantao, et al. "Bi-Modality Individual-Aware Prompt Tuning for Visual-Language Model". IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 47.8 (2025): 6352-6368. |
| Files in This Item | No files associated with this item. |