Institute of Computing Technology, Chinese Academy of Sciences IR
| Title | Enhancing the Robustness of Vision-Language Foundation Models by Alignment Perturbation |
| Authors | Zhang, Cong1,2; Wang, Shuhui3,4; Li, Xiaodan5; Zhu, Yao6; Qi, Honggang2; Huang, Qingming2 |
| Year | 2025 |
| Journal | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY |
| ISSN | 1556-6013 |
| Volume | 20 |
| Pages | 7091-7105 |
| Abstract | While Vision-Language Models (VLMs) built on large-scale models have shown revolutionary advances across various vision-language tasks, research on improving VLM robustness remains underexplored. Existing studies primarily focus on attacking VLMs through the pretrained visual or textual encoders, typically requiring obvious noise or long inference times. In this study, we look into the VLM structure and highlight the alignment module's role as a protective filter that enhances VLM robustness against various perturbations. Motivated by these insights, we investigate VLMs from both the user and the model-developer perspectives and introduce the alignment perturbation strategy, which consists of multimodal, visual, and textual perturbations. The multimodal perturbation aims to achieve targeted textual output generation and is further utilized to enhance VLM robustness. Minimal perturbations to visual or textual inputs can lead to significant changes in the overall output of VLMs, revealing their sensitivity to both visual and textual input variations. Building on the alignment perturbation strategy, we propose alignment robust training, which efficiently improves VLM robustness by finetuning the parameters of the alignment module without excessive resource consumption. Experimental results across various tasks and models demonstrate the effectiveness of the proposed alignment perturbation and alignment robust training. These methods deepen the understanding of VLM robustness, enabling secure and reliable deployment in diverse real-world scenarios. Code is available at https://github.com/zhangconghhh/RobustVLMs. |
| Keywords | Multimedia forensics; adversarial perturbation; robust training; vision-language models |
| DOI | 10.1109/TIFS.2025.3586430 |
| Indexed By | SCI |
| Language | English |
| Funding | National Key Research and Development Program of China [2023YFC2508704]; National Natural Science Foundation of China [62236008]; National Natural Science Foundation of China [62022083]; National Natural Science Foundation of China [U21B2038]; Fundamental Research Funds for the Central Universities |
| WOS Research Areas | Computer Science; Engineering |
| WOS Categories | Computer Science, Theory & Methods; Engineering, Electrical & Electronic |
| WOS Accession Number | WOS:001543405400003 |
| Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC |
| Document Type | Journal article |
| Identifier | http://119.78.100.204/handle/2XEOYT63/41995 |
| Collection | Institute of Computing Technology, CAS: Journal Articles (English) |
| Corresponding Author | Wang, Shuhui |
| Affiliations | 1. Univ Sci & Technol Beijing, Sch Intelligence Sci & Technol, Beijing 100083, Peoples R China; 2. Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 101408, Peoples R China; 3. Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China; 4. Peng Cheng Lab, Shenzhen 518066, Peoples R China; 5. Alibaba Grp, Hangzhou 311121, Peoples R China; 6. Tsinghua Univ, Dept Automat, Beijing 100084, Peoples R China |
| Recommended Citation (GB/T 7714) | Zhang, Cong, Wang, Shuhui, Li, Xiaodan, et al. Enhancing the Robustness of Vision-Language Foundation Models by Alignment Perturbation[J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20: 7091-7105. |
| APA | Zhang, Cong, Wang, Shuhui, Li, Xiaodan, Zhu, Yao, Qi, Honggang, & Huang, Qingming. (2025). Enhancing the Robustness of Vision-Language Foundation Models by Alignment Perturbation. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 20, 7091-7105. |
| MLA | Zhang, Cong, et al. "Enhancing the Robustness of Vision-Language Foundation Models by Alignment Perturbation." IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 20 (2025): 7091-7105. |
| Files in This Item | No files are associated with this item. |
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.