Enhancing the Robustness of Vision-Language Foundation Models by Alignment Perturbation
Zhang, Cong1,2; Wang, Shuhui3,4; Li, Xiaodan5; Zhu, Yao6; Qi, Honggang2; Huang, Qingming2
2025
Journal: IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
ISSN: 1556-6013
Volume: 20, Pages: 7091-7105
Abstract: While Vision-Language Models (VLMs) based on large-scale models have shown revolutionary advancements across various vision-language tasks, research on improving VLM robustness remains underexplored. Existing studies primarily focus on attacking VLMs after the pretrained visual or textual encoders, typically requiring obvious noise or long inference times. In this study, we look into the VLM structure and highlight the alignment module's role as a protective filter that enhances VLM robustness against various perturbations. Motivated by these insights, we investigate VLMs from both the user and model developer perspectives and introduce the alignment perturbation strategy, which consists of multimodal, visual, and textual perturbations. Multimodal perturbation aims to achieve targeted textual output generation and is further utilized to enhance VLM robustness. Minimal perturbations to visual or textual inputs can lead to significant changes in the overall output of VLMs, revealing their sensitivity to both visual and textual input variations. Building on the alignment perturbation strategy, we propose alignment robust training, which efficiently improves VLM robustness by finetuning the parameters of the alignment module without excessive resource consumption. Experimental results across various tasks and models demonstrate the effectiveness of the proposed alignment perturbation and alignment robust training. These methods deepen the understanding of VLM robustness, enabling secure and reliable deployment in diverse real-world scenarios. Code is available at https://github.com/zhangconghhh/RobustVLMs
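To make the abstract's core idea more concrete, the sketch below illustrates, under stated assumptions, what finetuning only the alignment module of a vision-language model with perturbed features could look like in PyTorch. It is not the authors' implementation (see the linked repository for that): the ToyVLM class, the vis_proj/txt_proj projection layers, the feat_noise Gaussian feature perturbation, and the CLIP-style contrastive loss are illustrative stand-ins for the paper's alignment module, alignment perturbation, and alignment robust training.

# Minimal sketch (assumption-laden, not the paper's code): a CLIP-style VLM with frozen
# encoders and a small trainable alignment (projection) module, trained on feature-level
# perturbations. ToyVLM, vis_proj, txt_proj, and feat_noise are hypothetical names.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyVLM(nn.Module):
    def __init__(self, vis_dim=768, txt_dim=512, joint_dim=256):
        super().__init__()
        # Stand-ins for pretrained encoders; kept frozen during robust finetuning.
        self.vision_encoder = nn.Linear(vis_dim, vis_dim)
        self.text_encoder = nn.Linear(txt_dim, txt_dim)
        # Alignment module: the only part whose parameters are updated.
        self.vis_proj = nn.Linear(vis_dim, joint_dim)
        self.txt_proj = nn.Linear(txt_dim, joint_dim)

    def forward(self, image_feats, text_feats, feat_noise=0.0):
        v = self.vision_encoder(image_feats)
        t = self.text_encoder(text_feats)
        if feat_noise > 0:  # simple Gaussian stand-in for an alignment perturbation
            v = v + feat_noise * torch.randn_like(v)
            t = t + feat_noise * torch.randn_like(t)
        v = F.normalize(self.vis_proj(v), dim=-1)
        t = F.normalize(self.txt_proj(t), dim=-1)
        return v @ t.T  # image-text similarity logits

model = ToyVLM()
for p in model.vision_encoder.parameters():  # freeze both encoders
    p.requires_grad_(False)
for p in model.text_encoder.parameters():
    p.requires_grad_(False)
optimizer = torch.optim.AdamW(
    list(model.vis_proj.parameters()) + list(model.txt_proj.parameters()), lr=1e-4)

# One illustrative step on random tensors standing in for a batch of matched
# image-text pairs; the contrastive target is the diagonal of the logit matrix.
images, texts = torch.randn(8, 768), torch.randn(8, 512)
optimizer.zero_grad()
logits = model(images, texts, feat_noise=0.05)
loss = F.cross_entropy(logits, torch.arange(8))
loss.backward()
optimizer.step()

The design point the sketch tries to convey is that the encoders stay frozen, so gradient computation and optimizer state cover only the small alignment module, which is what would keep such an approach inexpensive relative to full-model adversarial finetuning.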
Keywords: Multimedia forensics; adversarial perturbation; robust training; vision-language models
DOI: 10.1109/TIFS.2025.3586430
Indexed by: SCI
Language: English
Funding: National Key Research and Development Program of China [2023YFC2508704]; National Natural Science Foundation of China [62236008]; National Natural Science Foundation of China [62022083]; National Natural Science Foundation of China [U21B2038]; Fundamental Research Funds for the Central Universities
WOS Research Areas: Computer Science; Engineering
WOS Categories: Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS Accession Number: WOS:001543405400003
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/41995
Collection: Journal Articles of the Institute of Computing Technology, Chinese Academy of Sciences (English)
Corresponding Author: Wang, Shuhui
Author Affiliations:
1. Univ Sci & Technol Beijing, Sch Intelligence Sci & Technol, Beijing 100083, Peoples R China
2. Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 101408, Peoples R China
3. Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
4. Peng Cheng Lab, Shenzhen 518066, Peoples R China
5. Alibaba Grp, Hangzhou 311121, Peoples R China
6. Tsinghua Univ, Dept Automat, Beijing 100084, Peoples R China
Recommended Citation:
GB/T 7714: Zhang, Cong, Wang, Shuhui, Li, Xiaodan, et al. Enhancing the Robustness of Vision-Language Foundation Models by Alignment Perturbation[J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20: 7091-7105.
APA: Zhang, Cong, Wang, Shuhui, Li, Xiaodan, Zhu, Yao, Qi, Honggang, & Huang, Qingming. (2025). Enhancing the Robustness of Vision-Language Foundation Models by Alignment Perturbation. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 20, 7091-7105.
MLA: Zhang, Cong, et al. "Enhancing the Robustness of Vision-Language Foundation Models by Alignment Perturbation". IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 20 (2025): 7091-7105.
Files in This Item:
No related files.