Institute of Computing Technology, Chinese Academy of Sciences IR
Adaptive Perturbation for Adversarial Attack
Yuan, Zheng (1,2); Zhang, Jie (1,2); Jiang, Zhaoyan (3); Li, Liangliang; Shan, Shiguang (2)
2024-08-01
Journal | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE |
ISSN | 0162-8828 |
Volume | 46 |
Issue | 8 |
Pages | 5663-5676 |
Abstract | In recent years, the security of deep learning models has received increasing attention with the rapid development of neural networks, which are vulnerable to adversarial examples. Almost all existing gradient-based attack methods use the sign function during generation to meet the perturbation budget under the L-infinity norm. However, we find that the sign function may be improper for generating adversarial examples, since it modifies the exact gradient direction. Instead of using the sign function, we propose to directly utilize the exact gradient direction with a scaling factor for generating adversarial perturbations, which improves the attack success rates of adversarial examples even with smaller perturbations. At the same time, we theoretically prove that this method can achieve better black-box transferability. Moreover, considering that the best scaling factor varies across different images, we propose an adaptive scaling factor generator that seeks an appropriate scaling factor for each image, avoiding the computational cost of manually searching for the scaling factor. Our method can be integrated with almost all existing gradient-based attack methods to further improve their attack success rates. Extensive experiments on the CIFAR10 and ImageNet datasets show that our method exhibits higher transferability and outperforms state-of-the-art methods. (An illustrative code sketch of the scaled-gradient step is provided after the record fields below.) |
Keywords | Perturbation methods; Iterative methods; Adaptation models; Generators; Closed box; Security; Training; Adversarial attack; transfer-based attack; adversarial example; adaptive perturbation |
DOI | 10.1109/TPAMI.2024.3367773 |
Indexed By | SCI |
Language | English |
Funding Project | National Key R&D Program of China [2021YFC3310100]; National Natural Science Foundation of China [62176251]; Beijing Nova Program [20230484368]; Youth Innovation Promotion Association CAS |
WOS Research Area | Computer Science; Engineering |
WOS Categories | Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic |
WOS Record Number | WOS:001262841000014 |
Publisher | IEEE COMPUTER SOC |
Document Type | Journal Article |
Identifier | http://119.78.100.204/handle/2XEOYT63/39845 |
Collection | Institute of Computing Technology, CAS — Journal Articles (English) |
Corresponding Author | Zhang, Jie |
Affiliations | 1. Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China; 2. Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100049, Peoples R China; 3. Tencent, Shenzhen 518057, Peoples R China |
Recommended Citation (GB/T 7714) | Yuan, Zheng, Zhang, Jie, Jiang, Zhaoyan, et al. Adaptive Perturbation for Adversarial Attack[J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46(8): 5663-5676. |
APA | Yuan, Zheng, Zhang, Jie, Jiang, Zhaoyan, Li, Liangliang, & Shan, Shiguang. (2024). Adaptive Perturbation for Adversarial Attack. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 46(8), 5663-5676. |
MLA | Yuan, Zheng, et al. "Adaptive Perturbation for Adversarial Attack." IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 46.8 (2024): 5663-5676. |
Files in This Item | No files associated with this item. |
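Illustrative sketch (editor's note). The abstract describes replacing the sign operation in gradient-based attacks with the exact gradient direction multiplied by a scaling factor. The minimal PyTorch sketch below contrasts the two update rules for a single attack step. It is not the authors' released code: the function names (sign_step, scaled_direction_step), the per-sample L2 normalization, and the [0, 1] pixel range are assumptions made purely for illustration.

import torch

def sign_step(model, loss_fn, x, y, eps):
    # Conventional FGSM-style update: move along sign(grad) to spend the full L-infinity budget eps.
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def scaled_direction_step(model, loss_fn, x, y, eps, scaling_factor):
    # Variant in the spirit of the paper: keep the exact gradient direction,
    # multiply it by a scaling factor, and clip so the L-infinity budget eps still holds.
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    # Per-sample L2 normalization (an assumption here) so the scaling factor is comparable across images.
    norms = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
    unit = grad / norms.view(-1, *([1] * (grad.dim() - 1)))
    delta = (scaling_factor * unit).clamp(-eps, eps)
    return (x + delta).clamp(0, 1).detach()

In an iterative attack (I-FGSM or MI-FGSM style), scaled_direction_step would be applied repeatedly with a per-step budget; the paper's adaptive generator would additionally predict a different scaling_factor for each input image, rather than the fixed scalar used in this sketch.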