Attention-guided transformation-invariant attack for black-box adversarial examples
Zhu, Jiaqi1; Dai, Feng2; Yu, Lingyun1,3; Xie, Hongtao1; Wang, Lidong4; Wu, Bo5; Zhang, Yongdong1
Publication Date: 2022-01-11
Journal: INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS
ISSN: 0884-8173
Pages: 24
Abstract: With the development of media convergence, information acquisition is no longer limited to traditional media, such as newspapers and television, but increasingly comes from digital media on the Internet, where media content should be supervised by platforms. At present, the media content analysis technology of Internet platforms relies on deep neural networks (DNNs). However, DNNs are vulnerable to adversarial examples, which creates security risks. It is therefore necessary to study the internal mechanism of adversarial examples in depth in order to build more effective supervision models. In practical applications, supervision models mostly face black-box attacks, so the cross-model transferability of adversarial examples has attracted increasing attention. In this paper, to improve the transferability of adversarial examples, we propose an attention-guided transformation-invariant adversarial attack method, which incorporates an attention mechanism to disrupt the most distinctive features and simultaneously ensures that the attack remains invariant under different transformations. Specifically, we dynamically weight the latent features according to an attention mechanism and disrupt them accordingly. Meanwhile, because low-level features lack semantics, high-level semantics are introduced as spatial guidance so that low-level feature perturbations concentrate on the most discriminative regions. Moreover, since attention heatmaps may vary significantly across models, a transformation-invariant aggregated attack strategy is proposed to alleviate overfitting to the proxy model's attention. Comprehensive experimental results show that the proposed method significantly improves the transferability of adversarial examples.
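For intuition only, the following is a minimal PyTorch sketch of the general idea described in the abstract: perturb the input so that attention-weighted intermediate features are suppressed, aggregating gradients over a few input transformations. It is not the authors' implementation; the attention proxy (channel-mean activation), the tapped layer (layer3 of a ResNet-50 surrogate), the transformation set, and all hyperparameters are illustrative assumptions.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Local surrogate standing in for the proxy model (random weights here).
    model = models.resnet50(weights=None).eval()

    # Capture an intermediate feature map via a forward hook.
    features = {}
    model.layer3.register_forward_hook(
        lambda module, inputs, output: features.update(latent=output))

    def attention_map(feat):
        # Crude attention proxy: normalized channel-mean activation magnitude.
        # The paper's attention mechanism, with high-level semantic guidance,
        # is richer; this merely stands in for it.
        attn = feat.abs().mean(dim=1, keepdim=True)
        return attn / (attn.amax(dim=(2, 3), keepdim=True) + 1e-12)

    def transformed_views(x):
        # A few differentiable input transformations over which gradients are
        # aggregated, approximating the transformation-invariant strategy.
        return [
            x,
            torch.flip(x, dims=[3]),  # horizontal flip
            F.interpolate(x, scale_factor=0.9, mode="bilinear",
                          align_corners=False),  # mild rescale
        ]

    def attack(x, eps=8 / 255, alpha=2 / 255, steps=10):
        """Iteratively perturb x to suppress attention-weighted features."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = 0.0
            for view in transformed_views(x_adv):
                features.clear()
                model(view)
                feat = features["latent"]
                # Disruption loss: magnitude of the most attended features.
                loss = loss + (attention_map(feat).detach() * feat).abs().mean()
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Descend to erase the discriminative features, then project back
            # into the L-inf ball around x and the valid pixel range.
            x_adv = (x_adv - alpha * grad.sign()).detach()
            x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
        return x_adv

    # Toy usage on a random image-sized tensor:
    x = torch.rand(1, 3, 224, 224)
    x_adv = attack(x)

Minimizing the attention-weighted feature magnitude pushes the perturbation toward the regions the surrogate attends to most, while averaging the loss over transformed views discourages overfitting to any single attention heatmap, which is the rough shape of the aggregation strategy the abstract describes.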
Keywords: adversarial examples; attention; media convergence; security; transformation-invariant
DOI: 10.1002/int.22808
Indexed By: SCI
Language: English
Funding Projects: National Key Research and Development Program of China [2018YFB0804203]; National Natural Science Foundation of China [62121002]; National Natural Science Foundation of China [U1936210]; National Natural Science Foundation of China [62072438]; National Natural Science Foundation of China [U1936110]; National Natural Science Foundation of China [62102127]; Hefei Postdoctoral Research Activities Foundation [BSH202101]
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS ID: WOS:000741469300001
Publisher: WILEY
Citation Statistics: cited 2 times [WOS]
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/18296
Collection: Journal Papers of the Institute of Computing Technology, Chinese Academy of Sciences (English)
Corresponding Author: Xie, Hongtao
Affiliations:
1.Univ Sci & Technol China, Sch Informat Sci & Technol, 443 Huangshan Rd, Hefei 230027, Peoples R China
2.Chinese Acad Sci, Key Lab Intelligent Informat Proc, Beijing, Peoples R China
3.Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei, Peoples R China
4.Beijing Radio & TV Stn, Beijing, Peoples R China
5.MIT IBM Watson AI Lab, Cambridge, MA USA
Recommended Citation:
GB/T 7714
Zhu, Jiaqi, Dai, Feng, Yu, Lingyun, et al. Attention-guided transformation-invariant attack for black-box adversarial examples[J]. INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022: 24.
APA Zhu, Jiaqi, Dai, Feng, Yu, Lingyun, Xie, Hongtao, Wang, Lidong, ... & Zhang, Yongdong. (2022). Attention-guided transformation-invariant attack for black-box adversarial examples. INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 24.
MLA Zhu, Jiaqi, et al. "Attention-guided transformation-invariant attack for black-box adversarial examples". INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS (2022): 24.
Files in This Item: no files associated with this item.