SPFL: A Self-Purified Federated Learning Method Against Poisoning Attacks
Liu, Zizhen1,2,3; He, Weiyang4; Chang, Chip-Hong4; Ye, Jing1,2,3; Li, Huawei1,2,3; Li, Xiaowei1,2,3
2024
Journal: IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
ISSN: 1556-6013
Volume: 19, Pages: 6604-6619
Abstract: While federated learning (FL) is attractive for pooling privacy-preserving distributed training data, the credibility of participating clients and the non-inspectability of their data pose new security threats, of which poisoning attacks are particularly rampant and hard to defend against without compromising privacy, performance, or other desirable properties. In this paper, we propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of the locally purified model to supervise the training of the aggregated model in each iteration. The purification is performed by an attention-guided self-knowledge distillation in which the teacher and student models are optimized locally for task loss, distillation loss, and attention loss simultaneously. SPFL imposes no restriction on the communication protocol or the aggregator at the server, and it can work in tandem with any existing secure aggregation algorithm or protocol for an augmented security and privacy guarantee. We experimentally demonstrate that SPFL outperforms state-of-the-art FL defenses against poisoning attacks. The attack success rate of the SPFL-trained model remains the lowest among all compared defense methods, even if the poisoning attack is launched in every iteration and all but one of the clients in the system are malicious. Meanwhile, SPFL improves model quality on normal inputs compared to FedAvg, both under attack and in the absence of an attack.
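The purification objective summarized in the abstract combines a task loss, a distillation loss, and an attention-alignment loss optimized locally. The snippet below is a minimal, illustrative PyTorch sketch of such a three-term objective; it is not the authors' implementation, and the function names, feature-map interface, temperature, and weighting factors are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' code) of a combined local objective of the kind
# described in the abstract: task loss + distillation loss + attention-map alignment.
# All names (local teacher vs. aggregated student, lambda_*, T) are illustrative assumptions.
import torch
import torch.nn.functional as F

def attention_map(feature):
    # Spatial attention map from a conv feature tensor (N, C, H, W):
    # channel-wise sum of squared activations, L2-normalized per sample.
    att = feature.pow(2).sum(dim=1).flatten(1)          # (N, H*W)
    return F.normalize(att, p=2, dim=1)

def self_purification_loss(student_logits, teacher_logits,
                           student_feats, teacher_feats,
                           labels, T=4.0, lambda_kd=1.0, lambda_att=100.0):
    # 1) Task loss on the client's own (trusted) data.
    task = F.cross_entropy(student_logits, labels)

    # 2) Distillation loss: soften both logit sets with temperature T and
    #    pull the student (aggregated model) toward the local teacher.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)

    # 3) Attention loss: align intermediate attention maps layer by layer.
    att = sum(F.mse_loss(attention_map(fs), attention_map(ft))
              for fs, ft in zip(student_feats, teacher_feats))

    return task + lambda_kd * kd + lambda_att * att
```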
Keywords: Data models; Servers; Training; Hidden Markov models; Training data; Adaptation models; Security; Federated learning; poisoning attack; knowledge distillation; attention maps; deep neural network
DOI: 10.1109/TIFS.2024.3420135
Indexed by: SCI
Language: English
Funding: National Research Foundation, Singapore [NRF2018NCR-NCR009-0001]; Ministry of Education, Singapore [MOE-T2EP20121-0008]; National Natural Science Foundation of China (NSFC) [92373206]; National Natural Science Foundation of China (NSFC) [U20A20202]; Youth Innovation Promotion Association, Chinese Academy of Sciences (CAS)
WOS Research Areas: Computer Science; Engineering
WOS Categories: Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS Accession Number: WOS:001270320400001
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/39640
Collection: Institute of Computing Technology, Chinese Academy of Sciences — Journal Papers (English)
Corresponding Author: Chang, Chip-Hong
Affiliations: 1. Chinese Acad Sci, Inst Comp Technol, State Key Lab Processors, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 101408, Peoples R China
3. CASTEST Co Ltd, Beijing 100190, Peoples R China
4. Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Recommended Citation:
GB/T 7714
Liu, Zizhen, He, Weiyang, Chang, Chip-Hong, et al. SPFL: A Self-Purified Federated Learning Method Against Poisoning Attacks[J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19: 6604-6619.
APA: Liu, Zizhen, He, Weiyang, Chang, Chip-Hong, Ye, Jing, Li, Huawei, & Li, Xiaowei. (2024). SPFL: A Self-Purified Federated Learning Method Against Poisoning Attacks. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 19, 6604-6619.
MLA: Liu, Zizhen, et al. "SPFL: A Self-Purified Federated Learning Method Against Poisoning Attacks". IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 19 (2024): 6604-6619.
Files in This Item:
No files associated with this item.