Enhancing Federated Learning Robustness Using Locally Benignity-Assessable Bayesian Dropout
Xue, Jingjing1; Sun, Sheng1; Liu, Min2; Li, Qi3,4; Xu, Ke2,5
Year: 2025
Journal: IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
ISSN: 1556-6013
Volume: 20, Pages: 2464-2479
Abstract: Federated Learning (FL) has emerged as a privacy-preserving training paradigm that enables distributed devices to jointly learn a shared model without sharing raw data. However, inaccessible client-side data and unverifiable local training leave FL vulnerable to Byzantine attacks. Most defense strategies focus on penalizing malicious clients in server-side aggregation and ignore client-side poisoning assessment of weight units, failing to maintain robustness and convergence in non-IID settings. In this paper, we propose Federated learning with Benignity-assessable Bayesian Dropout and variational Attention (FedBDA) to achieve locally robust training based on fine-grained benignity indicators and to guarantee global robustness over non-IID data. Specifically, FedBDA integrates the variational-inference interpretation of dropout into local training, where each client individually quantifies the benign degree of its weight units to determine a resilient dropping pattern for the local Bayesian model, enabling client-side robust training with Bayesian interpretability. To accommodate the variational distributions of local Bayesian models and globally assess their benign potential, we design a joint attention mechanism based on the Jensen-Shannon divergence among local, global, and median distributions for robust weighted aggregation. Theoretical analysis proves the robustness and convergence of FedBDA. We conduct extensive experiments on four benchmark datasets under five typical attacks, and the results demonstrate that FedBDA outperforms state-of-the-art approaches in model performance and running efficiency.
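To make the two ideas in the abstract concrete, the following is a minimal NumPy sketch, not the authors' released code. It assumes each client uploads a diagonal Gaussian posterior (a mean and variance per weight), uses the posterior signal-to-noise ratio |mu|/sigma as a stand-in benignity indicator for dropping weight units (the paper's actual indicator may differ), and approximates the Jensen-Shannon divergence between Gaussians by a symmetrized KL divergence, since JS has no closed form for Gaussians. Each client is scored against both the previous global distribution and the coordinate-wise median of the uploads; all function names and the softmax temperature are hypothetical.

import numpy as np

def benignity_keep_mask(mu, var, keep_ratio=0.9):
    # Stand-in benignity score per weight unit: posterior signal-to-noise
    # ratio |mu|/sigma. Units with the lowest scores are dropped locally.
    snr = np.abs(mu) / (np.sqrt(var) + 1e-8)
    k = max(1, int(keep_ratio * snr.size))
    thresh = np.partition(snr.ravel(), snr.size - k)[snr.size - k]
    return snr >= thresh

def gaussian_js(mu_p, var_p, mu_q, var_q, eps=1e-8):
    # Symmetrized-KL stand-in for JS divergence between diagonal
    # Gaussians, averaged over parameters.
    kl_pq = 0.5 * (np.log((var_q + eps) / (var_p + eps))
                   + (var_p + (mu_p - mu_q) ** 2) / (var_q + eps) - 1.0)
    kl_qp = 0.5 * (np.log((var_p + eps) / (var_q + eps))
                   + (var_q + (mu_q - mu_p) ** 2) / (var_p + eps) - 1.0)
    return 0.25 * (kl_pq + kl_qp).mean()

def robust_aggregate(local_mus, local_vars, global_mu, global_var, temp=1.0):
    # Attention-style weights: clients whose variational distributions sit
    # close to both the previous global model and the coordinate-wise
    # median of the uploads get larger aggregation weights.
    local_mus, local_vars = np.stack(local_mus), np.stack(local_vars)
    med_mu = np.median(local_mus, axis=0)
    med_var = np.median(local_vars, axis=0)
    scores = np.array([-(gaussian_js(mu, var, global_mu, global_var)
                         + gaussian_js(mu, var, med_mu, med_var))
                       for mu, var in zip(local_mus, local_vars)])
    w = np.exp((scores - scores.max()) / temp)  # stable softmax
    w /= w.sum()
    new_mu = np.tensordot(w, local_mus, axes=1)
    new_var = np.tensordot(w, local_vars, axes=1)
    return new_mu, new_var

# Toy usage: 5 clients, 10-dim weight vector, one crudely poisoned upload.
rng = np.random.default_rng(0)
mus = [rng.normal(size=10) * 0.1 for _ in range(5)]
mus[0] += 5.0  # Byzantine client drifts far from the others
vars_ = [np.full(10, 0.01) for _ in range(5)]
g_mu, g_var = robust_aggregate(mus, vars_, np.zeros(10), np.full(10, 0.01))

In this sketch the poisoned client's large divergence from both reference distributions drives its softmax weight toward zero, which is the qualitative behavior the abstract's joint attention mechanism targets.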
Keywords: Bayes methods; Training; Servers; Data models; Robustness; Distributed databases; Uplink; Convergence; Computational modeling; Recurrent neural networks; Federated learning; Byzantine attack; dropout; defense; robust aggregation
DOI: 10.1109/TIFS.2025.3536777
Indexed by: SCI
Language: English
Funding Projects: National Key Research and Development Program of China [2021YFB2900102]; National Natural Science Foundation of China [62472410]; National Natural Science Foundation of China [62072436]; National Science Fund for Distinguished Young Scholars of China [62425201]
WOS Research Areas: Computer Science; Engineering
WOS Categories: Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS Accession Number: WOS:001438166200008
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/40715
Collection: Journal Papers of the Institute of Computing Technology, Chinese Academy of Sciences (English)
Corresponding Author: Liu, Min
Affiliations:
1. Chinese Acad Sci, Inst Comp Technol, Beijing 100045, Peoples R China
2. Zhongguancun Lab, Beijing 100086, Peoples R China
3. Tsinghua Univ, Inst Network Sci & Cyberspace, Beijing, Peoples R China
4. Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol (BNRist), Beijing, Peoples R China
5. Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714: Xue, Jingjing, Sun, Sheng, Liu, Min, et al. Enhancing Federated Learning Robustness Using Locally Benignity-Assessable Bayesian Dropout[J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20: 2464-2479.
APA: Xue, Jingjing, Sun, Sheng, Liu, Min, Li, Qi, & Xu, Ke. (2025). Enhancing Federated Learning Robustness Using Locally Benignity-Assessable Bayesian Dropout. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 20, 2464-2479.
MLA: Xue, Jingjing, et al. "Enhancing Federated Learning Robustness Using Locally Benignity-Assessable Bayesian Dropout". IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 20 (2025): 2464-2479.
Files in This Item:
No files are associated with this item.