Institute of Computing Technology, Chinese Academy of Sciences IR
Enhancing Federated Learning Robustness Using Locally Benignity-Assessable Bayesian Dropout
Xue, Jingjing1; Sun, Sheng1; Liu, Min2; Li, Qi3,4; Xu, Ke2,5
2025
Journal | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
ISSN | 1556-6013
Volume | 20
Pages | 2464-2479
Abstract | Federated Learning (FL) has emerged as a privacy-preserving training paradigm, which enables distributed devices to jointly learn a shared model without raw data sharing. However, the inaccessible client-side data and unverifiable local training leave FL vulnerable to Byzantine attacks. Most defense strategies focus on penalizing malicious clients in server-side aggregation and ignore client-side poisoning assessment of weight units, failing to maintain robustness and convergence in non-IID settings. In this paper, we propose Federated learning with Benignity-assessable Bayesian Dropout and variational Attention (FedBDA) to achieve local robust training based on fine-grained benignity indicators and guarantee global robustness over non-IID data. Specifically, FedBDA integrates the variational-inference interpretation of dropout into local training, where each client individually quantifies the benign degree of its weight units to determine a resilient dropping pattern for the local Bayesian model, enabling client-side robust training with Bayesian interpretability. To accommodate the variational distributions of local Bayesian models and globally assess their benign potential, we design a joint attention mechanism based on the Jensen-Shannon divergence among local, global, and median distributions for robust weighted aggregation. Theoretical analysis proves the robustness and convergence of FedBDA. We conduct extensive experiments on four benchmark datasets with five typical attacks, and the results demonstrate that FedBDA outperforms status quo approaches in model performance and running efficiency.
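As a reading aid, the divergence-based aggregation sketched in the abstract can be illustrated with a minimal Python example: clients whose distributions sit far (in Jensen-Shannon divergence) from both the global mean and the coordinate-wise median receive a small softmax attention weight. This assumes each client's variational distribution has been discretized into a probability vector; the function names, the joint score, and the temperature parameter are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def attention_weights(local_dists, global_dist, median_dist, temperature=1.0):
    """Score each client by its joint JS divergence to the global and median
    reference distributions; a large divergence (a likely poisoned update)
    yields a small softmax aggregation weight."""
    scores = np.array([
        -(js_divergence(d, global_dist) + js_divergence(d, median_dist))
        for d in local_dists
    ]) / temperature
    exp = np.exp(scores - scores.max())  # numerically stable softmax
    return exp / exp.sum()

# Toy usage: three similar clients and one divergent (suspicious) client.
locals_ = [np.array([0.50, 0.30, 0.20]),
           np.array([0.45, 0.35, 0.20]),
           np.array([0.50, 0.25, 0.25]),
           np.array([0.05, 0.05, 0.90])]  # outlier gets down-weighted
global_d = np.mean(locals_, axis=0)
median_d = np.median(locals_, axis=0)
print(attention_weights(locals_, global_d, median_d))
```

Combining distance to both the mean and the median is what makes the weighting Byzantine-tolerant in this sketch: the mean can be dragged by attackers, but the median cannot, so an update must stay close to both to earn weight.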
Keywords | Bayes methods ; Training ; Servers ; Data models ; Robustness ; Distributed databases ; Uplink ; Convergence ; Computational modeling ; Recurrent neural networks ; Federated learning ; Byzantine attack ; dropout ; defense ; robust aggregation
DOI | 10.1109/TIFS.2025.3536777 |
Indexed By | SCI
Language | English
Funding Project | National Key Research and Development Program of China [2021YFB2900102] ; National Natural Science Foundation of China [62472410] ; National Natural Science Foundation of China [62072436] ; National Science Fund for Distinguished Young Scholars of China [62425201]
WOS Research Area | Computer Science ; Engineering
WOS Subject | Computer Science, Theory & Methods ; Engineering, Electrical & Electronic
WOS ID | WOS:001438166200008
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Document Type | Journal article
Identifier | http://119.78.100.204/handle/2XEOYT63/40715
Collection | Journal Papers of the Institute of Computing Technology, Chinese Academy of Sciences (English)
Corresponding Author | Liu, Min
Affiliation | 1. Chinese Acad Sci, Inst Comp Technol, Beijing 100045, Peoples R China; 2. Zhongguancun Lab, Beijing 100086, Peoples R China; 3. Tsinghua Univ, Inst Network Sci & Cyberspace, Beijing, Peoples R China; 4. Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol (BNRist), Beijing, Peoples R China; 5. Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100190, Peoples R China
Recommended Citation (GB/T 7714) | Xue, Jingjing, Sun, Sheng, Liu, Min, et al. Enhancing Federated Learning Robustness Using Locally Benignity-Assessable Bayesian Dropout[J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20: 2464-2479.
APA | Xue, Jingjing, Sun, Sheng, Liu, Min, Li, Qi, & Xu, Ke. (2025). Enhancing Federated Learning Robustness Using Locally Benignity-Assessable Bayesian Dropout. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 20, 2464-2479.
MLA | Xue, Jingjing, et al. "Enhancing Federated Learning Robustness Using Locally Benignity-Assessable Bayesian Dropout." IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 20 (2025): 2464-2479.
Files in This Item | No files are associated with this item.