The aggregation model protection algorithm in scenarios with a majority of malicious participants
Abstract:
Privacy-preserving federated learning can help multiple participants build a machine learning model. However, this method struggles to defend against poisoning attacks when malicious participants are in the majority. Additionally, users or servers may privately sell the aggregated model. To address these issues, a secure aggregation scheme is proposed that resists a majority of malicious participants while protecting the aggregated result from leakage. In the training phase, participants use differential privacy noise and random numbers to protect their local models. Then, each participant tests the accuracy of the other participants' differentially private models and records the results in a vector. Finally, the participants and the server execute an oblivious transfer protocol to obtain the aggregated model. Security and correctness are proved through a security analysis. The experimental results show that the algorithm maintains good detection ability even when malicious participants are in the majority, and ensures fairness among participants to some extent.
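The training-phase protection described in the abstract (differential-privacy noise plus random numbers applied to each local model weight, with the randomness later cancelled at aggregation) can be sketched roughly as below. The Laplace mechanism, the seeded mask stream, and all parameter values here are illustrative assumptions, not the paper's exact construction:

```python
import math
import random

def laplace_noise(rng, scale):
    # Zero-mean Laplace(scale) variate via inverse-CDF sampling.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def mask_model(weights, epsilon, sensitivity, mask_seed):
    """Perturb each local weight with Laplace DP noise and add a seeded
    random mask that the aggregation side can later reproduce and cancel."""
    dp_rng = random.Random()             # fresh randomness for DP noise
    mask_rng = random.Random(mask_seed)  # reproducible mask stream
    scale = sensitivity / epsilon        # Laplace scale b = Δf / ε
    return [w + laplace_noise(dp_rng, scale) + mask_rng.uniform(-1.0, 1.0)
            for w in weights]

def unmask(protected, mask_seed):
    """Strip the reproducible mask, leaving only the DP-noised weights."""
    mask_rng = random.Random(mask_seed)
    return [p - mask_rng.uniform(-1.0, 1.0) for p in protected]
```

For example, a participant could publish `mask_model(w, 10.0, 0.01, seed)`; only a party that learns `seed` (e.g. via the scheme's oblivious transfer step) can remove the mask, and even then only the DP-noised model remains.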
Authors:
Zhang En, Gao Ting, Huang Yuchen
Affiliations:
College of Computer and Information Engineering, Henan Normal University; Henan Engineering Laboratory of Smart Business and Internet of Things Technology, Henan Normal University
Cite this article:
Zhang En, Gao Ting, Huang Yuchen. The aggregation model protection algorithm in scenarios with a majority of malicious participants [J]. Journal of Henan Normal University (Natural Science Edition), 2025, 53(4): 58-65. DOI: 10.16366/j.cnki.1000-2367.2024.04.12.0001.
Funds:
National Natural Science Foundation of China; Henan Province Science and Technology Research Project
Keywords:
federated learning; privacy preserving; oblivious transfer; homomorphic hash
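The "homomorphic hash" keyword refers to a hash that commutes with aggregation, so a server can verify a sum of model updates against per-participant hashes without seeing the updates. A toy multiplicative sketch, with deliberately tiny, insecure parameters chosen only for illustration:

```python
# Toy additively-homomorphic hash: h(x) = g^x mod p.
# P and G are illustrative; real schemes use large, carefully chosen groups.
P = 2_147_483_647  # Mersenne prime 2^31 - 1
G = 5              # assumed base for the sketch

def h(x: int) -> int:
    return pow(G, x, P)

# Homomorphism: h(a) * h(b) mod p == h(a + b), so the hash of an
# aggregated update equals the product of the individual hashes.
```

This property (`h(a) * h(b) % P == h(a + b)`) is what lets aggregation be checked without revealing the summands.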
CLC number:
TP309.2