Fundamental Limits of Information-Theoretic Secure Aggregation
Description
Secure aggregation is a fundamental cryptographic technique used in distributed machine learning paradigms, such as federated learning, enabling a central server to compute the sum of users' inputs without learning anything about their individual data.
This seminar explores the fundamental information-theoretic limits of secure aggregation by analyzing two critical extensions to the standard model. The first paper [1] investigates a two-round communication protocol for handling unknown user dropouts, characterizing the optimal communication cost required. The second paper [2] extends the traditional client-server architecture to a hierarchical network (clients, relays, and server). It introduces a "relay security" constraint to keep intermediate nodes oblivious to user inputs and establishes the optimal trade-off between communication efficiency and key generation efficiency.
[1]. Zhao, Yizhou, and Hua Sun. "Information theoretic secure aggregation with user dropouts." IEEE Transactions on Information Theory 68.11 (2022): 7471-7484.
[2]. Zhang, Xiang, et al. "Optimal communication and key rate region for hierarchical secure aggregation with user collusion." IEEE Transactions on Information Theory 72.2 (2025): 1030-1050.
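The core masking idea behind information-theoretic secure aggregation can be sketched with correlated keys that sum to zero: each user one-time-pads its input, and the pads cancel in the server's sum. The snippet below is a toy sketch with hypothetical helper names, not the dropout-tolerant or hierarchical protocols of [1] and [2].

```python
# Toy sketch of secure aggregation via additive one-time-pad masking over a
# prime field. Helper names and parameters are illustrative assumptions.
import secrets

P = 2**61 - 1  # prime modulus of the field

def make_zero_sum_keys(num_users, length):
    """Generate uniformly random keys Z_1..Z_n with Z_1 + ... + Z_n = 0 (mod P)."""
    keys = [[secrets.randbelow(P) for _ in range(length)] for _ in range(num_users - 1)]
    last = [(-sum(col)) % P for col in zip(*keys)]  # last key cancels the rest
    keys.append(last)
    return keys

def mask(update, key):
    """Each user uploads its update plus its key; the key acts as a one-time pad."""
    return [(u + k) % P for u, k in zip(update, key)]

def aggregate(masked_updates):
    """Server sums the masked uploads; the zero-sum keys cancel, leaving the sum."""
    return [sum(col) % P for col in zip(*masked_updates)]

updates = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
keys = make_zero_sum_keys(3, 3)
masked = [mask(u, k) for u, k in zip(updates, keys)]
print(aggregate(masked))  # [6, 12, 18]: the element-wise sum and nothing else
```

Because each key is uniform and independent of the input, every individual upload looks uniformly random to the server; only the sum is revealed.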
Federated learning (FL) makes it possible to train a machine learning model in a distributed manner: the training data are collected and stored locally by users such as mobile devices or institutions. Training is coordinated by a central server and performed iteratively. In each iteration, the server sends the current global model to the users, who update their local models and send the local updates to the server for aggregation.
FL was proposed to protect users' sensitive data, since the training data never leave the user devices. However, prior work has shown that the local updates still leak information about the local datasets. To address this leakage, SecAgg [1] was proposed. Secure aggregation ensures that the server only obtains the aggregate of the local updates rather than each individual update.
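The pairwise-masking idea underlying SecAgg can be illustrated in a few lines. The sketch below is a simplified toy version with hypothetical helper names; the actual protocol [1] additionally uses key agreement and secret sharing so that masks can be reconstructed when users drop out.

```python
# Toy sketch of SecAgg-style pairwise masking (illustrative assumptions only).
import random

P = 2**31 - 1

def prg(seed, length):
    """Deterministic pseudorandom vector expanded from a shared pairwise seed."""
    rng = random.Random(seed)
    return [rng.randrange(P) for _ in range(length)]

def masked_update(i, update, pair_seeds, length):
    """User i adds the mask shared with each j > i and subtracts it for each j < i."""
    x = list(update)
    for j, seed in pair_seeds[i].items():
        m = prg(seed, length)
        sign = 1 if i < j else -1
        x = [(v + sign * mv) % P for v, mv in zip(x, m)]
    return x

n, d = 3, 4
updates = [[i + 1] * d for i in range(n)]  # users' local updates
seeds = {(i, j): random.randrange(P) for i in range(n) for j in range(i + 1, n)}
pair_seeds = [{j: seeds[tuple(sorted((i, j)))] for j in range(n) if j != i}
              for i in range(n)]
uploads = [masked_update(i, updates[i], pair_seeds, d) for i in range(n)]
aggregate = [sum(col) % P for col in zip(*uploads)]
print(aggregate)  # [6, 6, 6, 6]: pairwise masks cancel in the sum
```

Each pair of users shares one seed, so for every pair one user adds the mask and the other subtracts it, and the masks vanish exactly when the server sums all uploads.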
However, recent work [2] has shown that SecAgg only preserves the users' privacy within a single training round. Due to user selection in federated learning, the server can recover individual local models of the users by observing the aggregated models over multiple training rounds.
The goal of this seminar is to study and understand SecAgg [1], the multi-round privacy leakage it suffers from, and how this problem is solved in [2].
[1]. Bonawitz, Keith, et al. "Practical secure aggregation for privacy-preserving machine learning." Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017.
[2]. So, Jinhyun, et al. "Securing secure aggregation: Mitigating multi-round privacy leakage in federated learning." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 8. 2023.
Joint Resource Allocation and Incentive Mechanism Design for Edge-Assisted Blockchain Federated Learning
Description
Blockchain-based Federated Learning (BFL) ensures data privacy and security but faces two critical challenges in practical deployments: (1) Resource Constraints: The dual computational burden of local model training and blockchain consensus overwhelms resource-constrained terminals; (2) Incentive Mechanism Deficiencies: The system lacks an effective dynamic economic scheme to motivate terminals to continuously contribute high-quality data and computing power.
To address these two problems, this thesis proposes an edge-assisted BFL framework. First, we introduce a computation offloading mechanism that assists the terminals: by dynamically allocating computing tasks to edge servers, it alleviates the terminals' resource bottleneck. Second, we introduce a two-layer Stackelberg game to solve the incentive design problem, establishing a dynamic economic balance among the Model Owner (MO), the terminals, and the edge nodes.
By integrating these two solutions, we construct a joint resource allocation and incentive framework. To solve this high-dimensional optimization problem while preserving data privacy, we employ a distributed algorithm based on the Alternating Direction Method of Multipliers (ADMM) to achieve the Nash equilibrium. Simulation results demonstrate that, compared to baseline schemes, the proposed joint mechanism effectively mitigates computational delays, accelerates model convergence, and significantly enhances the overall utilities of participants, providing a robust and fair design for BFL systems.
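As a rough illustration of the two-layer incentive structure, the sketch below shows a leader (the MO) anticipating the followers' (terminals') best responses when setting a reward rate. The utility functions and parameters are made-up assumptions for illustration, not the model or the ADMM solver of the thesis.

```python
# Hypothetical leader-follower (Stackelberg) reward game sketch.
# Followers maximize u_i(x) = r*x - c_i*x^2; the leader anticipates this.

costs = [1.0, 2.0, 4.0]  # terminals' effort cost coefficients c_i (assumed)

def follower_effort(r, c):
    """Best response of a terminal: maximize r*x - c*x^2, giving x = r / (2c)."""
    return r / (2 * c)

def leader_utility(r, value_per_effort=10.0):
    """The MO gains value from total effort and pays r per unit of effort."""
    total = sum(follower_effort(r, c) for c in costs)
    return value_per_effort * total - r * total

# The leader plugs in the anticipated best responses and picks r by grid search.
best_r = max((r / 100 for r in range(1, 1001)), key=leader_utility)
print(round(best_r, 2))  # 5.0, i.e. value_per_effort / 2, maximizing (10 - r) * r * K
```

With quadratic costs the equilibrium has a closed form (r equals half the leader's marginal value); the grid search stands in for the distributed ADMM computation used in the thesis.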
Due to the inherent data heterogeneity, partial client participation, and the direct impact of the client updates on the global model, fairness and robustness are two important concerns in federated learning.
Fairness can be evaluated by the performance (accuracy) of the global model on clients' local data, while robustness measures the system's resilience to misbehavior from malicious clients.
Statistical heterogeneity often underlies the tension between accuracy, fairness, and robustness. Robust aggregation rules mitigate the impact of outliers, but in doing so they may also filter out informative updates and hurt fairness; conversely, up-weighting rare clients to improve fairness makes the model prone to overfitting to corrupted devices.
In this seminar, you will study strategies to tackle fairness and robustness simultaneously in the federated learning setting. The following papers are a good starting point.
Li, Tian, et al. "Fair resource allocation in federated learning." arXiv preprint arXiv:1905.10497 (2019).
Blanchard, Peva, et al. "Machine learning with adversaries: Byzantine tolerant gradient descent." Advances in Neural Information Processing Systems 30 (2017).
Li, Tian, et al. "Ditto: Fair and robust federated learning through personalization." International Conference on Machine Learning. PMLR, 2021.
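The robustness side of this tension can be seen with the simplest robust rule, the coordinate-wise median: a single Byzantine client can drag the mean arbitrarily far, while the median stays close to the honest updates. This is a toy illustration, not the (more refined) methods of the listed papers such as Krum or q-FFL.

```python
# One Byzantine client corrupts the coordinate-wise mean but not the median.
import statistics

honest = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.0, 1.0]]
byzantine = [[1000.0, -1000.0]]  # a single malicious update
updates = honest + byzantine

mean = [statistics.fmean(col) for col in zip(*updates)]
median = [statistics.median(col) for col in zip(*updates)]
print(mean)    # heavily corrupted by the outlier
print(median)  # [1.0, 1.0], unaffected by the single malicious update
```

The flip side, as noted above, is that such rules also discard legitimately atypical updates from rare clients, which is exactly the fairness cost the seminar papers try to avoid.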
Supervisor:
Yue Xia
Publications
2025
Xia, Y.; Egger, M.; Hofmeister, C.; Bitar, R.: LoByITFL: Low Communication Secure and Private Federated Learning. International Workshop on Secure and Efficient Federated Learning In Conjunction with ACM AsiaCCS 2025 (FL-AsiaCCS’25), 2025
Xia, Y.; Jahani-Nezhad, T.; Bitar, R.: Fed-DPRoC: Communication-Efficient Differentially Private and Robust Federated Learning. The 3rd IEEE International Conference on Federated Learning Technologies and Applications (FLTA25), 2025
Xia, Y.; Jahani-Nezhad, T.; Bitar, R.: Fed-DPRoC: Communication-Efficient Differentially Private and Robust Federated Learning. Workshop on Distributed Computing, Optimization & Learning (WDCL) 2025, 2025
2024
Xia, Y.; Hofmeister, C.; Egger, M.; Bitar, R.: Byzantine-Resilient and Information-Theoretically Private Federated Learning. Munich Workshop on Coding and Cryptography (MWCC), 2024
Xia, Y.; Hofmeister, C.; Egger, M.; Bitar, R.: Byzantine-Resilient and Information-Theoretically Private Federated Learning. IEEE International Symposium on Information Theory (ISIT), 2024
Xia, Y.; Hofmeister, C.; Egger, M.; Bitar, R.: Byzantine-Resilient Secure Aggregation for Federated Learning Without Privacy Compromises. IEEE Information Theory Workshop (ITW), 2024