Secure Aggregation as a way to protect clients' inputs in grid measurements

Secure aggregation consists of computing the sum of data collected from multiple sources without disclosing the individual inputs. It has been used in EPES environments to protect user inputs. Recently, federated learning has emerged as a collaborative machine learning technology for training models across multiple parties. In the following, we discuss the suitability of secure aggregation for federated learning in the context of EPES, and in particular for the PHOENIX project.
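To make this goal concrete, the Python sketch below shows one simple way the sum can be computed without revealing individual inputs: additive pairwise masking, in which every pair of clients shares a random mask that cancels out in the aggregate. The modulus, the client values, and the helper name mask_inputs are illustrative assumptions, not part of any particular deployed scheme.

```python
import random

# Illustrative sketch of secure aggregation via additive pairwise masking
# (assumed names and parameters; not a specific deployed protocol).
MOD = 2**32  # all arithmetic is done modulo a fixed public modulus

def mask_inputs(inputs):
    """Each pair of clients (i, j) shares a random mask r: client i adds it,
    client j subtracts it, so every mask cancels out in the final sum."""
    masked = [x % MOD for x in inputs]
    n = len(inputs)
    for i in range(n):
        for j in range(i + 1, n):
            r = random.randrange(MOD)          # pairwise mask for clients i, j
            masked[i] = (masked[i] + r) % MOD
            masked[j] = (masked[j] - r) % MOD
    return masked

inputs = [12, 7, 23]                # private client measurements
masked = mask_inputs(inputs)        # each value alone looks uniformly random
# The aggregator sums the masked values and learns only the total.
assert sum(masked) % MOD == sum(inputs) % MOD
```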

The term Federated Learning (FL) was initially introduced by McMahan et al. [1] and refers to a technology that enables training machine learning models on data from different sources without the need to store the data at a central location. FL is performed in several rounds between multiple clients and a server, with an FL client installed at each data source. At the beginning, the server initialises the same model for all clients. In each FL round, the clients perform local training on their own data to improve the received model and send the updated model back to the server. The server aggregates the trained models received from the clients by averaging them and then sends the outcome back to the clients. Once the clients receive the aggregated model, a new FL round starts, repeating these steps; FL stops when the aggregated model converges. The main goal of FL is to protect the privacy of the local data while still being able to use it for training shared models. This gives FL a great advantage over other techniques that pursue the same goal (e.g., training on encrypted data): the latter add a large computational overhead, since the inputs must be encrypted and complex computations performed on the encrypted data, whereas FL only requires an averaging operation at the server. However, while FL is proposed for privacy-preserving purposes, it lacks a formal guarantee of privacy. For example, adversaries with access to the training results sent from each client to the server may be able to infer a training sample from a client's private dataset, and many types of such inference attacks on FL have been investigated. To prevent these attacks, secure aggregation protects each client's individual update while still permitting the computation of the sum of all updates.
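To illustrate the round structure just described, the sketch below implements the server-side averaging step of FedAvg [1], weighting each client's model by its local dataset size. The model dimension, the number of clients, and the dataset sizes are assumed values, and the clients' local training is replaced by a random perturbation, since the concrete model and optimiser are beyond the scope of this overview.

```python
import numpy as np

def fedavg_round(client_models, client_sizes):
    """Server step: weighted average of the clients' updated model weights."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_models, client_sizes))

rng = np.random.default_rng(0)
global_weights = np.zeros(4)        # the server initialises one shared model
for round_idx in range(3):          # a few FL rounds
    # Each client improves the received model on its local data; a random
    # perturbation stands in for the actual local training step here.
    client_models = [global_weights + 0.1 * rng.standard_normal(4)
                     for _ in range(5)]
    client_sizes = [100, 200, 150, 50, 120]   # local dataset sizes (assumed)
    global_weights = fedavg_round(client_models, client_sizes)
```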

Nowadays, the use of secure aggregation based on cryptographic schemes in federated learning is becoming increasingly popular [2]. Several federated learning frameworks, such as FATE, PaddleFL, and PySyft, already integrate these technologies. Nevertheless, these implementations are not yet practical in real-world scenarios, since they underestimate the impact of secure aggregation schemes on FL [2]. Indeed, federated learning features some unique properties and characteristics that differ from the previous applications in which secure aggregation was used.


[1] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research. PMLR, 2017.

[2] M. Mansouri, M. Önen, W. Ben Jaballah, and M. Conti. SoK: Secure Aggregation Based on Cryptographic Schemes for Federated Learning. Accepted at the Privacy Enhancing Technologies Symposium (PETS), 2023.

 



