Custode, Leonardo Lucio; Iacca, Giovanni; De Falco, Ivanoe; Scafuri, Umberto; Della Cioppa, Antonio. Model-Free-Communication Federated Neuroevolution. In: ACM Transactions on Evolutionary Learning and Optimization, ISSN 2688-299X, 2025. DOI: 10.1145/3745032
Model-Free-Communication Federated Neuroevolution
Leonardo Lucio Custode; Giovanni Iacca; Ivanoe De Falco; Umberto Scafuri; Antonio Della Cioppa
2025-01-01
Abstract
In the past few years, Federated Learning (FL) has emerged as an effective approach for training Neural Networks (NNs) over a computing network while preserving data privacy. Most existing FL approaches require defining a priori 1) a common structure for all the NNs running on the clients and 2) an explicit aggregation procedure. These can be limiting factors in cases where pre-defining such algorithmic details is difficult. Recently, NEvoFed was proposed, an FL method that leverages Neuroevolution running on the clients, in which the NN structures are heterogeneous and the aggregation is implicitly accomplished on the client side. Here, we propose MFC-NEvoFed, a novel approach to FL that does not require learning models, i.e., neural network parameters, to be distributed over the network, thus taking a step towards improved security. The only information exchanged in client/server communication is the performance of each model on local data, allowing optimal NN architectures to emerge without any kind of model aggregation. Another appealing feature of our framework is that it can be used with any Machine Learning algorithm, provided that, during the learning phase, the model updates do not depend on the input data. To assess the validity of MFC-NEvoFed, we test it on four datasets, showing that very compact NNs can be obtained without drops in performance compared to canonical FL. Finally, such compact structures allow for a step towards explainability, which is highly desirable in domains such as digital health, from which the tested datasets come.
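To make the fitness-only communication pattern described in the abstract concrete, below is a minimal sketch in Python. It is not the authors' implementation: it assumes, purely for illustration, that the server broadcasts a shared random seed, that every client can deterministically regenerate the same candidate architectures from that seed (possible because the model updates do not depend on the input data), and that only per-model fitness values travel over the network. All names (make_population, Client, Server, run_round) are hypothetical.

import random
import statistics

POP_SIZE = 8

def make_population(seed, pop_size=POP_SIZE):
    # Deterministically sample candidate NN architectures from a shared seed.
    # Because generation depends only on the seed (never on client data),
    # every client rebuilds the exact same population locally.
    rng = random.Random(seed)
    return [{"hidden_layers": rng.randint(1, 3),
             "units": rng.choice([4, 8, 16])}
            for _ in range(pop_size)]

class Client:
    def __init__(self, local_data):
        self.local_data = local_data

    def evaluate(self, seed):
        # Rebuild the shared population, then report only fitness scores:
        # no weights, gradients, or architectures ever leave the client.
        return [self.fitness(arch) for arch in make_population(seed)]

    def fitness(self, arch):
        # Placeholder: train/evaluate `arch` on self.local_data and return
        # a score such as validation accuracy.
        return 0.0

class Server:
    def __init__(self, clients):
        self.clients = clients

    def run_round(self, seed):
        # Collect per-model fitness vectors and average them across clients.
        # Selection acts on these aggregated scores, so no model-aggregation
        # step (e.g., FedAvg-style weight averaging) is needed.
        all_scores = [client.evaluate(seed) for client in self.clients]
        mean_scores = [statistics.mean(s) for s in zip(*all_scores)]
        best = max(range(len(mean_scores)), key=mean_scores.__getitem__)
        return best, mean_scores

In a full neuroevolution loop, the server would feed the aggregated scores back into selection and variation, which again would be driven only by the shared seed and the exchanged fitness values.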



