Communication-Efficient Federated Learning for Resource-Constrained Edge Environments / Hassan, Mir. - (2026 Mar 23), pp. 1-119.
Communication-Efficient Federated Learning for Resource-Constrained Edge Environments
Hassan, Mir
2026-03-23
Abstract
Federated Learning (FL) is a privacy-preserving paradigm that enables collaborative model training across distributed, sensitive data sources without centralized data collection. By keeping raw data local to clients and exchanging only model updates, FL is particularly suitable for Internet-of-Things (IoT) and edge intelligence systems, where data are inherently decentralized and privacy constraints are strict. As a result, FL has become a foundational approach for large-scale distributed learning in real-world environments.

Despite its promise, practical deployment of FL remains challenging due to data heterogeneity, limited communication resources, and unreliable client participation. In realistic IoT and edge settings, data distributions are non-IID (not independent and identically distributed), network bandwidth is constrained, and clients may participate intermittently or transmit delayed updates. These factors degrade learning performance, destabilize convergence, and limit the scalability of conventional FL algorithms, which are typically designed under synchronous and idealized assumptions.

This dissertation addresses these challenges by systematically designing FL frameworks that remain accurate, communication-efficient, and robust under realistic operating conditions. The research methodology employs a multi-level design, addressing complementary challenges across the FL pipeline while preserving data privacy. The proposed solutions are guided by two core objectives: maintaining competitive predictive performance and ensuring stable, scalable learning in heterogeneous, unreliable distributed environments.

The research presented in this dissertation yields four complementary contributions, each addressing a distinct level of the FL design space. First, FedFor, a federated multi-task learning framework with dual attention, enables effective joint optimization of related learning objectives by exploiting shared temporal representations under heterogeneous IoT data distributions. Second, GASPU, a Genetic Algorithm-based selective parameter update strategy, formulates parameter transmission as an optimization problem, reducing communication overhead through adaptive parameter selection. Third, AWSFL introduces sign-based gradient compression combined with loss-aware adaptive aggregation and error feedback, achieving high communication compression while preserving robust convergence under non-IID data. Finally, FedSyncDrop addresses system-level unreliability by introducing staleness-aware and dropout-resilient aggregation mechanisms for asynchronous FL settings with intermittent client participation.

Extensive experimental evaluations on representative datasets demonstrate that the proposed frameworks substantially reduce communication costs and improve robustness without compromising model accuracy. The results establish that multi-task coupling improves predictive performance under data heterogeneity, that adaptive parameter selection and gradient compression significantly reduce communication overhead, and that staleness-aware aggregation preserves convergence stability under irregular client participation. Together, these contributions establish a coherent, multi-level design framework for FL that integrates task-level modeling, communication-efficient updates, and system-aware aggregation. The proposed approaches advance the practical applicability of FL in heterogeneous, bandwidth-constrained, and intermittently connected IoT and edge intelligence environments, and provide principled design insights for future large-scale distributed learning systems.
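To make the compression idea named in the abstract concrete: the following minimal sketch shows generic sign-based gradient compression with error feedback, the family of techniques AWSFL builds on, where a client transmits one bit per coordinate plus a single scalar magnitude and keeps the quantization error locally for the next round. This is an illustrative sketch under assumed names and hyperparameters, not AWSFL itself; in particular, the loss-aware adaptive aggregation described in the dissertation is omitted.

```python
import numpy as np

def sign_compress_with_error_feedback(grad, residual, lr=0.05):
    """One client-side round of sign compression with error feedback.

    Generic sketch (not the dissertation's exact algorithm): the client
    sends only the signs of the error-corrected update plus one scalar
    scale, and stores the quantization error for the next round.
    """
    corrected = lr * grad + residual        # fold in last round's error
    signs = np.sign(corrected)              # 1 bit per coordinate on the wire
    scale = np.abs(corrected).mean()        # single float sent with the signs
    decoded = scale * signs                 # what the server reconstructs
    residual = corrected - decoded          # error kept locally, never sent
    return signs, scale, residual

# Toy usage: compress a simulated 4-dimensional gradient over a few rounds.
rng = np.random.default_rng(0)
residual = np.zeros(4)
for _ in range(3):
    grad = rng.normal(size=4)
    signs, scale, residual = sign_compress_with_error_feedback(grad, residual)
    print(signs, round(scale, 4))
```

Because the residual re-enters the next round's update, no gradient information is permanently discarded, which is what lets sign compression retain convergence despite its extreme (roughly 32x) per-coordinate compression.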
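Likewise, the staleness-aware aggregation underlying asynchronous settings such as FedSyncDrop's can be sketched as a weighted average in which a delayed client update's influence decays with the number of rounds it is late, and dropped clients are simply absent from the average. The polynomial decay, the alpha value, and the per-sample weighting below are common choices from the asynchronous FL literature, assumed here for illustration rather than taken from the dissertation.

```python
import numpy as np

def staleness_weight(staleness, alpha=0.6):
    """Polynomial decay: a fresh update (staleness 0) gets full weight,
    delayed updates are progressively discounted (illustrative choice)."""
    return (1.0 + staleness) ** (-alpha)

def staleness_aware_aggregate(global_params, updates):
    """Aggregate (delta, num_samples, staleness) tuples from the clients
    that actually reported this round; dropouts never appear in `updates`,
    so the weighted average is taken over survivors only."""
    weights = np.array([n * staleness_weight(s) for _, n, s in updates])
    weights /= weights.sum()                 # normalize over reporting clients
    for (delta, _, _), w in zip(updates, weights):
        global_params = global_params + w * delta
    return global_params

# Toy usage: one fresh client and one client delayed by 3 rounds.
params = np.zeros(3)
updates = [(np.array([1.0, 0.0, 0.0]), 100, 0),   # fresh update
           (np.array([0.0, 1.0, 0.0]), 100, 3)]   # stale update
print(staleness_aware_aggregate(params, updates))
```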
| File | Size | Format | |
|---|---|---|---|
| _PhD_Thesis____Mir_Hassan (3).pdf (embargo until 10/03/2027; Type: Doctoral Thesis; License: All rights reserved) | 2.54 MB | Adobe PDF | View/Open |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.



