
Enhancing Communication-Efficient Federated Learning for Human Activity Recognition Through Knowledge Distillation / Bilal, Hafiz Muhammad; Hassan, Mir; Iacca, Giovanni. - (2025), pp. 1-7. (101st IEEE Vehicular Technology Conference, VTC 2025-Spring, Oslo, 17-20 June 2025) [10.1109/vtc2025-spring65109.2025.11174325].

Enhancing Communication-Efficient Federated Learning for Human Activity Recognition Through Knowledge Distillation

Hassan, Mir; Iacca, Giovanni
2025-01-01

Abstract

Human Activity Recognition (HAR) has profound applications in domains such as healthcare, wearable devices, and smart environments, where continuous monitoring is essential. However, traditional centralized learning approaches raise privacy concerns and are infeasible for resource-constrained devices due to high communication costs. Federated Learning (FL) offers a privacy-preserving solution by training models locally and aggregating updates, but it still encounters considerable communication overhead. This paper proposes an optimized FL framework that integrates model compression through Knowledge Distillation (KD) to reduce communication costs while preserving model performance. Using a Teacher-Student model architecture, the framework enables the deployment of highly compressed Student models without compromising accuracy or other performance metrics. Experimental evaluations on a smartphone-based HAR dataset show that the proposed framework achieves accuracy, precision, and recall comparable to those of the original (uncompressed) models, with up to a 75% reduction in model size (and, consequently, communication cost). This approach demonstrates scalability and feasibility for real-world HAR applications on resource-constrained devices, also laying the groundwork for efficient, privacy-preserving distributed learning in heterogeneous computational environments.
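The Teacher-Student compression summarized in the abstract relies on a knowledge-distillation objective. Below is a minimal, self-contained sketch of the standard KD loss (soft targets from the Teacher mixed with hard-label cross-entropy); the temperature `T`, mixing weight `alpha`, and pure-Python implementation are illustrative assumptions, not the paper's actual code:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label, T=2.0, alpha=0.5):
    """KD loss = alpha * T^2 * KL(teacher || student) + (1 - alpha) * CE(student, label).

    T and alpha are illustrative defaults, not values taken from the paper.
    """
    p_teacher = softmax(teacher_logits, T)   # soft targets from the Teacher
    p_student = softmax(student_logits, T)   # softened Student distribution
    # KL divergence between the two softened distributions, rescaled by T^2
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_teacher, p_student))
    # Standard cross-entropy against the ground-truth hard label
    ce = -math.log(softmax(student_logits)[label])
    return alpha * (T * T) * kl + (1.0 - alpha) * ce
```

When the Student's logits match the Teacher's exactly, the KL term vanishes and only the (weighted) hard-label cross-entropy remains, so the loss smoothly trades off imitating the Teacher against fitting the labels.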
Year: 2025
Conference: 2025 IEEE 101st Vehicular Technology Conference (VTC2025-Spring)
Place of publication: New York, NY, USA
Publisher: IEEE
ISBN: 9798331531478
Bilal, Hafiz Muhammad; Hassan, Mir; Iacca, Giovanni
Files in this record:

File: 2025001033.pdf
Access: open access
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 410.04 kB
Format: Adobe PDF
File: Enhancing_Communication-Efficient_Federated_Learning_for_Human_Activity_Recognition_Through_Knowledge_Distillation.pdf
Access: archive administrators only
Type: Publisher's layout (editorial version)
License: All rights reserved
Size: 694.19 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/464531
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0
  • OpenAlex: 0