
Budget-aware pruning: Handling multiple domains with less parameters / dos Santos, Samuel Felipe; Berriel, Rodrigo; Oliveira-Santos, Thiago; Sebe, Nicu; Almeida, Jurandy. - In: PATTERN RECOGNITION. - ISSN 0031-3203. - 167:(2025). [10.1016/j.patcog.2025.111714]

Budget-aware pruning: Handling multiple domains with less parameters

2025-01-01

Abstract

Deep learning has achieved state-of-the-art performance on several computer vision tasks and domains. Nevertheless, it still has a high computational cost and demands a significant number of parameters. Such requirements hinder its use in resource-limited environments and demand both software and hardware optimization. Another limitation is that deep models are usually specialized in a single domain or task, requiring them to learn and store new parameters for each new one. Multi-Domain Learning (MDL) attempts to solve this problem by learning a single model capable of performing well in multiple domains. Nevertheless, such models are usually larger than the baseline for a single domain. This work tackles both of these problems: our objective is to prune models capable of handling multiple domains according to a user-defined budget, making them more computationally affordable while keeping a similar classification performance. We achieve this by encouraging all domains to use a similar subset of filters from the baseline model, up to the amount defined by the user's budget. Then, filters that are not used by any domain are pruned from the network. The proposed approach innovates by better adapting to resource-limited devices and is one of the few works that handle multiple domains at test time with fewer parameters and lower computational complexity than the baseline model for a single domain.
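The core idea of the abstract — keeping only filters used by at least one domain, up to a user-defined budget — can be illustrated with a minimal sketch. This is not the paper's implementation; the function `select_filters`, the mask format, and the tie-breaking rule (prefer filters shared by more domains) are assumptions made here for illustration only.

```python
# Illustrative sketch (not the authors' code): given per-domain binary
# filter-usage masks for one convolutional layer, keep the filters used
# by at least one domain, truncated to a user-defined budget.
import numpy as np

def select_filters(domain_masks: np.ndarray, budget: int) -> np.ndarray:
    """domain_masks: (num_domains, num_filters) boolean usage masks.
    Returns sorted indices of the filters to keep; all other filters
    would be pruned from the layer."""
    usage = domain_masks.sum(axis=0)      # how many domains use each filter
    used = np.flatnonzero(usage > 0)      # union of filters across domains
    if len(used) <= budget:
        return used
    # Over budget: keep the filters shared by the most domains
    # (an assumed tie-breaking heuristic, not the paper's criterion).
    order = np.argsort(-usage[used], kind="stable")
    return np.sort(used[order[:budget]])

# Toy example: 3 domains, 5 filters in the layer.
masks = np.array([[1, 1, 0, 0, 1],
                  [1, 0, 0, 1, 1],
                  [1, 1, 0, 0, 0]], dtype=bool)
print(select_filters(masks, budget=3).tolist())  # [0, 1, 4]
```

With a budget of 3, the union of used filters {0, 1, 3, 4} exceeds the budget, so the sketch keeps the three filters shared by the most domains; filter 2, unused by every domain, is always pruned.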
2025
dos Santos, Samuel Felipe; Berriel, Rodrigo; Oliveira-Santos, Thiago; Sebe, Nicu; Almeida, Jurandy
Files in this item:
File: 1-s2.0-S0031320325003747-main (2).pdf
Access: Archive administrators only
Type: Publisher's version (Publisher's layout)
License: All rights reserved
Size: 1.54 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/454432
Citations
  • PubMed Central: ND
  • Scopus: 0
  • Web of Science: 0
  • OpenAlex: ND